Add Batch 494be22f-1c7e-45ea-af0f-fb9063e7b3a5
This view is limited to 50 files because it contains too many changes. See raw diff.
- adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_content_list.json +3 -0
- adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_model.json +3 -0
- adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_origin.pdf +3 -0
- adversarialtextgenerationviasequencecontrastdiscrimination/full.md +238 -0
- adversarialtextgenerationviasequencecontrastdiscrimination/images.zip +3 -0
- adversarialtextgenerationviasequencecontrastdiscrimination/layout.json +3 -0
- adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_content_list.json +3 -0
- adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_model.json +3 -0
- adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_origin.pdf +3 -0
- adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/full.md +284 -0
- adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/images.zip +3 -0
- adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/layout.json +3 -0
- airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_content_list.json +3 -0
- airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_model.json +3 -0
- airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_origin.pdf +3 -0
- airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/full.md +338 -0
- airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/images.zip +3 -0
- airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/layout.json +3 -0
- anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_content_list.json +3 -0
- anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_model.json +3 -0
- anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_origin.pdf +3 -0
- anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/full.md +341 -0
- anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/images.zip +3 -0
- anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/layout.json +3 -0
- anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_content_list.json +3 -0
- anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_model.json +3 -0
- anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_origin.pdf +3 -0
- anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/full.md +364 -0
- anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/images.zip +3 -0
- anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/layout.json +3 -0
- anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_content_list.json +3 -0
- anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_model.json +3 -0
- anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_origin.pdf +3 -0
- anempiricalinvestigationofbeamawaretraininginsupertagging/full.md +265 -0
- anempiricalinvestigationofbeamawaretraininginsupertagging/images.zip +3 -0
- anempiricalinvestigationofbeamawaretraininginsupertagging/layout.json +3 -0
- anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_content_list.json +3 -0
- anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_model.json +3 -0
- anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_origin.pdf +3 -0
- anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/full.md +146 -0
- anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/images.zip +3 -0
- anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/layout.json +3 -0
- anevaluationmethodfordiachronicwordsenseinduction/7b2eec2b-e541-4104-975d-2d7cd4d77ca3_content_list.json +3 -0
- anevaluationmethodfordiachronicwordsenseinduction/7b2eec2b-e541-4104-975d-2d7cd4d77ca3_model.json +3 -0
- anevaluationmethodfordiachronicwordsenseinduction/7b2eec2b-e541-4104-975d-2d7cd4d77ca3_origin.pdf +3 -0
- anevaluationmethodfordiachronicwordsenseinduction/full.md +324 -0
- anevaluationmethodfordiachronicwordsenseinduction/images.zip +3 -0
- anevaluationmethodfordiachronicwordsenseinduction/layout.json +3 -0
- aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_content_list.json +3 -0
- aninstancelevelapproachforshallowsemanticparsinginscientificproceduraltext/bd003223-e145-4ed7-9ad1-1c55f61df622_model.json +3 -0
adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26ce1b873235c4242bec71db6dc0ac56a740449bb326f9e6a0ecbf2125fcc836
+size 46969
adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a6adf5b1c8c9fc09426f2818e607e79fbf59edbb8ed77e92d120ac42d1bff77
+size 58051
adversarialtextgenerationviasequencecontrastdiscrimination/a4d95139-771b-47f6-a1f2-6ceae119623e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:986bbf4577c1151c345cbebf57ada2f888599c64ffaa2746bc4437da57bf360c
+size 366546
adversarialtextgenerationviasequencecontrastdiscrimination/full.md
ADDED
@@ -0,0 +1,238 @@
# Adversarial Text Generation via Sequence Contrast Discrimination

Ke Wang, Xiaojun Wan

Wangxuan Institute of Computer Technology, Peking University

The MOE Key Laboratory of Computational Linguistics, Peking University

{wangke17, wanxiaojun}@pku.edu.cn

# Abstract
In this paper, we propose a sequence contrast loss driven text generation framework, which learns the difference between real texts and generated texts and exploits that difference to improve generation. Specifically, our discriminator contains a discriminative sequence generator instead of a binary classifier, and measures the 'relative realism' of generated texts against real texts by making use of both simultaneously. Moreover, our generator uses discriminative sequences to improve itself directly, which not only replaces the gradient propagation process from the discriminator to the generator, but also avoids the time-consuming sampling process used to estimate rewards in some previous methods. We conduct extensive experiments with various metrics, substantiating that our framework brings improvements in terms of training stability and the quality of generated texts.

# 1 Introduction

Generating human-like texts has always been a fundamental problem in natural language processing, essential to many applications such as machine translation (Bahdanau et al., 2015), image captioning (Fang et al., 2015), and dialogue systems (Reschke et al., 2013). Currently, the dominant approaches are auto-regressive models, such as Recurrent Neural Networks (Mikolov et al., 2011), Transformer (Vaswani et al., 2017), and Convolutional Seq2Seq (Gehring et al., 2017), which have achieved impressive performance on language generation using Maximum Likelihood Estimation (MLE). Nevertheless, some studies reveal that such settings have three main drawbacks. First, MLE makes the generative model extremely sensitive to rare samples, which results in the learned distribution being too conservative (Feng and McCulloch, 1992; Ahmad and Ahmad, 2019). Second, auto-regressive generation models suffer from exposure bias (Bengio et al., 2015) due to their dependence on previously sampled outputs during the inference phase. Third, they only consider a word-level objective and may fail to guarantee sentence-level goals such as realism, semantic consistency, and long-range semantic structure (Ranzato et al., 2016).
Recently, many studies (Yu et al., 2017; Che et al., 2017; Lin et al., 2017; Zhang et al., 2017; Chen et al., 2018; Wang and Wan, 2018; Ke et al., 2019; Nie et al., 2019; Wang and Wan, 2019; Wang et al., 2019) have tried to apply generative adversarial networks (GANs) (Goodfellow et al., 2014) to text generation, using discriminator networks as loss functions to ensure these higher-level objectives. However, the discreteness of texts makes it difficult for gradients to pass from the discriminator to the generator. Current solutions are mainly based on reinforcement learning (Yu et al., 2017) or differentiable sampling functions (Jang et al., 2017). In addition, given the complexity of language, the generator is easily much weaker than the discriminator in practice, making it difficult to obtain a clear optimization direction from the discriminator and to learn from scratch.

In this paper, borrowing techniques from contrastive learning (Hadsell et al., 2006; Henaff et al., 2019; He et al., 2019; Chen et al., 2020), we propose a sequence contrast loss driven adversarial learning framework for text generation, SLGAN. In our framework, the discriminator $D$ is not just a simple binary classifier, but a Siamese network composed of a sequence generator $G_{d}$, which can provide sequences with discriminative information. In other words, our discriminator $D$ measures the gap between the generated texts and the real texts, rather than simply predicting the probability of the data generated by $G$ being real. Specifically, these discriminative sequences with well-formed textual structure information can be used to measure the 'relative realism' (sequence contrast loss) of the generated texts against the real texts, and to further improve the generator $G$. Intuitively, the discriminator can not only tell whether the text generated by the generator is good, but also teach the generator in which direction to generate better text. Our motivations are two-fold: 1) Our discriminator can provide better discriminative information to the generator because it observes both 'fake' and 'real' data simultaneously. 2) Compared with other gradient propagation strategies based on reinforcement learning or differentiable sampling functions, the contrastive loss between generated sequences and discriminative sequences improves the generator more time-efficiently and steadily.

Figure 1: Illustration of SLGAN. $\pmb{x}$ is the real text sampled from $\mathcal{D}$. $\pmb{y}$ is the text generated by $G$, and $\hat{\pmb{y}}$ is the discriminative text generated by $G_{d}$.

We conduct experiments on both synthetic and real datasets, and use various metrics (i.e., fluency, novelty, generalization, diversity, human evaluation, and learning curve) to show that our approach not only produces more realistic samples but also greatly stabilizes the adversarial training process.
# 2 Method

The architecture of our proposed model is depicted in Figure 1. The framework comprises two adversarial learning objectives: generator learning and discriminator learning. The goal of the discriminator $D$ is to learn the difference ('relative realism') between fake texts ($\pmb{y}$, texts generated by the generator) and real texts ($\pmb{x}$), while the goal of the generator $G$ is to use this difference (discriminative sequences) to generate more realistic texts; its loss contains a word-level term ($\mathcal{L}_{mle}$) and a sentence-level term ($\hat{\mathcal{L}}_{adv}$).

To achieve these goals, we make two design choices. One is that the discriminator $D$ observes and uses both 'fake' and 'real' data at the same time, rather than considering them in an alternating fashion. The other is that the inside of the discriminator is not a binary classifier, but a sequence generator $G_{d}$. $G_{d}$ aims to generate a discriminative sequence $\hat{\pmb{y}}$, which can be considered a sequence representation used for better measurement of 'relative realism'. To some extent, $G_{d}$ can be seen as an 'amplifier': the closer the input text is to real texts, the less it changes. Further, $\hat{\pmb{y}}$ can not only be used to measure the 'relative realism' of generated texts against real texts, but can also directly affect $G$ through the sequence contrast loss. By calculating this contrastive loss, the gradient back-propagation process from the discriminator to the generator is avoided, which is of significant importance in adversarial learning.

Discriminator Learning: The contrastive loss of our discriminator takes the output of the discriminative sequence generator $G_{d}$ for a positive example (real texts $\pmb{x}$), calculates its similarity to an example of the same class ($\pmb{x}$), and contrasts that with its distance to negative examples ($\pmb{y}$, texts generated by the generator):
$$
\mathcal{L}_{\text{discriminator}} = \lambda_{i} \operatorname{Sim}_{s} - \operatorname{Sim}_{d}, \tag{1}
$$

where $\operatorname{Sim}_{d}$ and $\operatorname{Sim}_{s}$ are the similarity measures of a pair of dissimilar points and a pair of similar points, respectively. $\lambda_{i} = \max \{\lambda, 1 - \alpha i\}$ is the coefficient that balances the two terms at the $i$-th epoch. It is worth noting that Eq. 1 degenerates into the vanilla GAN's adversarial loss when $\lambda_{i} = 0$.
We use the KL-divergence to measure how similar the word distributions of two generated sequences are to each other, and the inter-class loss $\operatorname{Sim}_{d}$ is:

$$
\operatorname{Sim}_{d} = \mathcal{L}_{adv} = \mathbb{E}_{\boldsymbol{x} \sim \mathcal{D},\, z \sim \mathcal{P}} \big[ \| G_{d}(\boldsymbol{x}; \theta_{d}) - G_{d}(G(z; \theta_{g}); \theta_{d}) \|_{kl} \big], \tag{2}
$$

where $z$ is sampled from a noise distribution $\mathcal{P}$. The output of $G_{d}$ is not a probability between 0 and 1, but a representation with more discriminative information. That is, the generator $G_{d}$ in our discriminator takes as input the real data $\pmb{x}$ or the fake data $G(z;\theta_g)$, and then generates a word sequence $\hat{\pmb{y}}$ for each input.

In addition, we want $\hat{\pmb{y}}$ to be meaningful, so that it can be used not only to discriminate but also to represent 'realism' features.

We hence rewrite the intra-class loss $\operatorname{Sim}_{s}$ with a similar idea as:

$$
\operatorname{Sim}_{s} = \mathcal{L}_{rec} = \mathbb{E}_{\boldsymbol{x} \sim \mathcal{D}} \big[ \| G_{d}(\boldsymbol{x}; \theta_{d}) - \boldsymbol{x} \|_{kl} \big]. \tag{3}
$$

In practice, we add noise to $\pmb{x}$ by randomly replacing input words with the noise word $(<\text{unk}>)$.
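To make Eqs. 1-3 concrete, the following is a minimal PyTorch sketch of the discriminator update (not the authors' released code): the interface of `g_d`, the helper names, and the `<unk>`-noising rate are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def seq_kl(p_logits, q_probs):
    # Token-level KL divergence between per-step vocabulary distributions,
    # averaged over batch and time steps (the ||.||_kl of Eqs. 2-3).
    return F.kl_div(F.log_softmax(p_logits, dim=-1), q_probs,
                    reduction="batchmean")

def add_unk_noise(x_ids, unk_id, p=0.1):
    # Randomly replace input words with the <unk> noise token (Section 2).
    mask = torch.rand_like(x_ids, dtype=torch.float) < p
    return torch.where(mask, torch.full_like(x_ids, unk_id), x_ids)

def discriminator_loss(g_d, x_ids, y_ids, lam_i, vocab_size, unk_id):
    # g_d is assumed to map a token-id sequence to per-step vocab logits.
    out_real = g_d(add_unk_noise(x_ids, unk_id))  # G_d(x; theta_d)
    out_fake = g_d(y_ids)                         # G_d(G(z; theta_g); theta_d)
    # Sim_s (Eq. 3): G_d should reconstruct real text, so compare its output
    # distribution with the one-hot distribution of x itself.
    sim_s = seq_kl(out_real, F.one_hot(x_ids, vocab_size).float())
    # Sim_d (Eq. 2): distance between G_d's outputs on real vs. generated
    # text; the discriminator maximizes it, hence the minus sign (Eq. 1).
    sim_d = seq_kl(out_fake, F.softmax(out_real, dim=-1).detach())
    return lam_i * sim_s - sim_d
```

The coefficient $\lambda_i = \max\{\lambda, 1 - \alpha i\}$ from Eq. 1 would be recomputed each epoch before calling `discriminator_loss`.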
Generator Learning: The loss function of our generator includes two terms: $\mathcal{L}_{mle}$, which concerns word-level fitness, and $\hat{\mathcal{L}}_{adv}$, which ensures a higher level of 'realism'-resembling qualities:

$$
\mathcal{L}_{\text{generator}} = \mathcal{L}_{mle} + \hat{\lambda}_{i} \hat{\mathcal{L}}_{adv}, \tag{4}
$$

where $\hat{\lambda}_i = \hat{\lambda} (i / k)$ is the balance coefficient and $k$ is the total number of epochs.
Given a training sentence $\pmb{x} = \{x_0, \dots, x_t, \dots\}$ with length $|\pmb{x}|$, the word-level objective $\mathcal{L}_{mle}$ is to minimize the negative log-likelihood:

$$
\mathcal{L}_{mle} = \mathbb{E}_{\boldsymbol{x} \sim \mathcal{D}} \Big[ -\sum_{t=1}^{|\boldsymbol{x}|-1} \log G(x_{t} \mid \boldsymbol{x}_{0:t-1}) \Big], \tag{5}
$$

where $G(x_{t} \mid \pmb{x}_{0:t-1})$ denotes the probability that the output of $G$ is $x_{t}$ conditioned on the given prefix $\pmb{x}_{0:t-1} = \{x_0, x_1, \dots, x_{t-1}\}$ at time step $t$. In the inference phase, the generator $G$ instead takes its previously sampled output $\pmb{y}_{0:t-1}$ as the input at time step $t$. Here $G$ is an auto-regressive generation model (e.g., an RNN and its variants (Mikolov et al., 2011; Hochreiter and Schmidhuber, 1997; Chung et al., 2014), Transformer (Vaswani et al., 2017), or Convolutional Seq2Seq (Gehring et al., 2017)).
Furthermore, the other goal of generator $G$ is to minimize $\operatorname{Sim}_{d}$ in Eq. 2; the intuition is to use a discriminator network to learn a loss function for sentence-level properties (e.g., long-range semantic structure, preserving semantic consistency) over time, rather than formulating these properties explicitly. According to the discriminator's loss (Eq. 1), the closer $G(z; \theta_{g})$ is to $\pmb{x}$, the closer $G_{d}(G(z; \theta_{g}); \theta_{d})$ is to $G(z; \theta_{g})$. As such, we resort to an approximation and define the generator's adversarial loss as:

$$
\hat{\mathcal{L}}_{adv} = \mathbb{E}_{z \sim \mathcal{P}} \big[ \| G_{d}(G(z; \theta_{g}); \theta_{d}) - G(z; \theta_{g}) \|_{kl} \big]. \tag{6}
$$

In this way, we can directly guide the generation of $G$ by measuring the sequence contrast loss between the outputs of $G$ and $G_{d}$, which not only avoids the gradient back-propagation process from the discriminator to the generator, but also lets the generator use the discriminator's discriminative information more effectively.
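A matching sketch of the generator update (Eqs. 4-6), continuing the previous snippet with the same imports; `sample_with_logits` is a hypothetical helper returning sampled token ids together with the per-step logits that produced them.

```python
def generator_loss(g, g_d, x_ids, z, lam_hat_i, vocab_size):
    # Word-level term (Eq. 5): teacher-forced next-token negative
    # log-likelihood on real text.
    logits = g(x_ids[:, :-1])
    l_mle = F.cross_entropy(logits.reshape(-1, vocab_size),
                            x_ids[:, 1:].reshape(-1))
    # Sentence-level term (Eq. 6): since G_d acts as an 'amplifier' that
    # changes realistic input less, G minimizes the KL distance between its
    # own output distribution and G_d's rewrite of its sample. Treating
    # G_d's output as a fixed target avoids back-propagating through the
    # discriminator or through discrete sampling.
    y_ids, y_logits = g.sample_with_logits(z)     # hypothetical interface
    target = F.softmax(g_d(y_ids), dim=-1).detach()
    l_adv_hat = seq_kl(y_logits, target)
    # Eq. 4: combine both terms under the schedule lam_hat_i = lam_hat*(i/k).
    return l_mle + lam_hat_i * l_adv_hat
```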
# 3 Experiments

# 3.1 Setup

In this study, we use Texygen (Zhu et al., 2018), a benchmarking platform that implements a majority of GAN-based text generation models and covers a set of metrics, to standardize comparisons with other GAN models. We compare SLGAN with several typical and state-of-the-art unsupervised generic text generation models: MLE (Mikolov et al., 2011), SeqGAN (Yu et al., 2017), MaliGAN (Che et al., 2017), RankGAN (Lin et al., 2017), GSGAN (Kusner and Hernández-Lobato, 2016), TextGAN (Zhang et al., 2017), and LeakGAN (Guo et al., 2018). Without loss of generality, we evaluate our model on two benchmark datasets: a synthetic dataset and a real text dataset (COCO image captions (Lin et al., 2014)).
# 3.1.1 Implementation Details

In our model, the initial parameters of all generators follow a Gaussian distribution $\mathcal{N}(0,1)$. The total number of adversarial training epochs is 200 and the sampling temperature is set to 1.0. We set $\lambda = 1.0$ and $\alpha = 0.1$; $G_{d}$ is a seq2seq model based on a single-layer RNN-GRU with Luong attention. $\hat{\lambda}$ is set to 1.0 and the total number of epochs is $k = 200$, chosen based on performance. $G$ is a single-layer RNN-GRU network and can easily be replaced by other types of generators. We implement our model in PyTorch and train on a TITAN X graphics card.

# 3.1.2 Dataset Statistics

A summary of statistics for each dataset is provided in Table 1. For fairness, on both the synthetic and real datasets, we train all models on training sets of the same size (10,000) and use each model to generate the same number (10,000) of sentences for evaluation.
# 3.2 Synthetic Data Experiment

Here we use the synthetic dataset provided by Texygen (Zhu et al., 2018), which consists of sets of sequential tokens and can be seen as simulated data compared with real-world language data. We compare our model with various models on this dataset, as shown in Figure 2. Our model outperforms all other competitors by a large margin, and its NLL loss declines rapidly and steadily from the beginning, demonstrating that our model is more stable and time-efficient.

<table><tr><td>Datasets</td><td>#Train</td><td>#Test</td><td>#Vocab</td><td>Max-Length</td></tr><tr><td>Synthetic</td><td>10,000</td><td>10,000</td><td>5,000</td><td>20</td></tr><tr><td>Real</td><td>10,000</td><td>10,000</td><td>4,684</td><td>38</td></tr></table>

Table 1: Statistics of the synthetic and real datasets we use.

Figure 2: Illustration of learning curves. The dotted line marks the end of pre-training for all baseline models except GSGAN and TextGAN.
# 3.3 Real Data Experiment

We also conduct experiments on a real-world dataset (i.e., COCO image captions), and present a variety of evaluation methods for a comprehensive comparison.

Fluency: We show the perplexity of generated sentences in Figure 3, which indicates that our model is good at keeping sentences fluent.

Novelty: We use the novelty measure of Wang and Wan (2018) to investigate how different the generated sentences are from the training corpus. From the results in Figure 3, we observe that our model has a better ability to generate novel sequences.

Figure 3: Comparison of fluency (lower perplexity means better fluency) and novelty of generated sentences.

Figure 4: Different loss curves during the adversarial training process.
Generalization: As in Texygen, we evaluate BLEU (Papineni et al., 2002) between generated sentences and the test set to measure the generalization capacity of different models. The BLEU scores in Table 2 show that our model has rather good generalization capacity. Moreover, as the order $n$ of the $n$-gram rises, the corresponding BLEU performance of our model does not drop as fast as that of other models.

Diversity: We use Self-BLEU to evaluate how much each sentence resembles the rest of a generated collection. From Table 2, the sentences generated by our model have the lowest Self-BLEU scores, indicating the highest diversity.

Human Evaluation: We randomly extract 100 sentences from the generated sentences and hire three workers on Amazon Mechanical Turk to rate each of them on 'grammaticality', 'topicality', and 'overall' quality, where 'topicality' indicates the semantic consistency of the entire sentence. The rating scale ranges from 1 to 5, with 5 being the best. As shown in Table 2, our model outperforms several baseline models, especially in 'topicality' and overall quality.

Training Stability: We also show the different loss curves of our model during the adversarial training process in Figure 4. As can be seen, the adversarial process between $G$ and $G_{d}$ is quite stable. First, the discriminator is not powerful enough to drive the loss $\mathcal{L}_{adv}$ to 0, because it does more than simple binary prediction. Second, the generator's ability to deceive the discriminator ($\hat{\mathcal{L}}_{adv}$) keeps fluctuating; as the discriminator keeps improving, we argue that the generator's capabilities are also constantly enhanced, that is, its outputs become more similar to real texts.

<table><tr><td rowspan="2">Models</td><td colspan="4">Generalization ↑</td><td colspan="4">Diversity ↓</td><td colspan="3">Human Evaluation ↑</td></tr><tr><td>BLEU-2</td><td>BLEU-3</td><td>BLEU-4</td><td>BLEU-5</td><td>BLEU-2</td><td>BLEU-3</td><td>BLEU-4</td><td>BLEU-5</td><td>Grammaticality</td><td>Topicality</td><td>Overall</td></tr><tr><td>MLE</td><td>0.731</td><td>0.497</td><td>0.305</td><td>0.189</td><td>0.916</td><td>0.769</td><td>0.583</td><td>0.408</td><td>3.68</td><td>2.03</td><td>2.57</td></tr><tr><td>SeqGAN</td><td>0.745</td><td>0.498</td><td>0.294</td><td>0.180</td><td>0.950</td><td>0.840</td><td>0.670</td><td>0.498</td><td>3.73</td><td>3.29</td><td>3.36</td></tr><tr><td>MaliGAN</td><td>0.673</td><td>0.432</td><td>0.257</td><td>0.159</td><td>0.918</td><td>0.781</td><td>0.606</td><td>0.437</td><td>3.83</td><td>2.32</td><td>2.79</td></tr><tr><td>RankGAN</td><td>0.743</td><td>0.467</td><td>0.264</td><td>0.156</td><td>0.959</td><td>0.882</td><td>0.762</td><td>0.618</td><td>3.94</td><td>3.83</td><td>3.78</td></tr><tr><td>LeakGAN</td><td>0.746</td><td>0.528</td><td>0.355</td><td>0.230</td><td>0.966</td><td>0.913</td><td>0.848</td><td>0.780</td><td>4.08</td><td>4.04</td><td>3.96</td></tr><tr><td>TextGAN</td><td>0.593</td><td>0.463</td><td>0.277</td><td>0.207</td><td>0.942</td><td>0.931</td><td>0.804</td><td>0.746</td><td>4.23</td><td>3.46</td><td>3.99</td></tr><tr><td>SLGAN</td><td>0.753</td><td>0.502</td><td>0.348</td><td>0.251</td><td>0.751</td><td>0.573</td><td>0.422</td><td>0.313</td><td>3.93</td><td>4.29</td><td>4.16</td></tr></table>

Table 2: Results on the real dataset. $\downarrow$ means smaller is better, and $\uparrow$ the opposite. The best scores are bold and our scores are underlined. The kappa coefficient among the three workers is 0.63.
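For reference, here is a minimal NLTK-based sketch of the Self-BLEU diversity metric described above; Texygen's exact implementation may differ (e.g., in its choice of smoothing).

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(corpus, n=4):
    # Self-BLEU scores each generated sentence against all the others as
    # references; lower values indicate a more diverse collection.
    # `corpus` is a list of tokenized sentences (lists of strings).
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1
    scores = [
        sentence_bleu(corpus[:i] + corpus[i + 1:], hyp, weights,
                      smoothing_function=smooth)
        for i, hyp in enumerate(corpus)
    ]
    return sum(scores) / len(scores)
```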
# 4 Conclusion and Future Work

In this study, we propose a sequence contrast loss for adversarial text generation, where the discriminator outputs discriminative sequences rather than binary classification probabilities. Extensive experimental results demonstrate that our model brings improvements in training stability and the quality of generated texts.

In future work, we will extend our method to handle specific targets, to benefit more conditional text generation tasks (e.g., sentimental text generation, dialogue response generation).

# Acknowledgments

This work was supported by the National Natural Science Foundation of China (61772036), the Beijing Academy of Artificial Intelligence (BAAI), and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We appreciate the anonymous reviewers' helpful comments. Xiaojun Wan is the corresponding author.
# References

Kaisar Ahmad and Sheikh Parvaiz Ahmad. 2019. A comparative study of maximum likelihood estimation and bayesian estimation for erlang distribution and its applications. In Statistical Methodologies. InTechOpen.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015.

Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In NeurIPS 2015, pages 1171-1179.

Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. CoRR, abs/1702.07983.

Liqun Chen, Shuyang Dai, Chenyang Tao, Haichao Zhang, Zhe Gan, Dinghan Shen, Yizhe Zhang, Guoyin Wang, Ruiyi Zhang, and Lawrence Carin. 2018. Adversarial text generation via feature-mover's distance. In NeurIPS 2018, pages 4671-4682.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. CoRR, abs/2002.05709.

Junyoung Chung, Caglar Gülcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555.

Hao Fang, Saurabh Gupta, Forrest N. Iandola, Rupesh Kumar Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2015. From captions to visual concepts and back. In CVPR 2015, pages 1473-1482.

Ziding Feng and Charles E. McCulloch. 1992. Statistical inference using maximum likelihood estimation and the generalized likelihood ratio when the true parameter is on the boundary of the parameter space. Statistics & Probability Letters, 13(4):325-332.

Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In ICML 2017, pages 1243-1252.

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NeurIPS 2014, pages 2672-2680.

Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In AAAI 2018, pages 5141-5148.

Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In CVPR 2006, pages 1735-1742.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2019. Momentum contrast for unsupervised visual representation learning. CoRR, abs/1911.05722.

Olivier J. Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron van den Oord. 2019. Data-efficient image recognition with contrastive predictive coding. CoRR, abs/1905.09272.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In ICLR 2017. OpenReview.net.

Pei Ke, Fei Huang, Minlie Huang, and Xiaoyan Zhu. 2019. ARAML: A stable adversarial training framework for text generation. In EMNLP-IJCNLP 2019, pages 4270-4280. Association for Computational Linguistics.

Matt J. Kusner and José Miguel Hernández-Lobato. 2016. GANS for sequences of discrete elements with the gumbel-softmax distribution. CoRR, abs/1611.04051.

Kevin Lin, Dianqi Li, Xiaodong He, Ming-Ting Sun, and Zhengyou Zhang. 2017. Adversarial ranking for language generation. In NeurIPS 2017, pages 3158-3168.

Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In ECCV 2014, pages 740-755.

Tomas Mikolov, Stefan Kombrink, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In ICASSP 2011, pages 5528-5531.

Weili Nie, Nina Narodytska, and Ankit Patel. 2019. RelGAN: Relational generative adversarial networks for text generation. In ICLR 2019. OpenReview.net.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL 2002, pages 311-318.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR 2016.

Kevin Reschke, Adam Vogel, and Dan Jurafsky. 2013. Generating recommendation dialogs by extracting information from user reviews. In ACL 2013, pages 499-504.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS 2017, pages 6000-6010.

Ke Wang, Hang Hua, and Xiaojun Wan. 2019. Controllable unsupervised text attribute transfer via editing entangled latent representation. In NeurIPS 2019, pages 11034-11044.

Ke Wang and Xiaojun Wan. 2018. SentiGAN: Generating sentimental texts via mixture adversarial networks. In IJCAI 2018, pages 4446-4452.

Ke Wang and Xiaojun Wan. 2019. Automatic generation of sentimental texts via mixture adversarial networks. Artif. Intell., 275:540-558.

Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI 2017, pages 2852-2858.

Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial feature matching for text generation. In ICML 2017, pages 4006-4015.

Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In SIGIR 2018, pages 1097-1100.
# A Appendices

# A.1 Implementation Details

The implementation details are the same as those described in Section 3.1.1.

# A.2 Generated Cases

In Table 3, we show example sentences generated by different models trained on the real-world dataset. From the examples, we see that: 1) although the sentences produced by the MLE method are longer, they can be unreadable and unreasonable; 2) the sentences generated by LeakGAN and TextGAN are more readable, but they are not diversified and are relatively short; 3) in particular, compared with all benchmark methods, the sentences produced by our model are more readable, diversified, and of better quality.

<table><tr><td>MLE</td><td>a store is blue sink in a water bottle. (Unreasonable)<br>serious air force jet mid flight during a cobblestone day, where a flooded street<br>a simple bathroom with some wood cupboards.<br>a girafee is standing in the spot for a village in parking spot with four hinged cakes trees<br>a jet jet flying away on the runway, in the sky.<br>a fat orange motorcycle is low building.<br>a bathroom with a sink, a sink, refrigerator and the walls. (Unreadable)<br>a living room with a blue roof and green traffic lights blue.<br>person sitting in a commercial plane at night.</td></tr><tr><td>LeakGAN</td><td>a view of a parking desk with two plungers<br>a desk with multiple large monitors. (Very short)<br>a woman wearing a glass is sitting on a cupboard.<br>a kitchen with a shelf area.<br>a man tinkers with his ear.<br>a white stove top open from a wood oven.<br>a group of men talking.<br>a kitchen with a shelf area. (Repeated)<br>two people sitting on.</td></tr><tr><td>TextGAN</td><td>a man riding a motorcycle. (Very short)<br>is to a bathroom with a sink. (Unreadable)<br>a man is on a motorcycle.<br>a white toilet a sink.<br>with a sink and a table.<br>a motorcycle in a blue sky.<br>a bathroom with a sink.<br>a man is sitting on a motorcycle. (Repeated)<br>a bathroom with a sink.</td></tr><tr><td>SLGAN</td><td>a group of people sat in front of the house together.<br>several people stood in front of the bicycle.<br>a person is holding a monitor range in the kitchen.<br>a woman is riding a motorcycle on the street.<br>three adults sat in his car with hats.<br>two people in a public parking lot.<br>white bathtub, toilet and basin under the bathroom wall.<br>an old brick building with a wooden manufacturer next to it.<br>a motor scooter parked in the street with a crowd waiting for a parade.</td></tr></table>

Table 3: Example sentences generated by different models.
adversarialtextgenerationviasequencecontrastdiscrimination/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33c9beb8b08e032158f52b17b1901e1e01298b8c83fa1405fad7a86ff8017e92
+size 368019
adversarialtextgenerationviasequencecontrastdiscrimination/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:924669e5c76b569ffdb48bcbee19c2b9a1b4b052fec0cc37e5424dffd3d5798b
+size 266030
adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:704d34673615bcc84069ceaa14c485a8b0ce3aff602479e99bfad6897189cdbc
+size 77005
adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d096ff19e30e59bd7cc59a5558aeacd1673ba2cc18f821634a6f6dfec9a6c807
+size 96067
adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/a031f169-a4eb-455f-b10e-e7c04edf6505_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43c802455a205219c728c126167fe547c9db950b170c43588d6dc3b10f6bc456
+size 501421
adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/full.md
ADDED
@@ -0,0 +1,284 @@
# Adversarial Training for Code Retrieval with Question-Description Relevance Regularization

Jie Zhao
The Ohio State University
zhao.1359@osu.edu

Huan Sun
The Ohio State University
sun.397@osu.edu

# Abstract

Code retrieval is a key task aiming to match natural and programming languages. In this work, we propose adversarial learning for code retrieval that is regularized by question-description relevance. First, we adapt a simple adversarial learning technique to generate difficult code snippets given an input question, which can help the learning of code retrieval in the face of bi-modal and data-scarce challenges. Second, we propose to leverage question-description relevance to regularize adversarial learning, such that a generated code snippet contributes more to the code retrieval training loss only if its paired natural language description is predicted to be less relevant to the given user question. Experiments on large-scale code retrieval datasets of two programming languages show that our adversarial learning method is able to improve the performance of state-of-the-art models. Moreover, using an additional duplicate question prediction model to regularize adversarial learning further improves performance, and this is more effective than using the duplicated questions in strong multi-task learning baselines.
# 1 Introduction

Recently there has been growing research interest in the intersection of natural language (NL) and programming language (PL), with exemplar tasks including code generation (Agashe et al., 2019; Bi et al., 2019), code summarization (LeClair and McMillan, 2019; Panthaplackel et al., 2020), and code retrieval (Gu et al., 2018). In this paper, we study code retrieval, which aims to retrieve code snippets for a given NL question such as "Flatten a shallow list in Python." Advanced code retrieval tools can save programmers tremendous time in various scenarios, such as fixing a bug, implementing a function, or choosing which API to use. Moreover, even if the retrieved code snippets do not perfectly match the NL question, editing them is often much easier than generating a code snippet from scratch. For example, the retrieve-and-edit paradigm (Hayati et al., 2018; Hashimoto et al., 2018; Guo et al., 2019) for code generation has attracted growing attention recently; it first employs a code retriever to find the most relevant code snippets for a given question, and then edits them via a code generation model. Previous work has shown that code retrieval performance can significantly affect the final generated results in such scenarios (Huang et al., 2018).

There have been two groups of work on code retrieval: (1) One group (e.g., the recent retrieve-and-edit work (Hashimoto et al., 2018; Guo et al., 2019)) assumes each code snippet is associated with NL descriptions and retrieves code snippets by measuring the relevance between such descriptions and a given question. (2) The other group (e.g., CODENN (Iyer et al., 2016) and Deep Code Search (Gu et al., 2018)) directly measures the relevance between a question and a code snippet. Compared with the former, the latter group has the advantage of still applying when NL descriptions are not available for candidate code snippets, as is often the case for many large-scale code repositories (Dinella et al., 2020; Chen and Monperrus, 2019). Our work connects with both groups: We aim to directly match a code snippet with a given question, but during training, we utilize question-description relevance to improve the learning process.

Despite the existing efforts, we observe two challenges in directly matching code snippets with NL questions, which motivate this work. First, code retrieval as a bi-modal task requires representation learning of two heterogeneous but complementary modalities, which is known to be difficult (Cvitkovic et al., 2019; LeC; Akbar and Kak, 2019) and may require more training data. This makes code retrieval more challenging than document retrieval, where the target documents often contain useful shallow NL features like keywords or key phrases. Second, code retrieval often encounters special one-to-many mapping scenarios, where one NL question can be solved by multiple code solutions that take very different approaches. Table 1 illustrates these challenges. For $i = 1, 2,$ or $3$, $q^{(i)}$ is an NL question/description associated with a Python answer $c^{(i)}$. Here, question $q^{(1)}$ should be matched with multiple code snippets, $c^{(1)}$ and $c^{(2)}$, because they both flatten a 2D list, despite using different programming approaches. In contrast, $c^{(3)}$ performs a totally different task, but shares many tokens with $c^{(1)}$. Hence, it can be difficult to train a code retrieval model that generalizes well enough to match $q^{(1)}$ with both $c^{(1)}$ and $c^{(2)}$, while simultaneously distinguishing $c^{(1)}$ from $c^{(3)}$.

To address the first challenge, we propose to introduce adversarial training to code retrieval, which has been successfully applied to transfer learning from one domain to another (Tzeng et al., 2017) and to learning with scarce supervised data (Kim et al., 2019). Our intuition is that by employing a generative adversarial model to produce challenging negative code snippets during training, the code retrieval model will be strengthened to distinguish between positive and negative $\langle q, c \rangle$ pairs. In particular, we adapt a generative adversarial sampling technique (Wang et al., 2017) whose effectiveness has been shown in a wide range of uni-modal text retrieval tasks.

For the second challenge, we propose to further employ question-description (QD) relevance as a complementary uni-modal view to reweight the adversarial training samples. In general, our intuition is that the code retrieval model should put more weight on the adversarial examples that are hard to distinguish by itself, but easy from the view of a QD relevance model. This design helps solve the one-to-many issue of the second challenge by differentiating true-negative and false-negative adversarial examples: If a QD relevance model also suggests that a code snippet is not relevant to the original question, it is more likely to be a true negative, and hence the code retrieval model should put more weight on it. Note that this QD relevance design aims to help train the code retrieval model better, and we do not need NL descriptions to be associated with code snippets at the testing phase.

$\pmb{q}^{(1)}$ Flatten a shallow list in Python
$\pmb{c}^{(1)}$ `from itertools import chain; rslt = chain(*list_2d)`

$\pmb{q}^{(2)}$ How to flatten a 2D list to 1D without using numpy?
$\pmb{c}^{(2)}$ `list_of_lists = [[1,2,3],[1,2],[1,4,5,6,7]]; [j for sub in list_of_lists for j in sub]`

$\pmb{q}^{(3)}$ How to get all possible combinations of a list's elements?
$\pmb{c}^{(3)}$ `from itertools import chain, combinations; subsets = chain(*map(lambda x: combinations(mylist, x), range(0, len(mylist) + 1)))`

Table 1: Motivating example. $\langle q^{(i)}, c^{(i)}\rangle$ denotes an associated (natural language question, code snippet) pair; $q^{(i)}$ can also be viewed as a description of $c^{(i)}$. Given $q^{(1)}$, the ideal code retrieval result is to return both $c^{(1)}$ and $c^{(2)}$, as their programming semantics are equivalent. Conversely, $c^{(3)}$ is semantically irrelevant to $q^{(1)}$ and should not be returned, although its surface form is similar to $c^{(1)}$. In such cases, it can be easier to decide their relationships from the question perspective, because $\langle q^{(1)}, q^{(2)}\rangle$ are more alike than $\langle q^{(1)}, q^{(3)}\rangle$.

We conduct extensive experiments using a large-scale <question, code snippet> dataset, StaQC (Yao et al., 2018), and our collected duplicate-question dataset from Stack Overflow. The results show that our proposed learning framework improves state-of-the-art code retrieval models and outperforms both adversarial learning without QD relevance regularization and strong multi-task learning baselines that also utilize question duplication data.
# 2 Overview

This work studies code retrieval, the task of matching questions with code, which we abbreviate as QC. The training set consists of NL question and code snippet pairs $\mathcal{D}^{\mathrm{QC}} = \{q^{(i)}, c^{(i)}\}$. Given an NL question $q^{(i)}$, the QC task is to find $c^{(i)}$ among all the code snippets in $\mathcal{D}^{\mathrm{QC}}$. For simplicity, we omit the data sample index and use $q$ and $c$ to denote a QC pair, and $c^{-}$ to represent any other code snippet in the dataset except $c$.

Our goal is to learn a QC model, denoted $f_{\theta}^{\mathrm{QC}}$, that retrieves the highest-scoring code snippets for an input question: $\arg\max_{c' \in \{c\} \cup \{c^{-}\}} f_{\theta}^{\mathrm{QC}}(q, c')$. Note that at testing time, the trained QC model $f^{\mathrm{QC}}$ can be used to retrieve code snippets from any code base, unlike the group of QC methods (Hayati et al., 2018; Hashimoto et al., 2018; Guo et al., 2019) relying on the availability of NL descriptions of code.
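In code, the test-time objective above is just a scoring loop; a minimal sketch, where the model interface `f_theta(q, c)` returning a scalar score is an assumption:

```python
def retrieve(f_theta, q, code_base, k=1):
    # Score every candidate snippet against the question and return the
    # top-k, i.e., argmax over c' of f_theta(q, c') from Section 2.
    ranked = sorted(code_base, key=lambda c: float(f_theta(q, c)), reverse=True)
    return ranked[:k]
```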
We aim to address the aforementioned challenges through two strategies: (1) We introduce adversarial learning (Goodfellow et al., 2014a) to alleviate the bi-modal learning challenge. Specifically, an adversarial QC generator selects unpaired code snippets that are difficult for the QC model to discriminate, strengthening its ability to distinguish top-ranked positive and negative samples (Wang et al., 2017). (2) We also propose to employ a question-description (QD) relevance model to provide a secondary view on the generated adversarial samples, inspired by the group of QC work that measures the relevance of code snippets through their associated NL descriptions.

Figure 1 gives an overview of our proposed learning framework, which does not assume specific model architectures and can be generalized to different base QC models or different QD relevance models. A general description is given in the caption. In summary, the adversarial QC generator selects $\hat{c}$ that is unpaired with a given $q$; $\hat{q}$ is an NL description of $\hat{c}$ (details on how to acquire $\hat{q}$ are introduced in Section 3.2). Next, a QD model predicts a relevance score for $\langle q, \hat{q} \rangle$. A pairwise ranking loss is calculated based on whether the QC model discriminates the ground-truth QC pair $\langle q, c \rangle$ from the unpaired $\langle q, \hat{c} \rangle$. Learning through this loss is reweighted by a down-scale factor dynamically determined by the QD relevance prediction score, which works as a regularization term over potential false-negative adversarial samples.
# 3 Proposed Methodology

We now introduce our proposed learning framework in detail. We start with the adversarial learning method in Section 3.1, then discuss the rationale for incorporating question-description (QD) relevance feedback in Section 3.2, before putting them together in Sections 3.3 and 3.4.

# 3.1 Adversarial Learning via Sampling

Figure 1: Regularized adversarial learning framework. Best viewed in color. The adversarial QC generator (middle) produces an adversarial code snippet given an NL question. The QD relevance model (right) then predicts a relevance score between the given question and the NL description of the generated adversarial code. A pairwise ranking loss is computed between the ground-truth code and the adversarial code. The QC model (left) is trained with this ranking loss after it is scaled by a QD relevance regularization weight that depends on the QD relevance score: the parameter update is larger when the relevance score is smaller, and vice versa.

We propose to apply adversarial learning (Goodfellow et al., 2014a) to code retrieval. Our goal is to train a better QC model $f_{\theta}^{\mathrm{QC}}$ by letting it play an adversarial game with a QC generator model $g_{\phi}^{\mathrm{QC}}$, where $\theta$ represents the parameters of the QC model and $\phi$ the parameters of the adversarial QC generator. As in standard adversarial learning, $f_{\theta}^{\mathrm{QC}}$ plays the discriminator role, distinguishing the ground-truth code snippet $c$ from generated snippets $\hat{c}$. The training objective of the QC model is to minimize $\mathcal{L}_{\theta}$ below:
$$
\begin{array}{l}
\mathcal{L}_{\theta} = \sum_{i} \mathbb{E}_{\hat{c} \sim P_{\phi}(c \mid q^{(i)})}\, l_{\theta}\big(q^{(i)}, c^{(i)}, \hat{c}\big), \\
l_{\theta} = \max\big(0,\ d + f_{\theta}^{\mathrm{QC}}(q^{(i)}, \hat{c}) - f_{\theta}^{\mathrm{QC}}(q^{(i)}, c^{(i)})\big),
\end{array}
$$

where $l_{\theta}$ is a pairwise ranking loss; specifically, we use a hinge loss with margin $d$. $\hat{c}$ is generated by $g_{\phi}^{\mathrm{QC}}$ and follows a probability distribution $P_{\phi}(c \mid q^{(i)})$. $g_{\phi}^{\mathrm{QC}}$ aims to assign higher probabilities to code snippets that would mislead $f_{\theta}^{\mathrm{QC}}$.
| 78 |
+
|
| 79 |
+
There are many ways to realize the QC generator. For example, one may employ a sequence model to generate the adversarial code snippet $\hat{c}$ token by token (Bi et al., 2019; Agashe et al., 2019). However, training a sequence generation model is difficult, because the search space of all code token combinations is huge. Henceforce, we turn to a simpler idea inspired by Wang et al. (2017), and restrict the generation of $\hat{c}$ to the space of all the existing code snippets in the training dataset $\mathcal{D}^{\mathrm{QC}}$ . The QC generator then only needs to sample an existing code snippet $c^{(j)}$ from an adversarial probability distribution conditioned on a given query and let it be $\hat{c}$ , i.e., $\hat{c} = c^{(j)} \sim P_{\phi}(c|q^{(i)})$ . Adopting this method will make training the QC generator easier, and ensures that the generated code snippets are legitimate as they directly come from the training dataset. We
$$
P_{\phi}(c \mid q^{(i)}) = \frac{\exp\left(g_{\phi}^{\mathrm{QC}}(q^{(i)}, c) / \tau\right)}{\sum_{c'} \exp\left(g_{\phi}^{\mathrm{QC}}(q^{(i)}, c') / \tau\right)},
$$
where $g_{\phi}^{\mathrm{QC}}$ represents an adversarial QC matching function, and $\tau$ is a temperature hyper-parameter used to tune how much the distribution concentrates on top-scored code snippets. Moreover, scoring all code snippets can be computationally inefficient in practice. Therefore, we use the method of Yang et al. (2019): first uniformly sample a subset of the data, whose size is much smaller than the entire training set, and then perform adversarial sampling on this subset.
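A minimal sketch of this two-stage sampling, assuming the generator scores for a uniformly drawn candidate subset have already been computed (the function name and subset handling are our own; $\tau = 0.2$ follows Section 4.3):

```python
import torch

def sample_adversarial_index(subset_scores, tau=0.2):
    """subset_scores: generator scores g_phi(q_i, c') over a candidate subset
    drawn uniformly from the training set. Returns the sampled index and its
    log-probability, which the policy-gradient update below needs."""
    probs = torch.softmax(subset_scores / tau, dim=0)   # temperature softmax
    j = torch.multinomial(probs, num_samples=1).item()  # sample one negative
    return j, torch.log(probs[j])
```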
The generator function $g_{\phi}^{\mathrm{QC}}$ can be pre-trained in the same way as the discriminator (i.e., $f_{\theta}^{\mathrm{QC}}$) and then updated using standard policy gradient reinforcement learning algorithms, such as REINFORCE (Williams, 1992), to maximize the ranking losses of the QC model. Formally, the QC generator aims to maximize the following expected reward: $J(\phi) = \sum_{i}\mathbb{E}_{c^{(j)}\sim P_{\phi}(c|q^{(i)})}[l_{\theta}(q^{(i)},c^{(i)},c^{(j)})]$, where $l_{\theta}(q^{(i)},c^{(i)},c^{(j)})$ is the pairwise ranking loss of the discriminator model defined earlier. The gradient of $J$ can be derived as $\nabla_{\phi}J = \sum_{i}\mathbb{E}_{c^{(j)}\sim P_{\phi}(c|q^{(i)})}[l_{\theta}\cdot \nabla_{\phi}\log P_{\phi}(c^{(j)}|q^{(i)})]$. Another option is to let $g_{\phi}^{\mathrm{QC}}$ use the same architecture as $f_{\theta}^{\mathrm{QC}}$ with tied parameters (i.e., $\phi = \theta$), as adopted in previous work (Deshpande and M. Khapra, 2019; Park and Chang, 2019).
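The REINFORCE update translates almost directly into code. The sketch below is ours, assuming `log_prob` is the log-probability returned by the sampling step and `ranking_loss` is the discriminator's hinge loss; gradient ascent on $J(\phi)$ is implemented as descent on its negation:

```python
def generator_reinforce_step(log_prob, ranking_loss, optimizer):
    """One policy-gradient step for g_phi: the reward for the sampled c_j is
    the discriminator's ranking loss, so grad_phi J = l_theta * grad log P_phi."""
    optimizer.zero_grad()
    surrogate = -(ranking_loss.detach() * log_prob)  # maximize E[l_theta]
    surrogate.backward()
    optimizer.step()
```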
The focus of this work is to show the effectiveness of applying adversarial learning to code retrieval, and how to regularize it with QD relevance. We leave more complex adversarial techniques (e.g. adversarial perturbation (Goodfellow et al., 2014b; Miyato et al., 2015) or adversarial sequence generation (Li et al., 2018)) for future studies.
# 3.2 Question-Description Relevance Regularization
Intuitively, we can train a better code retrieval model if the negative code snippets are all true negatives, i.e., if they are confusingly similar to correct code answers but perform different functionalities. However, because of the one-to-many mapping issue, some negative code snippets sampled by the adversarial QC generator can be false negatives, i.e., they are equally good answers for a given question even though they are not paired with the question in the training set. Unfortunately, during training this problem can become increasingly pronounced, as the adversarial generator improves along with the code retrieval model, eventually making learning less and less effective. Since both the QC model and the adversarial QC generator operate from the QC perspective, it is difficult to further discriminate true-negative from false-negative code snippets.
Therefore, we propose to alleviate this problem with QD relevance regularization. This idea is inspired by the group of QC work mentioned in Section 1 that retrieves code snippets by matching their NL descriptions with a given question. Unlike that work, however, we leverage QD relevance only during training, to provide a secondary view and to reweight the adversarial samples. Conveniently, an adversarial code snippet $\hat{c}$ sampled from the original training dataset $\mathcal{D}^{\mathrm{QC}}$ is paired with an NL question $\hat{q}$, which can be regarded as its NL description and used to calculate the relevance to the given question $q$.
Let us refer to the example in Table 1 again. At a certain point during training, with $q^{(1)}$ "Flatten a shallow list in Python" being the given question, the adversarial QC generator may choose $c^{(2)}$ and $c^{(3)}$ as the negative samples. But instead of treating them equivalently, we can infer from the QD matching perspective that $c^{(3)}$ is likely to be a true negative, because $q^{(3)}$ "How to get all possible combinations of a list's elements" clearly has a different meaning from $q^{(1)}$, while $c^{(2)}$ is likely to be a false negative, since $q^{(2)}$ "How to flatten a 2D list to 1D without using numpy?" is similar to $q^{(1)}$. Hence, during training, the discriminative QC model should put more weight on negative samples like $c^{(3)}$ than on ones like $c^{(2)}$.
We now explain how to map QD relevance scores to regularization weights. Let $f^{\mathrm{QD}}(q,\hat{q})$ denote the predicted relevance score between the given question $q$ and the question $\hat{q}$ paired with an adversarial code snippet, normalized to the range from 0 to 1. We can see from the above example that QD relevance and the adjusted learning weight should be inversely related, so we map the normalized relevance score to a weight using a monotonically decreasing polynomial function: $w^{\mathrm{QD}}(x) = (1 - x^{a})^{b}$, $0\leq x\leq 1$. Both $a$ and $b$ are positive integer hyper-parameters that control the shape of the curve and can be tuned on the dev sets; in this work, both are set to one by default for simplicity. $w^{\mathrm{QD}}\in [0,1]$ allows the optimization objective to down-weight adversarial samples that are more likely to be false negatives.
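In code, this mapping is a one-liner; the sketch below (function name ours) just makes the default behavior explicit:

```python
def qd_regularization_weight(relevance, a=1, b=1):
    """Map a normalized QD relevance score in [0, 1] to a down-scale weight
    in [0, 1] via the monotonically decreasing (1 - x^a)^b. With the paper's
    default a = b = 1, this reduces to 1 - relevance."""
    assert 0.0 <= relevance <= 1.0
    return (1.0 - relevance ** a) ** b
```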
Algorithm 1: Question-Description Relevance Regularized Adversarial Learning.
Input: QC training data $\mathcal{D}^{\mathrm{QC}} = \{q^{(i)},c^{(i)}\}$; QD model $f^{\mathrm{QD}}$.
Constants: positive integers $N, a, b$; temperature $\tau$.
Result: QC model $f_{\theta}^{\mathrm{QC}}$.
1 Pretrain $f_{\theta}^{\mathrm{QC}}$ on $\mathcal{D}^{\mathrm{QC}}$ using pairwise ranking loss $l_{\theta}^{\mathrm{QC}}$ with randomly sampled negative code snippets;
2 Initialize QC generator $g_{\phi}^{\mathrm{QC}}$ with $f_{\theta}^{\mathrm{QC}}$: $\phi \leftarrow \theta$.
3 while not converged and maximum number of iterations not reached do
4 for randomly sampled $\langle q^{(i)},c^{(i)}\rangle \in \mathcal{D}^{\mathrm{QC}}$ do
5 Randomly choose $D = \{q,c\} \subset \mathcal{D}^{\mathrm{QC}}$ where $|D| = N$.
6 Sample $c^{(j)}\in D$ such that $c^{(j)}\sim P_{\phi}(c^{(j)}|q^{(i)}) = \mathrm{softmax}_{\tau}(g_{\phi}^{\mathrm{QC}}(q^{(i)},c^{(j)}))$.
7 $l_{\theta}^{\mathrm{QC}}\gets l_{\theta}(q^{(i)},c^{(i)},c^{(j)})$.
8 Find the question $q^{(j)}$ associated with $c^{(j)}$.
9 $w^{\mathrm{QD}}\gets (1 - f^{\mathrm{QD}}(q^{(i)},q^{(j)})^{a})^{b}$.
10 Update the QC model with gradient descent to reduce the loss $w^{\mathrm{QD}}\cdot l_{\theta}^{\mathrm{QC}}$.
11 Update the adversarial QC generator with gradient ascent on $l_{\theta}^{\mathrm{QC}}\cdot \nabla_{\phi}\log P_{\phi}(c^{(j)}|q^{(i)})$.
end
12 Optional QD model update (see Section 3.4).
end
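For readers who prefer code, here is a condensed Python sketch of lines 5-11 of Algorithm 1. All model handles (`f_qc`, `g_qc`, `f_qd`), the optimizers, and the subset size are placeholders, not the authors' implementation; scores are assumed to be scalar tensors from disjoint models, and $f^{\mathrm{QD}}$ is assumed to be normalized to $[0,1]$:

```python
import random
import torch

def qc_training_step(q_i, c_i, dataset, f_qc, g_qc, f_qd,
                     opt_theta, opt_phi, N=128, tau=0.2, a=1, b=1, d=0.05):
    # Line 5: uniformly sample a candidate subset D of size N.
    subset = random.sample(dataset, N)                  # list of (q, c) pairs
    scores = torch.stack([g_qc(q_i, c) for _, c in subset])
    # Line 6: sample an adversarial snippet from softmax_tau(g_phi(q_i, .)).
    probs = torch.softmax(scores / tau, dim=0)
    j = torch.multinomial(probs, 1).item()
    q_j, c_j = subset[j]
    # Line 7: discriminator's hinge ranking loss.
    l_qc = torch.clamp(d + f_qc(q_i, c_j) - f_qc(q_i, c_i), min=0.0)
    # Lines 8-9: QD relevance of q_i and c_j's paired question -> weight.
    w_qd = (1.0 - f_qd(q_i, q_j).detach() ** a) ** b
    # Line 10: update the QC model on the re-weighted loss.
    opt_theta.zero_grad()
    (w_qd * l_qc).backward()
    opt_theta.step()
    # Line 11: REINFORCE update for the generator (gradient ascent).
    opt_phi.zero_grad()
    (-l_qc.detach() * torch.log(probs[j])).backward()
    opt_phi.step()
```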
# 3.3 Question-Description Relevance Regularized Adversarial Learning
Now we describe in Algorithm 1 the proposed learning framework that combines adversarial learning and QD relevance regularization. Let us first assume the QD model is given; we will explain shortly how to pre-train it and, optionally, update it.
The QC model is first pre-trained on $\mathcal{D}^{\mathrm{QC}}$ using the standard pairwise ranking loss $l_{\theta}(q^{(i)},c^{(i)},c^{(j)})$ with randomly sampled $c^{(j)}$. Lines 3-11 show the QC model training steps. For each QC pair $\langle q^{(i)},c^{(i)}\rangle$, a batch of negative QC pairs is sampled randomly from the training set $\mathcal{D}^{\mathrm{QC}}$. The QC generator then chooses an adversarial $c^{(j)}$ from the distribution $P_{\phi}(c|q^{(i)})$ defined in Section 3.1; its paired question is $q^{(j)}$. The two questions $q^{(i)}$ and $q^{(j)}$ are then passed to the QD model, and the QD relevance prediction is mapped to a regularization weight $w^{\mathrm{QD}}$. Finally, the regularization weight is used to control the update of the QC model on the ranking loss with the adversarial $\hat{c}$.
# 3.4 Base Model Architecture
Our framework can be instantiated with various model architectures for QC or QD. Here we choose the same neural network architecture as Gu et al. (2018) and Yao et al. (2019) for our base QC model, which achieves competitive or state-of-the-art code retrieval performance. Concretely, both a natural language question $q$ and a code snippet $c$ are sequences of tokens. They are encoded by separate bi-LSTM networks (Schuster and Paliwal, 1997), passed through a max-pooling layer to extract the most salient features of the entire sequence, and then through a hyperbolic tangent activation function. The encoded question and code representations are denoted $h^q$ and $h^c$. Finally, a matching component scores the vector representations of $q$ and $c$ and outputs their matching score for ranking. We follow previous work and use cosine similarity: $f^{\mathrm{QC}}(q,c) = \mathrm{cosine}(h^{q},h^{c})$.
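A minimal PyTorch sketch of this base encoder and matcher (class and function names are ours; the embedding and hidden sizes follow Section 4.3):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqEncoder(nn.Module):
    """Bi-LSTM over token embeddings, max-pooled over time, then tanh."""
    def __init__(self, vocab_size, emb_dim=200, hidden_dim=400):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            bidirectional=True, batch_first=True)

    def forward(self, tokens):                     # tokens: (batch, seq_len)
        out, _ = self.lstm(self.emb(tokens))       # (batch, seq_len, 2*hidden)
        return torch.tanh(out.max(dim=1).values)   # max-pool over time

def qc_score(enc_q, enc_c, q_tokens, c_tokens):
    """f_QC(q, c) = cosine(h_q, h_c), with separate question/code encoders."""
    return F.cosine_similarity(enc_q(q_tokens), enc_c(c_tokens), dim=-1)
```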
QD Model. There are various model architecture choices, but here, for simplicity, we adapt the QC model for QD relevance prediction. We let the QD model use the same neural architecture as the QC model, but with Siamese question encoders. The QD relevance score is the cosine similarity between $h^{q^{(i)}}$ and $h^{q^{(j)}}$, the bi-LSTM encoding outputs for questions $q^{(i)}$ and $q^{(j)}$ respectively: $f^{\mathrm{QD}}(q^{(i)},q^{(j)}) = \mathrm{cosine}(h^{q^{(i)}},h^{q^{(j)}})$. This design allows a pre-trained QC model to initialize the QD model parameters, which is easy to implement, and the pre-trained question encoder in the QC model can help QD performance. Since programming-domain question paraphrases are rare, we collect a small QD training set of programming-related natural language question pairs $\mathcal{D}^{\mathrm{QD}} = \{q^{(j)},p^{(j)}\}$ based on duplicated questions on Stack Overflow.
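Under the same assumptions as the sketch above, the Siamese QD scorer simply reuses one question encoder for both inputs:

```python
import torch.nn.functional as F

def qd_score(enc_q, qi_tokens, qj_tokens):
    """f_QD(q_i, q_j) = cosine(h_qi, h_qj) with a shared (Siamese) question
    encoder, e.g., one initialized from the pre-trained QC model."""
    return F.cosine_similarity(enc_q(qi_tokens), enc_q(qj_tokens), dim=-1)
```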
The learning framework can be symmetrically applied, as indicated by Line 12 in Algorithm 1, so that the QD model can also be improved. This may provide better QD relevance feedback to help train a better QC model. In short, we can use a discriminative and a generative QD model. The generative QD model selects adversarial questions to help train the discriminative QD model, and this training can be regularized by the relevance predictions from a QC model. More details will be introduced in the experiments.
# 4 Experiments
In this section, we first introduce our experimental setup, and then show that our method outperforms not only the baseline methods but also multi-task learning approaches in which question-description relevance prediction is the other task.
<table><tr><td rowspan="2"></td><td colspan="3">Python</td><td colspan="3">SQL</td></tr><tr><td>Train</td><td>Dev</td><td>Test</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>QC</td><td>68,235</td><td>8,529</td><td>8,530</td><td>60,509</td><td>7,564</td><td>7,564</td></tr><tr><td>QD</td><td>1,085</td><td>1,085</td><td>1,447</td><td>18,020</td><td>2,252</td><td>2,253</td></tr></table>
Table 2: Dataset statistics. QD is used to represent the duplicate question dataset.
In particular, the QD relevance regularization consistently improves QC performance over adversarial learning alone, and the effectiveness of relevance regularization is also verified when it is symmetrically applied to improve the QD task.
# 4.1 Datasets
We use StaQC (Yao et al., 2018), which contains automatically extracted Python and SQL questions and their associated code answers from Stack Overflow, to train and evaluate our code retrieval model. We use the version of StaQC in which each question is associated with a single answer, as questions associated with multiple answers are predicted by an automatic answer detection model and are therefore noisier. We randomly split each QC dataset by a 70/15/15 ratio into training, dev, and test sets. The dataset statistics are summarized in Table 2.
We use the Stack Exchange Data Explorer<sup>3</sup> to collect data for training and evaluating QD relevance prediction. Specifically, we collect question pairs from posts that users have manually labeled as duplicates, which are linked via LinkTypeId=3. The QD datasets turn out to be substantially smaller than the QC datasets, especially for Python, as shown in Table 2. This makes it all the more interesting to check whether a small amount of QD relevance guidance can help improve code retrieval performance.
# 4.2 Baselines and Evaluation Metrics
We select state-of-the-art methods from both groups of QC work (mentioned in the Introduction). DecAtt and DCS below directly match questions with code, while EditDist and vMF-VAE turn code retrieval into a question-matching problem.
- DecAtt (Parikh et al., 2016). A widely used neural network model with an attention mechanism for pairwise sentence modeling.
- DCS (Gu et al., 2018). We use this as our base model, because it is a simple yet effective code retrieval model that achieves competitive performance without introducing additional training overhead (Yao et al., 2019). Its architecture is described in Section 3.4.
- EditDist (Hayati et al., 2018). Code snippets are retrieved by measuring an edit-distance-based similarity between their associated NL descriptions and the input questions. Since there is only one question for each sample in the QC datasets, we apply a standard code summarization tool (Iyer et al., 2016) to generate code descriptions to match against input questions.
- vMF-VAE (Guo et al., 2019). This is similar to EditDist, but a vMF Variational Autoencoder (Xu and Durrett, 2018) is separately trained to embed questions and code descriptions into latent vector distributions, whose distance is then measured by KL-divergence. This method is also used by Hashimoto et al. (2018).
We further consider multi-task learning (MTL) as an alternative way for QD to help QC. It is worth mentioning that our method does not require associated training data or the sharing of trained parameters between the QD and QC tasks, whereas MTL typically does. For a fair comparison, we adapt two MTL methods to our scenario that use the same base model, or its question and code encoders:
- MTL-DCS. A straightforward MTL adaptation of DCS, in which the code encoder is updated on the QC dataset and the question encoder is updated on both the QC and QD datasets. The model is trained on the two datasets alternately.
- MTL-MLP (Gonzalez et al., 2018). This recent MTL method was originally designed to rank relevant questions and question-related comments. It uses a multi-layer perceptron (MLP) network with one shared hidden layer, plus a task-specific hidden layer and a task-specific classification layer for each output; we adapt it for our task. The input to the MLP is the concatenation of similarity features $\left[\max(h^{q}, h^{c}), h^{q} - h^{c}, h^{q} \odot h^{c}\right]$, where $\odot$ is the element-wise product, and $h^{q}$ and $h^{c}$ are learned using the same encoders as our base model.
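For clarity, the similarity-feature construction for the adapted MTL-MLP baseline looks like this in PyTorch (our sketch; only the feature concatenation is shown, not the shared/task-specific layers):

```python
import torch

def mtl_mlp_features(h_q, h_c):
    """[max(h_q, h_c), h_q - h_c, h_q * h_c], concatenated along the
    feature dimension, as input to the shared MLP layer."""
    return torch.cat([torch.max(h_q, h_c), h_q - h_c, h_q * h_c], dim=-1)
```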
The ranking metrics used for evaluation are Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (nDCG) (Järvelin and Kekäläinen, 2002). We adopt the same evaluation method as previous work (Iyer et al., 2016; Yao et al., 2019) for both QC and QD: for each question, we randomly choose from the test set a fixed-size (49) pool of negative candidates and evaluate the ranking of its paired code snippet or question among these negative candidates.
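A sketch of this fixed-pool protocol for one query (our helper; `score(q, c)` stands in for the trained matching function and is assumed to return a float; with a single relevant item per query, average precision reduces to the reciprocal rank):

```python
import random

def average_precision_with_pool(score, q, pos_c, all_codes,
                                pool_size=49, seed=0):
    """Rank the paired snippet against `pool_size` random negatives
    drawn from the test set."""
    rng = random.Random(seed)
    negatives = rng.sample([c for c in all_codes if c is not pos_c], pool_size)
    pos = score(q, pos_c)
    rank = 1 + sum(score(q, c) > pos for c in negatives)
    return 1.0 / rank   # AP with one relevant item = 1 / rank
```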
<table><tr><td></td><td colspan="2">Python</td><td colspan="2">SQL</td></tr><tr><td></td><td>MAP</td><td>nDCG</td><td>MAP</td><td>nDCG</td></tr><tr><td>EditDist (Hayati et al., 2018)</td><td>0.2348</td><td>0.3844</td><td>0.2096</td><td>0.3641</td></tr><tr><td>vMF-VAE (Guo et al., 2019)</td><td>0.2886</td><td>0.4511</td><td>0.2921</td><td>0.4537</td></tr><tr><td>DecAtt (Parikh et al., 2016)</td><td>0.5744</td><td>0.6716</td><td>0.5142</td><td>0.6231</td></tr><tr><td>DCS (Gu et al., 2018)</td><td>0.6015</td><td>0.6929</td><td>0.5155</td><td>0.6237</td></tr><tr><td>MTL-MLP (Gonzalez et al., 2018)</td><td>0.5737</td><td>0.6712</td><td>0.5079</td><td>0.6179</td></tr><tr><td>MTL-DCS</td><td>0.6024</td><td>0.6935</td><td>0.5160</td><td>0.6237</td></tr><tr><td>Our</td><td>0.6372*</td><td>0.7206*</td><td>0.5404*</td><td>0.6429*</td></tr><tr><td>Our - RR</td><td>0.6249*</td><td>0.7111*</td><td>0.5274*</td><td>0.6327*</td></tr></table>
Table 3: Code retrieval (QC) performance on test sets. * denotes significantly different from DCS (Gu et al., 2018) in a one-tailed t-test $(p < 0.01)$.

Figure 2: QC learning curves on the Python dev set.
# 4.3 Implementation Details
Our implementation is based on Yao et al. (2019), and we follow that work to set the base model hyper-parameters. The vocabulary embedding size for both natural language and programming language is set at 200, the LSTM hidden size is 400, and the margin in the hinge loss is 0.05. The trained DCS model is used as pre-training for our models. The learning rate is set at 1e-4 and the dropout rate at 0.25. For adversarial training, we set $\tau$ to 0.2 following Wang et al. (2017) and limit the maximum number of epochs to 300. Standard L2-regularization is used on all the models. We also empirically tried tying the parameters of the discriminator and the generator following previous work (Deshpande and M. Khapra, 2019; Park and Chang, 2019), which showed similar improvements over the baselines. The implementation from Xu and Durrett (2018) is used for the vMF-VAE baseline.
We follow the code preprocessing steps of Yao et al. (2018) for Python and Iyer et al. (2016) for SQL. We use the NLTK toolkit (Bird and Loper, 2004) to tokenize the collected duplicate questions, which share the same NL vocabulary as the QC dataset $\mathcal{D}^{\mathrm{QC}}$.
# 4.4 Results and Analyses
Our experiments aim to answer the following research questions:
(1) Can the question-regularized adversarial learning framework improve code retrieval (QC) performance? We first compare the code retrieval performance of different methods. Table 3 summarizes the test results, which are consistent on both the Python and SQL datasets. Baselines that retrieve code by measuring QD relevance, e.g., EditDist and vMF-VAE, are popular in code generation related work, but do not perform well compared to other code retrieval baselines in our experiments, partly because they are not optimized toward the QC task. This suggests that applying more advanced code retrieval methods to retrieve-and-edit code generation can be an interesting future research topic. DCS is a strong baseline, as it outperforms DecAtt, which uses a more complex attention mechanism. This indicates that it is not easy to automatically learn pairwise token associations between natural language and programming languages from software community data, as also suggested by previous work (Panthaplackel et al., 2019; Vinayakarao et al., 2017).
Our proposed learning algorithm improves QC performance compared to all the baselines. The "-RR" variant applies only adversarial sampling, without QD relevance regularization. It already leads to improvements over the base model (i.e., DCS), but does not perform as well as our full model. This demonstrates the usefulness of the QD relevance regularization and indicates that selectively weighting the contribution of adversarial samples to the training loss can help the model generalize better to test data. Figure 2 compares QC learning curves on the dev set; the full model's curve is the smoothest, qualitatively suggesting that the adversarial learning has been well regularized.
(2) How does the proposed algorithm compare with multi-task learning methods? The results are reported in Table 4. The MTL-MLP model was originally proposed to improve question-question relevance prediction by using question-comment relevance prediction as a secondary task (Gonzalez et al., 2018). It does not perform as well as MTL-DCS, which essentially uses hard parameter sharing between the two tasks and does not require additional similarity feature definitions. In general, the effectiveness of these MTL baselines on the QC task is limited because only a small number of QD pairs are available for training. Both our method and its ablated variant outperform the MTL baselines.
<table><tr><td></td><td colspan="2">Python</td><td colspan="2">SQL</td></tr><tr><td></td><td>MAP</td><td>nDCG</td><td>MAP</td><td>nDCG</td></tr><tr><td>MTL-MLP (Gonzalez et al., 2018)</td><td>0.5737</td><td>0.6712</td><td>0.5079</td><td>0.6179</td></tr><tr><td>MTL-DCS</td><td>0.6024</td><td>0.6935</td><td>0.5160</td><td>0.6237</td></tr><tr><td>Our</td><td>0.6372</td><td>0.7206</td><td>0.5404</td><td>0.6429</td></tr></table>

Table 4: QC performance compared with the MTL methods.
This shows that it may be more effective to use a data-scarce task to regularize the adversarial learning of a relatively data-rich task than to use those scarce data in MTL.
(3) Can the QD performance be improved by the proposed method? Although QD is not the focus of this work, we can use it to verify the generalizability of our method by symmetrically applying the framework to update the QD model, as mentioned in Section 3.4. Concretely, a generative adversarial QD model selects difficult questions from a distribution over question pair scores: $\hat{q}\sim \mathrm{softmax}_{\tau}(f^{\mathrm{QD}}(\hat{q},q^{(i)}))$. A QC model is then used to calculate a relevance score for a question-code pair, which regularizes the adversarial learning of the QD model.
Table 5 shows the results. Our method and its ablated variants outperform the QD baselines EditDist and vMF-VAE, again suggesting that supervised learning is more effective. The full model achieves the best overall performance, and removing relevance regularization (-RR) from the QC model consistently leads to a performance drop. Further removing adversarial sampling (-AS) slightly hurts performance on the SQL dataset, but not on Python. This is probably because the Python QD dataset is very small, so adversarial learning can easily overfit, which again suggests the importance of our proposed relevance regularization. Finally, removing QC pretraining (-Pretrain) greatly hurts performance, which is understandable since the QC datasets are much larger.
Because the QD model performance can be improved in this way, we allow it to be updated in our QC experiments (corresponding to line 12 in Algorithm 1); those results were discussed with Table 3. We report here the QC performance using a fixed QD model (i.e., Our - RR - AS) for relevance regularization: $\mathrm{MAP} = 0.6371$, $\mathrm{nDCG} = 0.7205$ for Python and $\mathrm{MAP} = 0.5366$, $\mathrm{nDCG} = 0.6398$ for SQL. Comparing these results with those in Table 3 (Our), one can see that allowing the QD model to update consistently improves QC performance, which suggests that a better QD model provides more accurate relevance regularization to the QC model and leads to better results.
<table><tr><td></td><td colspan="2">Python</td><td colspan="2">SQL</td></tr><tr><td></td><td>MAP</td><td>nDCG</td><td>MAP</td><td>nDCG</td></tr><tr><td>EditDist (Hayati et al., 2018)</td><td>0.3617</td><td>0.4883</td><td>0.3246</td><td>0.4580</td></tr><tr><td>vMF-VAE (Guo et al., 2019)</td><td>0.3009</td><td>0.4616</td><td>0.3029</td><td>0.4641</td></tr><tr><td>Our</td><td>0.7162</td><td>0.7821</td><td>0.6947</td><td>0.7651</td></tr><tr><td>Our - RR</td><td>0.7046</td><td>0.7734</td><td>0.6846</td><td>0.7575</td></tr><tr><td>Our - RR - AS</td><td>0.7116</td><td>0.7787</td><td>0.6764</td><td>0.7512</td></tr><tr><td>Our - RR - AS - Pretrain</td><td>0.3882</td><td>0.5170</td><td>0.6284</td><td>0.7129</td></tr></table>
Table 5: Question relevance prediction results, evaluated on the question duplication dataset we collected.
# 5 Related Work
Code Retrieval. Code retrieval has developed from classic information retrieval techniques (Hill et al., 2014; Haiduc et al., 2013; Lu et al., 2015) to recent deep neural methods, which can be categorized into two groups. The first group directly models the similarity across the natural language and programming language modalities. Besides CODENN (Iyer et al., 2016) and DCS (Gu et al., 2018) discussed earlier, Yao et al. (2019) leverage an extra code summarization task and ensemble a separately trained code summary retrieval model with a QC model to achieve better overall code retrieval performance. Ye et al. (2020) further train a code generation model and a code summarization model through dual learning, which helps learn better NL question and code representations. Both works employ additional sequence generation models that greatly increase training complexity, and both treat all unpaired code equally as negatives. Our work differs in that we introduce adversarial learning for code retrieval, and the existing work does not leverage question relevance for code retrieval as we do.
The second group of works transforms code retrieval into a code-description retrieval problem. This methodology has been widely adopted as a component in the retrieve-and-edit code generation literature. For example, heuristic methods such as measuring edit distance (Hayati et al., 2018) or comparing code type and length (Huang et al., 2018) are used, and separate question latent representations (Hayati et al., 2018; Guo et al., 2019) are learned. Our work shares with them the idea of exploiting QD relevance, but we use QD relevance in a novel way, to regularize the adversarial learning of QC models. It will be interesting future work to leverage the proposed code retrieval method for retrieve-and-edit code generation.
Adversarial Learning. Adversarial learning has been widely used in areas such as computer vision
(Mirza and Osindero, 2014; Chen et al., 2016; Radford et al., 2015; Arjovsky et al., 2017), text generation (Yu et al., 2017; Chen et al., 2019; Liang, 2019; Gu et al., 2018; Liu et al., 2017; Ma et al., 2019), relation extraction (Wu et al., 2017; Qin et al., 2018), and question answering (Oh et al., 2019; Yang et al., 2019). We proposed to apply adversarial learning to code retrieval because it has effectively improved cross-domain task performance and helped generate useful training data. We adapted the method from Wang et al. (2017) for the bi-modal QC scenario. As future work, adversarial learning for QC can be generalized to other settings with different base neural models (Yang et al., 2019) or with more complex adversarial learning methods, such as adding perturbed noise (Park and Chang, 2019) or generating adversarial sequences (Yu et al., 2017; Li et al., 2018). Our method differs from most adversarial learning work in that the discriminator (QC model) does not see all generated samples as equally negative.
# 6 Conclusion
This work studies the code retrieval problem and tackles the challenges of matching natural language questions with programming language (code) snippets. We propose a novel learning algorithm that introduces adversarial learning to code retrieval, further regularized from the perspective of a question-description relevance prediction model. Empirical results show that the proposed method significantly improves code retrieval performance on large-scale datasets for both the Python and SQL programming languages.
# Acknowledgments
We would like to thank the anonymous reviewers for their helpful comments. This research was sponsored in part by the Army Research Office under cooperative agreements W911NF-17-1-0412, NSF Grant IIS1815674, and NSF CAREER #1942980. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice herein.
# References
Rajas Agashe, Srini Iyer, and Luke Zettlemoyer. 2019. JuICe: A large scale distantly supervised dataset for open domain context-based code generation. ArXiv, abs/1910.02216.

Shayan A. Akbar and Avinash C. Kak. 2019. SCOR: Source code retrieval with semantics and order. In 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR), pages 1-12.

Martín Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein generative adversarial networks. In ICML.

Bin Bi, Chen Wu, Ming Yan, Wei Wang, Jiangnan Xia, and Chenliang Li. 2019. Incorporating external knowledge into machine reading for generative question answering. ArXiv, abs/1909.02745.

Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. Association for Computational Linguistics.

MK Chen, Xinyi Lin, Chen Wei, and Rui Yan. 2019. BoFGAN: Towards a new structure of backward-or-forward generative adversarial nets. In The World Wide Web Conference, pages 2652-2658. ACM.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS.

Zimin Chen and Martin Monperrus. 2019. A literature study of embeddings on source code. ArXiv, abs/1904.03061.

Milan Cvitkovic, Badal Singh, and Anima Anandkumar. 2019. Open vocabulary learning on source code with a graph-structured cache. In ICML.

Ameet Deshpande and Mitesh M. Khapra. 2019. Dissecting an adversarial framework for information retrieval.

Elizabeth Dinella, Hanjun Dai, Ziyang Li, Mayur Naik, Le Song, and Ke Wang. 2020. Hoppity: Learning graph transformations to detect and fix bugs in programs. In ICLR.

Ana Gonzalez, Isabelle Augenstein, and Anders Søgaard. 2018. A strong baseline for question relevancy ranking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4810-4815.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014a. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014b. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.

Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pages 933-944. IEEE.

Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 2019. Coupling retrieval and meta-learning for context-dependent semantic parsing. In ACL.

Sonia Haiduc, Gabriele Bavota, Andrian Marcus, Rocco Oliveto, Andrea De Lucia, and Tim Menzies. 2013. Automatic query reformulations for text retrieval in software engineering. In Proceedings of the 2013 International Conference on Software Engineering, pages 842-851. IEEE Press.

Tatsunori B. Hashimoto, Kelvin Guu, Yonatan Oren, and Percy Liang. 2018. A retrieve-and-edit framework for predicting structured outputs. ArXiv, abs/1812.01194.

Shirley Anugrah Hayati, Raphael Olivier, Pravalika Avvaru, Pengcheng Yin, Anthony Tomasic, and Graham Neubig. 2018. Retrieval-based neural code generation. In EMNLP.

Emily Hill, Manuel Roldan-Vega, Jerry Alan Fails, and Greg Mallet. 2014. NL-based query refinement and contextualized code search results: A user study. In 2014 Software Evolution Week - IEEE Conference on Software Maintenance, Reengineering, and Reverse Engineering (CSMR-WCRE), pages 34-43. IEEE.

Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen-tau Yih, and Xiaodong He. 2018. Natural language to structured query generation via meta-learning. In NAACL-HLT.

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073-2083.

Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422-446.

Dong-Jin Kim, Jinsoo Choi, Tae-Hyun Oh, and In So Kweon. 2019. Image captioning with very scarce supervised data: Adversarial semi-supervised learning approach. In EMNLP/IJCNLP.

Alexander LeClair and Collin McMillan. 2019. Recommendations for datasets for source code summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3931-3937, Minneapolis, Minnesota. Association for Computational Linguistics.

Dianqi Li, Qiuyuan Huang, Xiaodong He, Lei Zhang, and Ming-Ting Sun. 2018. Generating diverse and accurate visual captions by comparative adversarial learning. ArXiv, abs/1804.00861.

Shangsong Liang. 2019. Unsupervised semantic generative adversarial networks for expert retrieval. In The World Wide Web Conference, pages 1039-1050. ACM.

Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In ACL.

Meili Lu, Xiaobing Sun, Shaowei Wang, David Lo, and Yucong Duan. 2015. Query expansion via WordNet for effective code search. In 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER), pages 545-549. IEEE.

Jing Ma, Wei Gao, and Kam-Fai Wong. 2019. Detect rumors on Twitter by promoting information campaigns with generative adversarial learning. In The World Wide Web Conference, pages 3049-3055. ACM.

Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets. ArXiv, abs/1411.1784.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2015. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677.

Jong-Hoon Oh, Kazuma Kadowaki, Julien Kloetzer, Ryu Iida, and Kentaro Torisawa. 2019. Open-domain why-question answering with adversarial learning to encode answer texts. In ACL.

Sheena Panthaplackel, Milos Gligoric, Raymond J. Mooney, and Junyi Jessy Li. 2019. Associating natural language comment and source code entities. ArXiv, abs/1912.06728.

Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, and Raymond J. Mooney. 2020. Learning to update natural language comments based on code changes. ArXiv, abs/2004.12169.

Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249-2255, Austin, Texas. Association for Computational Linguistics.

Dae Hoon Park and Yi Chang. 2019. Adversarial sampling and training for semi-supervised information retrieval. In The World Wide Web Conference, pages 1443-1453. ACM.

Pengda Qin, Weiran Xu, and William Yang Wang. 2018. DSGAN: Generative adversarial training for distant supervision relation extraction. In ACL.

Alec Radford, Luke Metz, and Soumith Chintala. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.

Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.

Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. 2017. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7167-7176.

Venkatesh Vinayakarao, Anita Sarma, Rahul Purandare, Shuktika Jain, and Saumya Jain. 2017. ANNE: Improving source code search using entity retrieval approach. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 211-220. ACM.

Jun Wang, Lantao Yu, Weinan Zhang, Yu Gong, Yinghui Xu, Benyou Wang, Peng Zhang, and Dell Zhang. 2017. IRGAN: A minimax game for unifying generative and discriminative information retrieval models. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 515-524. ACM.

Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256.

Yi Wu, David Bamman, and Stuart J. Russell. 2017. Adversarial training for relation extraction. In EMNLP.

Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.

Xiao Yang, Madian Khabsa, Miaosen Wang, Wei Wang, Ahmed Hassan Awadallah, Daniel Kifer, and C. Lee Giles. 2019. Adversarial training for community question answer selection based on multi-scale matching. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 395-402.

Ziyu Yao, Jayavardhan Reddy Peddamail, and Huan Sun. 2019. CoaCor: Code annotation for code retrieval with reinforcement learning. In The World Wide Web Conference, pages 2203-2214. ACM.

Ziyu Yao, Daniel S. Weld, Wei-Peng Chen, and Huan Sun. 2018. StaQC: A systematically mined question-code dataset from Stack Overflow. In Proceedings of the 2018 World Wide Web Conference, pages 1693-1703. International World Wide Web Conferences Steering Committee.

Wei Ye, Rui Xie, Jinglei Zhang, Tianxiang Hu, Xiaoyin Wang, and Shikun Zhang. 2020. Leveraging code generation to improve code retrieval and summarization via dual learning. In Proceedings of The Web Conference 2020, pages 2309-2319.

Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. SeqGAN: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence.
adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a20c2f715c59e042066cc37cfc5b8a8f1f989f658e700ab5dcad1cdf6ccf8b4
size 171874
adversarialtrainingforcoderetrievalwithquestiondescriptionrelevanceregularization/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9a9760fbd3f843a0a202c844081197180d2ac44d72f127d892861fa601d842be
size 430909
airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85f7f398c3d03848b2853e85862f2670d8d5244608b7ad9fc54c7cd88fb71b81
size 84956
airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da1d2340ea5c1acbb583c335b8f131085ed844576381644a1ec32aef04d0dbc8
size 98343
airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/6aec2548-1f5d-46cb-8a5c-7be65a1cc701_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:adf8596a36c5a23cf11fbf44cb89496674b2c6947dc8b31c742bdc5adaa4a1b1
size 703601
airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/full.md
ADDED
@@ -0,0 +1,338 @@
# AirConcierge: Generating Task-Oriented Dialogue via Efficient Large-Scale Knowledge Retrieval
Chieh-Yang Chen† Pei-Hsin Wang† Shih-Chieh Chang† Da-Cheng Juan¶ Wei Wei¶ Jia-Yu Pan¶ †National Tsing-Hua University ¶Google Research {darius107062542, peihsin}@gapp.nthu.edu.tw scchang@cs.nthu.edu.tw {dacheng, wewei, jypan}@google.com
# Abstract
Despite the recent success of neural task-oriented dialogue systems, developing such a system for the real world involves accessing large-scale knowledge bases (KBs), which cannot simply be encoded by neural approaches such as memory network mechanisms. To alleviate the above problem, we propose AirConcierge, an end-to-end trainable text-to-SQL guided framework that learns a neural agent which interacts with KBs using generated SQL queries. Specifically, the neural agent first learns to ask about and confirm the customer's intent during the multi-turn interactions, and dynamically determines when to ground the user constraints into executable SQL queries so as to fetch relevant information from the KBs. With our method, the agent can use fewer but more accurate fetched results to generate useful responses efficiently, instead of incorporating the entire KBs. We evaluate the proposed method on the AirDialogue dataset, a large corpus released by Google containing conversations in which customers book flight tickets with an agent. The experimental results show that AirConcierge significantly improves over previous work in terms of accuracy and BLEU score, demonstrating not only the ability to achieve the given task but also the good quality of the generated dialogues.
# 1 Introduction
The task-oriented dialogue system (Young et al., 2013) is a rapidly growing field with many practical applications, attracting more and more research attention recently (Zhao and Eskenazi, 2016; Wen et al., 2016; Bordes et al., 2017; Dhingra et al., 2017; Eric and Manning, 2017; Liu and Lane, 2017). In order to assist users in solving a specific task while holding a conversation with a human, the agent needs to understand the user's intentions during the conversation and fulfill the request.

Figure 1: An example of a task-oriented dialogue that incorporates a knowledge base (KB), from the AirDialogue dataset. The agent grounds the user constraints into an executable SQL query at the turn annotated in red.
Such a process often involves interacting with external KBs to access task-related information. Figure 1 shows an example of a task-oriented dialogue between a user and an airline ticket reservation agent.
Traditional dialogue systems (Kim et al., 2008; Deoras and Sarikaya, 2013) may rely on predefined slot-filling pairs, where a set of slots needs to be filled during the conversation. In addition, some works (Sukhbaatar et al., 2015; Madotto et al., 2018; Wu et al., 2019) have considered integrating KBs into a task-oriented dialogue system to generate suitable responses and have achieved promising performance. However, these methods are either limited by predefined configurations or do not scale to large KBs. Since real-world KBs typically contain millions of records, end-to-end dialogue systems are not able to incorporate external KBs effectively, leading to unstable dialogue responses.
Moreover, very little research has attempted to explore how to cooperate with KBs efficiently, or has taken resource consumption, such as FLOPs or memory space, into consideration when designing the model.
In order to solve the issues mentioned above, we propose AirConcierge, an SQL-guided task-oriented dialogue system that can work efficiently with real-world, large-scale KBs by formulating SQL queries based on the context of the dialogue so as to retrieve relevant information from the KBs.
We evaluate and demonstrate AirConcierge on AirDialogue (Wei et al., 2018), a large-scale airline reservation dataset published recently. AirDialogue has high contextual complexity, creating both the opportunity and the necessity of forming diverse task-oriented conversations. Our experiments show that AirConcierge achieves improvements in accuracy and resource usage compared to previous work.
# 2 Related Work
# 2.1 Task-oriented Dialogue System
Traditional task-oriented dialogue systems are usually built as complex modular pipelines (Rudnicky et al., 1999; Zue, 2000; Zue et al., 2000). Each module is trained individually and then pipelined for testing, so errors from earlier modules may propagate to downstream modules. Therefore, several joint learning (Yang et al., 2017) and end-to-end reinforcement learning (RL) frameworks (Zhao and Eskenazi, 2016) have been proposed to jointly train the NLU and dialogue manager, using specifically collected supervised labels or user utterances, to mitigate the above problems. Other end-to-end trainable dialogue systems (Wen et al., 2016; Li et al., 2017) have also been proposed and achieved good performance using supervised learning or RL. Compared to a pure end-to-end system, intermediate labels are still added to the model to train the NLU and DST.
Existing pipeline methods for task-oriented dialogue systems still suffer from structural complexity and fragility. For example, the NLU typically detects the dialogue domain by parsing user utterances, then classifies user intentions and fills a set of slots to form domain-specific semantic frames. These models may rely heavily on manual feature engineering, which makes them laborious and time-consuming to build and difficult to adapt to new domains. Therefore, more and more research (Manning and Eric, 2017; Sukhbaatar et al., 2015; Dodge et al., 2016; Serban et al., 2016; Bordes et al., 2017; Eric and Manning, 2017) has been
dedicated to building end-to-end dialogue systems, in which all components are trained entirely from the utterances themselves, without the need to assume domains or a dialogue state structure, so the system can easily and automatically extend to new domains, free from manually designed pipeline modules. For example, Bordes et al. (2017) treated dialogue system learning as the problem of learning a mapping from dialogue histories to system responses.
What the pipeline and end-to-end methods have in common is that they both need to acquire knowledge from the knowledge base to produce more contentful responses. For instance, Eric and Manning (2017) represent each entry as several key-value tuples and attend over each key to extract useful information from a KB in an end-to-end fashion; KB-InfoBot (Dhingra et al., 2017) directly models posterior distributions over KBs according to the user input and a prior distribution; and GLMP (Wu et al., 2019) uses a global-to-local memory network (Weston et al., 2014; Sukhbaatar et al., 2015) to encode KBs and query them in a continuous, neural manner. However, as KBs continue to grow in real-world scenarios, such end-to-end methods of directly encoding and integrating whole KBs will eventually result in inefficiency and incorrect responses.
On the other hand, some works pass the user utterances through a semantic parser to obtain executable logical forms and apply the resulting symbolic query to the KB to retrieve entries based on their attributes. A common practice for generating queries is to record the slot values that appear in each dialogue turn. For instance, Lei et al. (2018) design text spans named belief spans to track dialogue beliefs and record informable and requestable slots<sup>1</sup>, then convert them into a query with human effort. Additionally, Bordes et al. (2017) generate API calls from predefined candidates. Such pipeline methods can interact and cooperate with the knowledge base efficiently by issuing API calls such as SQL-like queries. However, such symbolic operations break the differentiability of the system and prevent end-to-end training of neural dialogue agents.
In particular, it is unclear whether end-to-end models can completely replace, and perform better than, pipeline methods in a task-directed setting. In comparison, our end-to-end trainable text-to-SQL guided framework balances the strengths and weaknesses of the two lines of research.
We first introduce the natural-language-to-SQL concept into task-oriented systems, mapping dialogue histories and the table schema to a SQL query, and choose instead to rely on learned neural representations for implicit modeling of user intent and the current state. Moreover, we provide more efficient labeling by generating a query only at the appropriate time based on current state representations, instead of recording slot values at each time step. In this way, we need no predefined slot-value pairs or domain ontology; we simply take dialogue histories and the table schema as input and output synthesized SQL queries. We then use a memory network to encode the results retrieved from the KBs. Thus, we can access KBs more efficiently and achieve a high task success rate.
# 2.2 Semantic Parsing in SQL
Another related line of research is text-to-SQL, a sub-task of semantic parsing that aims at synthesizing SQL queries from natural language. A widely adopted dataset is WikiSQL (Zhong et al., 2017), where the goal is to generate a corresponding SQL query given a natural language question and a table schema (Xu et al., 2018; Yu et al., 2018a; McCann et al., 2018; Hwang et al., 2019). Furthermore, cross-domain semantic parsing in text-to-SQL has been investigated (Yu et al., 2019b, 2018b, 2019a). In comparison, the SQL generator in our model is a task-oriented dialogue-to-SQL generator, which aims to help users accomplish a specific task and dynamically determines whether to ground the dialogue context into an executable SQL query.
# 3 The Proposed Framework
Our design of the AirConcierge system addresses the following challenges in developing an effective task-oriented dialogue system, including
- When should the system access the KBs to obtain task-relevant information during a conversation?
- How does the system formulate a query that retrieves task-relevant data from the KBs?
# 3.1 System Architecture of AirConcierge
AirConcierge is a task-oriented dialogue system for flight reservations and therefore depends on
flight information in large external KBs to fulfill user requests. Unlike previous work that directly encodes the entire KBs, AirConcierge issues API calls to the KBs at the appropriate time to retrieve the information relevant to the task. Besides, during the dialogue with a user, AirConcierge actively prompts and guides the user for key information, and responds with informative and human-comprehensible sentences based on the results retrieved from the KBs. In particular, the "dialogue-to-SQL-to-dialogue" approach we implement in AirConcierge allows it to integrate with large-scale, real-world KBs.
Figure 2 shows the system architecture of AirConcierge. During a dialogue with a user, AirConcierge processes the dialogue lines as follows. Each new line of the dialogue serves as an input to the Dialogue Encoder, which encodes the conversation history. The hidden states of the Dialogue Encoder are then used by the Dialogue State Tracker to determine the phase of the dialogue (e.g., the greeting phase or the problem-solving phase). If the system determines that enough information about the user's request has been collected, the SQL Generator generates a SQL query, according to the context of the dialogue so far, to retrieve information from the KBs. Next, the retrieved results are encoded and stored in a Memory Network. With the encoded dialogue and the memory readout, a context-aware Dialogue Decoder generates a corresponding response. In addition to the process described above, there is a Dialogue Goal Generator that predicts the final status of the full dialogue, given the entire conversation history, to measure the agent's performance.
# 3.2 Dialogue Encoder
We implement the Dialogue Encoder using an RNN with gated recurrent units (GRU) (Chung et al., 2014). Given a sequence of the conversation history $X = \{x_{1}, x_{2}, \dots, x_{t}\}$, a word embedding matrix $W_{emb}$ embeds each token $x_{t}$. A GRU then models the sequence of tokens by taking the embedded token $W_{emb}(x_{t-1})$ and the hidden state $h_{t-1}^{e}$ from time step $t-1$ as inputs at time step $t$:
$$
h_{t}^{e} = \mathrm{GRU}\left(W_{emb}\left(x_{t-1}\right), h_{t-1}^{e}\right) \tag{1}
$$
The whole dialogue history is encoded into the hidden states $H = (h_1^e, \ldots, h_T^e)$ , where $T$ is the total number of time steps.
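A minimal PyTorch sketch of this encoder follows. It is an illustration of Eq. (1), not the authors' released code; the 256-dimensional sizes simply mirror Section 4.2, and all names are our own.

```python
import torch.nn as nn

class DialogueEncoder(nn.Module):
    # GRU over the embedded dialogue history, as in Eq. (1).
    def __init__(self, vocab_size, d_emb=256, d_enc=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)         # W_emb
        self.gru = nn.GRU(d_emb, d_enc, batch_first=True)

    def forward(self, tokens):
        # tokens: (batch, T) token ids of the conversation history
        H, h_last = self.gru(self.emb(tokens))
        # H: (batch, T, d_enc) = (h_1^e, ..., h_T^e); h_last: h_T^e
        return H, h_last.squeeze(0)
```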
Figure 2: An overview of the system architecture of AirConcierge.
# 3.3 Dialogue State Tracker (Information Gate Module)
In order to determine whether a dialogue has reached a state where the system has received enough initial information about a user's need and has transitioned from the "greeting state" into the "problem-solving state", we design a Dialogue State Tracker to model this transition of states. This module is introduced by AirConcierge to determine when to retrieve and incorporate data from the KBs into the dialogue, so we also refer to it as an "information gate". The Dialogue State Tracker takes the information about the schema of the KBs as an input to the model. Intuitively, by matching the information in the dialogue history with the available columns in the KBs, a better decision can be made about whether it is the right time to start querying the KBs. This module takes the last hidden state $h_T^e$ from the Dialogue Encoder and outputs a binary value $s \in \{0,1\}$ indicating whether the current information is sufficient to generate a query. Let $P(s)$ denote the probability that the agent would send a query:
$$
P\left(s \mid h_{T}^{e}, x_{1:J}^{col}\right) = \sigma\left(W_{2}^{s}\left(W_{1}^{s} h_{T}^{e} + \sum U_{2} W_{emb}\left(x_{1:J}^{col}\right)\right)\right), \tag{2}
$$
where $x_{1:J}^{col}$ denotes the tokens of the $J$ column names; $W_{emb}$ is the word embedding matrix as in Equation (1); $U_{2} \in \mathbb{R}^{d_{enc} \times d_{enc}}$ is a bidirectional LSTM; $W_{1}^{s}$ and $W_{2}^{s}$ are fully-connected layers with size $d_{enc} \times d_{enc}$ ; and $\sigma$ is the sigmoid function. Note that we denote $U_{2} W_{emb}(x_{1:J}^{col})$ as $h^{col}$ in Figure 2.
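As a concrete illustration, the gate can be sketched as below in PyTorch. We assume, for simplicity, that the column encodings $h^{col}$ are precomputed and that $W_2^s$ projects to a single logit; this is our reading of Eq. (2), not the authors' implementation.

```python
import torch
import torch.nn as nn

class InformationGate(nn.Module):
    # Dialogue State Tracker gate, Eq. (2): P(s) that a SQL query should be issued.
    def __init__(self, d_enc=256):
        super().__init__()
        self.W1 = nn.Linear(d_enc, d_enc, bias=False)  # W_1^s
        self.W2 = nn.Linear(d_enc, 1, bias=False)      # W_2^s, projected to one logit

    def forward(self, h_T, h_col):
        # h_T: (batch, d_enc) last encoder state; h_col: (batch, J, d_enc) column encodings
        logit = self.W2(self.W1(h_T) + h_col.sum(dim=1))  # sum over the J columns
        return torch.sigmoid(logit)                       # P(s)
```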
# 3.4 SQL Generator
In order to enable AirConcierge to handle large-scale KBs, we devise a SQL Generator and deploy it in AirConcierge. If the state $s$ from the Dialogue State Tracker indicates the "problem-solving state", AirConcierge activates the SQL Generator and generates a SQL query to access the KBs. A SQL query is of the form SELECT * FROM KBs WHERE $COL $OP $VALUE (AND $COL $OP $VALUE)*, where $COL is a column name. Here we focus on predicting the constraints in the WHERE clause.
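For concreteness, for a user who asks for a ticket from AUS to DTW departing 3/19 and returning 3/21 (as in the appendix samples), the grounded query would look something like `SELECT * FROM KBs WHERE departure_airport = 'AUS' AND return_airport = 'DTW' AND departure_month = 3 AND return_month = 3 AND departure_day = 19 AND return_day = 21`; the column names here are illustrative rather than the dataset's exact schema.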
To predict the column $COL, we follow the sequence-to-set idea from SQLNet (Xu et al., 2018). That is, given the encoded column names $\{h_j^{col}\}_{j = 1\dots J}$ and the last encoding of the dialogue history $h_T^e$ , the model computes the probability $P_{col}(x_j^{col})$ of column $j$ appearing in the SQL query:
$$
P_{col}\left(x_{j}^{col} \mid h_{j}^{col}, h_{T}^{e}\right) = \sigma\left(W_{1}^{col} h_{j}^{col} + W_{2}^{col} h_{T}^{e}\right) \tag{3}
$$
The $OP slots are predicted using a similar architecture:
$$
P_{op}\left(x_{j}^{op} \mid h_{j}^{col}, h_{T}^{e}\right) = \sigma\left(W_{1}^{op} h_{j}^{col} + W_{2}^{op} h_{T}^{e}\right) \tag{4}
$$
As for predicting the $VALUE slot for a particular $COL, we model it as a classification problem. Let $v_{i}^{j}$ be the $i$ -th value of the $j$ -th column. The predicted probability of the value $v_{i}^{j}$ is:
$$
P_{value}\left(v_{i}^{j} \mid h_{j}^{col}, h_{T}^{e}\right) = \mathrm{Softmax}\left(W_{1}^{val}\left(W_{2}^{val} h_{T}^{e} + W_{3}^{val} h_{j}^{col}\right)\right) \tag{5}
$$
where all $W_{1,2}^{col}$ , $W_{1,2}^{op}$ and $W_{1,2,3}^{val}$ are trainable matrices of size $d_{enc} \times d_{enc}$ .
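A PyTorch sketch of the three heads is given below. The output sizes (one logit per column, `n_ops` operator classes, `n_vals` candidate values per column) are our reading of Eqs. (3)-(5); the names and defaults are illustrative.

```python
import torch
import torch.nn as nn

class SQLGenerator(nn.Module):
    # Sequence-to-set slot predictors for the WHERE clause, Eqs. (3)-(5).
    def __init__(self, d_enc=256, n_ops=4, n_vals=100):
        super().__init__()
        self.W1_col = nn.Linear(d_enc, 1, bias=False)       # W_1^col
        self.W2_col = nn.Linear(d_enc, 1, bias=False)       # W_2^col
        self.W1_op = nn.Linear(d_enc, n_ops, bias=False)    # W_1^op
        self.W2_op = nn.Linear(d_enc, n_ops, bias=False)    # W_2^op
        self.W1_val = nn.Linear(d_enc, n_vals, bias=False)  # W_1^val
        self.W2_val = nn.Linear(d_enc, d_enc, bias=False)   # W_2^val
        self.W3_val = nn.Linear(d_enc, d_enc, bias=False)   # W_3^val

    def forward(self, h_col, h_T):
        # h_col: (batch, J, d_enc) column encodings; h_T: (batch, d_enc)
        h_T = h_T.unsqueeze(1)                                        # broadcast over J
        p_col = torch.sigmoid(self.W1_col(h_col) + self.W2_col(h_T))  # Eq. (3)
        p_op = torch.sigmoid(self.W1_op(h_col) + self.W2_op(h_T))     # Eq. (4)
        p_val = torch.softmax(
            self.W1_val(self.W2_val(h_T) + self.W3_val(h_col)), -1)   # Eq. (5)
        return p_col.squeeze(-1), p_op, p_val
```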
# 3.5 Knowledge Base Memory Encoder
We encode the retrieved data from the KBs with a memory network mechanism. Unlike previous work (Wei et al., 2018), which applies a hierarchical RNN to encode the entire KBs directly, we only model the retrieved results from the KBs. Thanks to the SQL Generator module, which filters out most of the irrelevant data in the KBs, AirConcierge does not need to encode the entire KBs and can focus on the small set of relevant data records.
Let the data records of flights retrieved from the KBs be $\{f_1,\dots,f_F\}$ , each flight containing 12 column attributes and one additional "flight number" column attribute. These records are converted into memory vectors $\{m_1,\dots,m_F\}$ using a set of trainable embedding matrices $C = \{C^1,\dots,C^{K + 1}\}$ , where $C^k\in \mathbb{R}^{|V|\times d_{emb}}$ and $K$ is the number of hops. Note that we add an additional empty flight vector $m_{empty}$ to represent the case where no flight in the KBs meets the customer's intent.
An initial query vector $q^0$ is defined to be the output of the dialogue encoder, $h_T^e$ . The query vector is then passed through a few "hops" where, at each hop $k$ , the query vector $q^k$ is used to compute attention weights with respect to each memory vector $m_i$ :
$$
p_{i}^{k} = \mathrm{Softmax}\left(\left(q^{k}\right)^{\top} c_{i}^{k}\right) \tag{6}
$$
where $c_i^k = B(C^k (f_i))$ is the embedding vector at the $i^{th}$ memory position, and $B(\cdot)$ is a bag-of-words function. Here, $p_i^k$ decides which ticket has higher relevance to the customer's intent. The memory readout $o^k$ is then the sum of the embeddings $c_i^{k+1}$ weighted by $p_i^k$ :
$$
o^{k} = \sum_{i=1}^{F} p_{i}^{k} c_{i}^{k+1} \tag{7}
$$
To continue to the next hop, the query vector is updated by $q^{k + 1} = q^k + o^k$ .
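Putting Eqs. (6)-(7) together with the final pointer (Eq. (8) below), the hop loop reduces to a few lines of tensor code. The per-hop memory embeddings `c[k]` are assumed to be precomputed from the retrieved flights; this is a sketch of our reading of the equations, not the released implementation.

```python
import torch

def memory_hops(q0, c, K=3):
    # q0: (batch, d) initial query q^0 = h_T^e
    # c:  list of K + 1 tensors, each (batch, F, d): hop-wise memory embeddings c^k
    q = q0
    for k in range(K):
        p = torch.softmax(torch.einsum('bd,bfd->bf', q, c[k]), dim=-1)  # Eq. (6)
        o = torch.einsum('bf,bfd->bd', p, c[k + 1])                     # Eq. (7)
        q = q + o                                                       # q^{k+1} = q^k + o^k
    g = torch.softmax(torch.einsum('bd,bfd->bf', q, c[K]), dim=-1)      # pointer g^K, Eq. (8)
    return q, g
```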
We use the pointer $G = (g_{1},\ldots ,g_{F})$ to pick the most relevant ticket and to filter out unimportant or unqualified tickets, where $K$ denotes the last hop:
$$
g_{i}^{K} = \mathrm{Softmax}\left(\left(q^{K}\right)^{\top} c_{i}^{K}\right) \tag{8}
$$
# 3.6 Dialogue Decoder
We adopt a GRU model as the Dialogue Decoder to generate the agent's response. At each time step, the Dialogue Decoder generates a token based on the encoded dialogue $h_T^e$ and flight ticket information $g_i^K$ , by calculating a probability over all tokens:
$$
\begin{array}{l} h_{t}^{d} = \mathrm{GRU}\left(W_{emb}\left(\hat{y}_{t-1}\right), h_{t-1}^{d}\right), \\ P(\hat{y}_{t}) = \mathrm{Softmax}\left(W_{dec} h_{t}^{d}\right) \end{array} \tag{9}
$$
where $W_{dec} \in \mathbb{R}^{d_{enc} \times |V|}$ is a trainable matrix, $h_0$ is initialized as a concatenation of $q^K$ and $h_T^e$ , and $\hat{y}_t$ is the output token at time step $t$ .
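Because inference uses a greedy strategy (Section 4.2), decoding just repeatedly takes the argmax of Eq. (9). A sketch, assuming a `GRUCell`-style decoder and illustrative argument names:

```python
import torch

def greedy_decode(gru_cell, W_emb, W_dec, h0, sos_id, eos_id, max_len=60):
    # gru_cell: nn.GRUCell(d_emb, d_enc); W_emb: nn.Embedding; W_dec: nn.Linear(d_enc, |V|)
    h = h0                           # decoder state initialized from [q^K; h_T^e]
    y = torch.tensor([sos_id])
    tokens = []
    for _ in range(max_len):
        h = gru_cell(W_emb(y), h)    # h_t^d in Eq. (9)
        y = W_dec(h).argmax(dim=-1)  # greedy choice of the next token
        if y.item() == eos_id:
            break
        tokens.append(y.item())
    return tokens
```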
# 3.7 Dialogue Goal Generator
As stated in AirDialogue (Wei et al., 2018), three final dialogue goals $s_a, s_n, s_f$ are generated by the agent to examine the correctness at the end of a conversation. $s_n$ represents the name of the customer. The flight state $s_f$ is the flight number selected from the $F$ flights in the KBs. The action $s_a$ accomplished at the end of a dialogue can be one of the following five choices: "booked", "changed", "no flight found", "no reservation", and "cancel". We feed $h_T^e$ into three fully-connected layers, $W_i^{goal}$ , to predict the three goals $(i \in \{\mathrm{n},\mathrm{f},\mathrm{a}\})$ , respectively:
$$
P(s_{i}) = W_{i}^{goal} h_{T}^{e}. \tag{10}
$$
# 3.8 Objective Function
In order to train the dialogue system in an end-to-end fashion, loss functions are defined for the above modules. The loss for the Dialogue State Tracker, $\mathcal{L}_{gate}$ , is the binary cross entropy (BCE). The loss for the SQL Generator consists of three parts: $\mathcal{L}_{SQL} = \mathcal{L}_{col} + \mathcal{L}_{op} + \mathcal{L}_{value}$ . The loss for the $COL slots, $\mathcal{L}_{col}$ , is the BCE, and the loss for both the $OP and $VALUE slots is the cross entropy (CE). For the KB Memory Encoder, we use the CE: $\mathcal{L}_{mem} = -\sum_{i=1}^{N} \sum_{j=1}^{F} y_{ij} \log(g_{ij}^{K})$ , where $g_{ij}^{K}$ is the pointer, $N$ is the number of samples, and $F$ is the number of flights retrieved from the KBs. For the Dialogue Goal Generator, the CE is used for all three states, that is, $\mathcal{L}_{goal} = \mathcal{L}_{name} + \mathcal{L}_{flight} + \mathcal{L}_{action}$ .
The overall loss function is formed by summing up the losses of all modules:
$$
\mathcal{L} = \mathcal{L}_{gate} + \mathcal{L}_{SQL} + \mathcal{L}_{mem} + \mathcal{L}_{goal} \tag{11}
$$
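In code, Eq. (11) is just a sum of standard PyTorch losses. In the sketch below, `out` holds the module outputs (probabilities for the BCE terms, logits for the CE terms) and `tgt` the labels; the dictionary keys are illustrative.

```python
import torch.nn.functional as F

def total_loss(out, tgt):
    l_gate = F.binary_cross_entropy(out['gate'], tgt['gate'])   # Dialogue State Tracker
    l_sql = (F.binary_cross_entropy(out['col'], tgt['col'])     # $COL slots (BCE)
             + F.cross_entropy(out['op'], tgt['op'])            # $OP slots (CE)
             + F.cross_entropy(out['value'], tgt['value']))     # $VALUE slots (CE)
    l_mem = F.cross_entropy(out['pointer'], tgt['flight'])      # KB memory pointer
    l_goal = (F.cross_entropy(out['name'], tgt['name'])         # Dialogue Goal Generator
              + F.cross_entropy(out['flight'], tgt['flight'])
              + F.cross_entropy(out['action'], tgt['action']))
    return l_gate + l_sql + l_mem + l_goal                      # Eq. (11)
```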
# 4 Experiments
# 4.1 Dataset
AirDialogue Dataset We evaluate the proposed framework on the AirDialogue dataset, a large-scale task-oriented dialogue dataset released by Google. The dataset contains 402,038 conversations, with an average length of 115. For data pre-processing, we follow the steps in the original paper (Wei et al., 2018) and their official code<sup>2</sup>.
Labels for State Tracker Since the original AirDialogue dataset lacks the labels for learning the Dialogue State Tracker, we devise a method to annotate each dialogue turn with a "ground-truth" state label. We define two dialogue states: at the beginning of a dialogue, while the customer expresses travel constraints and the agent asks for information, we define this as the "greeting state" of the dialogue. Once the agent receives adequate information from the user and decides to send a query, we define that the dialogue enters the "problem-solving state" and remains in this state afterward.
We use a rule-based model to annotate. For most dialogues, the first turn of the "problem-solving state" is where the flight number is mentioned. Based on this observation, we label the turn where the flight number first occurs as the starting point of the "problem-solving state". As for the dialogues that either issue multiple SQL queries or never mention the flight number, we apply a set of keywords to mark the problem-solving state.
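A sketch of this heuristic is shown below. The flight-number pattern follows the `<fl_1020>`-style tokens visible in the appendix samples, and the keyword list is purely illustrative; the paper's exact keyword set is not specified here.

```python
import re

def label_turns(turns, keywords=('reservation', 'proceed', 'booked')):
    # 0 = greeting state, 1 = problem-solving state (from the first flight-number
    # mention onward, with a keyword fallback for dialogues that never mention one).
    flight_pat = re.compile(r'<fl_\d+>|flight number')
    start = next((i for i, turn in enumerate(turns)
                  if flight_pat.search(turn)
                  or any(kw in turn.lower() for kw in keywords)), len(turns))
    return [0 if i < start else 1 for i in range(len(turns))]
```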
Labels for SQL Generator In the original AirDialogue dataset, each dialogue is accompanied by an intention indicating the customer's travel constraints. We construct the "ground-truth query" for each dialogue based on this intention.
# 4.2 Training Details
We conduct experiments using a single 2080 Ti GPU and the PyTorch (Paszke et al., 2017) environment. We use Adam (Kingma and Ba, 2015) to optimize the model parameters with a learning rate of $10^{-3}$ and a batch size of 32. The word embedding size and the GRU hidden dimension are 256. The number of hops $K$ of the memory encoder is set to 3. For the Dialogue Decoder, a greedy strategy is used instead of beam search. The accelerated training technique used in Wei et al. (2018) is also adopted in our model. The models are trained for 5 epochs, roughly equal to 44,000 steps.
# 4.3 Evaluation
There are two important perspectives on the model: the quality of the dialogue and the correctness of the exact information. In order to properly evaluate these two, we use the BLEU score to evaluate the dialogues and use accuracy to evaluate the dialogue goals and SQL queries. While providing a human-like interaction with the customers is important, it is even more critical to guarantee that all of the provided information is correct.
Figure 3: Inference time under different numbers of KB records on the AirDialogue dev set. "1x." denotes 30 records in the KBs, "10x." is 300 records, and so on.
Figure 4: Memory consumption under different amounts of KB data on the AirDialogue dev set. "1x." denotes 30 records in the KBs, "10x." is 300 records, and so on.
For example, the agent might reply "We have found a flight number 1011 which meets your need. Should I book it?". Suppose the actual correct flight number is 1012: this sentence may still receive a high BLEU score even though the provided information is misleading. Such an error further reveals the importance of the accuracy of the Dialogue Goal Generator.
As for the correctness of the provided information, we evaluate the performance by SQL accuracy and state accuracy. The SQL accuracy is critical in filtering and accessing data from the KBs.
User simulator For self-play evaluation, we build a simulator to model a user's utterances. The simulator generates a response based on three things: a list of travel constraints, the user's intent ({"book", "change", "cancel"}), and the dialogue history. Similar to the previous work, we adopt a sequence-to-sequence model to build the simulator.
<table><tr><td>Model</td><td>Name Acc.</td><td>Flight Acc.</td><td>State Acc.</td><td>BLEU</td></tr><tr><td>Supervised (2018) (AirDialogue dev)</td><td>0.9 %</td><td>1.2%</td><td>12%</td><td>23.26</td></tr><tr><td>RL (2018) (AirDialogue dev)</td><td>1%</td><td>4%</td><td>29%</td><td>19.65</td></tr><tr><td>AirConcierge (AirDialogue dev)</td><td>100%</td><td>72.2%</td><td>90.0%</td><td>32.59</td></tr><tr><td>Supervised (2018) (Synthesized dev)</td><td>0%</td><td>8%</td><td>32%</td><td>68.72</td></tr><tr><td>RL (2018) (Synthesized dev)</td><td>0%</td><td>35%</td><td>39%</td><td>62.71</td></tr><tr><td>AirConcierge (Synthesized dev)</td><td>100%</td><td>58.9%</td><td>86.0%</td><td>73.51</td></tr><tr><td>Human (AirDialogue test)</td><td>98%</td><td>91.4%</td><td>91.8%</td><td>-</td></tr></table>
Table 1: Dialogue performance under self-play evaluation. The agent model is the model in the first column, while the customer is the user simulator described in Section 4.3. The supervised model and the reinforcement learning (RL) model are the baseline models reported in the original AirDialogue paper.
SQL evaluation We use logical-form accuracy $(Acc_{lf})$ and execution accuracy $(Acc_{ex})$ (Zhong et al., 2017) to measure the SQL quality. For $Acc_{lf}$ , we directly compare the generated SQL query with the ground truth to check whether they match. For $Acc_{ex}$ , we execute both the generated query and the ground truth and compare whether the retrieved results match. We also evaluate the accuracy of the three components ($COL, $OP, and $VALUE) of a WHERE condition: $Acc_{col}$ , $Acc_{op}$ , and $Acc_{val}$ , respectively. For each dialogue, we evaluate only the SQL query at the turn when the "problem-solving state" first occurs.
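Execution accuracy, for instance, can be computed in a few lines; `db.execute` below is an assumed helper that runs a query against the KBs and returns the set of matching flight records.

```python
def execution_accuracy(pred_queries, gold_queries, db):
    # Acc_ex: a prediction counts as correct when both queries retrieve the same records.
    correct = sum(db.execute(pred) == db.execute(gold)
                  for pred, gold in zip(pred_queries, gold_queries))
    return correct / len(gold_queries)
```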
# 4.4 Experimental Results: Accuracy
In Table 1, we compare the performance of AirConcierge with the baselines in the AirDialogue paper. On generating a response that matches the ground-truth dialogue line, AirConcierge improves the BLEU score by 9.33 and 4.79 on the dev set and the synthesized set, respectively. In the self-play evaluation, AirConcierge achieves significant improvements on Name Acc., Flight Acc., and State Acc. We attribute the high accuracy to the correctness of the SQL queries: since the data retrieved from the KBs is correctly filtered, it helps the agent make better predictions.
Besides the model's overall performance in accomplishing a user's task, we are interested in the accuracy of the SQL queries generated by AirConcierge based on the dialogue context. In this evaluation, we consider two cases: the accuracy on the 6 essential attributes (departure airport, return airport, departure month, return month, departure day, and return day), and the accuracy on all 12 attributes. The 6 essential attributes are the ones essential for identifying a ticket and therefore appear in nearly all dialogue samples.
Table 2 shows the model's accuracy in generating SQL queries. The model achieves outstanding accuracy in predicting the column-name slots, the operator slots, and the value slots. The metric $Acc_{lf}$ evaluates whether two queries are exactly the same, so its value is typically smaller than $Acc_{col}$ , $Acc_{op}$ , or $Acc_{val}$ , especially when more conditions are considered. This can be observed in the table, where the accuracy $Acc_{lf}$ under 12 conditions is much smaller than that under only 6 essential conditions.
Furthermore, we break down the performance of the overall SQL queries into each $VALUE slot, with the results presented in Table 3. AirConcierge achieves high accuracy in predicting the values of the 6 essential conditions, but does not perform as well on the other 6 conditions (departure time, return time, class, price, connections, and airline). This may be because the 6 essential conditions are provided in nearly all dialogues, while the other conditions are only provided occasionally. Having less data about the other conditions makes it harder for the model to learn them.
# 4.5 Experimental Results: Scalability
An important contribution of AirConcierge is its efficiency in cooperating with KBs. By employing the SQL Generator, AirConcierge increases the model's ability to handle large-scale KBs. In Figure 3, we show the model's inference time with respect to the number of data records in the KBs. The "1x." on the x-axis corresponds to having 30 data records in the KBs, "10x." corresponds to 300 entries, and so on.
<table><tr><td>Experiment</td><td>Acccol</td><td>Accop</td><td>Accval</td><td>Acclf</td><td>Accex</td></tr><tr><td>AirConcierge†</td><td>98.96%</td><td>99.7%</td><td>97.9%</td><td>95.54%</td><td>96.44%</td></tr><tr><td>AirConcierge‡</td><td>97.24%</td><td>98.6%</td><td>61.4%</td><td>28.11%</td><td>86.28%</td></tr></table>
Table 2: Performance on the AirDialogue dataset. $\dagger$ indicates considering only the 6 essential conditions, namely departure city, return city, departure month, return month, departure day, and return day. $\ddagger$ means considering all 12 conditions. The models of $\dagger$ and $\ddagger$ are the same. We report the average accuracy.
<table><tr><td>Experiment</td><td>dep. city</td><td>ret. city</td><td>dep. month</td><td>ret. month</td><td>dep. day</td><td>ret. day</td></tr><tr><td>AirConcierge</td><td>98.89%</td><td>97.93%</td><td>97.52%</td><td>97.49%</td><td>97.27%</td><td>97.29%</td></tr><tr><td>Experiment</td><td>dep. time</td><td>ret. time</td><td>class</td><td>price</td><td>connections</td><td>airline</td></tr><tr><td>AirConcierge</td><td>49.60%</td><td>52.46%</td><td>42.74%</td><td>37.60%</td><td>95.36%</td><td>42.12%</td></tr></table>
Table 3: Performance on each $VALUE slot generated in the query.
As shown in the figure, the inference time of AirConcierge remains short as the KBs grow larger. On the contrary, the baseline model, AirDialogue, requires noticeably more inference time: when the KBs are 70 times larger, AirDialogue takes 5 times longer to complete the dialogue. We also compare the memory consumption of AirConcierge with that of AirDialogue. Figure 4 shows that AirConcierge consumes a constant amount of memory regardless of the KB size, while AirDialogue requires more memory as the KB size grows. This indicates that AirConcierge is scalable in terms of memory consumption as well.
We inflate the size of the KBs by adding augmented data records. To generate a variant data record, we choose an existing ground-truth record and modify the values of some of its columns. Each modified column value is sampled from a prior distribution defined for that column. We experiment with different numbers of columns to modify. For an augmentation where the last $i$ columns are subject to variations, we denote the augmentation as "#Augment-column-$i$".
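A sketch of this augmentation procedure follows; `priors` is an assumed mapping from each column name to a sampling function implementing the distributions of Table 4, and the column ordering is illustrative.

```python
import copy
import random

def augment_records(records, priors, i, n_new):
    # "#Augment-column-i": copy a ground-truth record and resample its last i columns.
    columns = list(priors)
    augmented = []
    for _ in range(n_new):
        record = copy.deepcopy(random.choice(records))
        for col in columns[-i:]:       # only the last i columns are subject to variation
            record[col] = priors[col]()
        augmented.append(record)
    return augmented
```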
Intuitively, the more columns are subject to variations, the more diverse the records are. Therefore, fewer records will match the query when more columns are subject to variations. This is shown in Figure 5. When more records are added to the KBs, for an augmentation that has more variant columns (e.g., #Augment-column-10), the number of records returned for a SQL query grows more slowly than for an augmentation with fewer variant columns (e.g., #Augment-column-6). This also illustrates the importance of having a high-quality SQL Generator, since generating precise SQL queries can effectively cut down the data records to be considered.
Figure 5: Number of records returned from differently augmented KBs using SQL queries generated by our model.
# 5 Conclusions
We propose AirConcierge, a task-oriented dialogue system that achieves high accuracy in accomplishing users' tasks. By employing a subsystem consisting of a Dialogue State Tracker and a SQL Generator, AirConcierge can issue a precise SQL query at the right time during a dialogue and retrieve relevant data from the KBs. As a result, AirConcierge can handle large-scale KBs efficiently, in terms of both shorter processing time and lower memory consumption. Using precise SQL queries also filters out noise and irrelevant data from the KBs, which improves the quality of the dialogue responses. Our experiments demonstrate the better performance and efficiency of AirConcierge over previous work.
# References
Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In ICLR.

Junyoung Chung, Caglar Gülcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. ArXiv, abs/1412.3555.

Anoop Deoras and Ruhi Sarikaya. 2013. Deep belief network based semantic taggers for spoken language understanding. In INTERSPEECH.

Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In ACL.

Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander H. Miller, Arthur Szlam, and Jason Weston. 2016. Evaluating prerequisite qualities for learning end-to-end dialog systems. CoRR, abs/1511.06931.

Mihail Eric and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In SIGDIAL.

Wonseok Hwang, Jinyeung Yim, Seunghyun Park, and Minjoon Seo. 2019. A comprehensive exploration on WikiSQL with table-aware word contextualization. arXiv preprint arXiv:1902.01069.

Kyungduk Kim, Cheongjae Lee, Sangkeun Jung, and Gary Geunbae Lee. 2008. A frame-based probabilistic framework for spoken dialog management using dialog examples. In SIGDIAL Workshop.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

Wenqiang Lei, Xisen Jin, Zhaochun Ren, Xiangnan He, Min-Yen Kan, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In ACL.

Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Çelikyilmaz. 2017. End-to-end task-completion neural dialogue systems. ArXiv, abs/1703.01008.

Bing Liu and Ian Lane. 2017. An end-to-end trainable neural network model with belief tracking for task-oriented dialog. ArXiv, abs/1708.05956.

Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. ArXiv, abs/1804.08217.

Christopher D. Manning and Mihail Eric. 2017. A copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In EACL.

Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS-W.

Alexander I. Rudnicky, Eric H. Thayer, Paul C. Constantinides, Chris Tchou, R. Shern, Kevin A. Lenzo, Weiyang Xu, and Alice H. Oh. 1999. Creating natural dialogs in the Carnegie Mellon Communicator system. In EUROSPEECH.

Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In NIPS.

Wei Wei, Quoc V. Le, Andrew M. Dai, and Jia Li. 2018. AirDialogue: An environment for goal-oriented dialogue research. In EMNLP.

Tsung-Hsien Wen, David Vandyke, Lina Maria Rojas-Barahona, Milica Gasic, Nikola Mrksic, Pei-hao Su, Stefan Ultes, and Steve J. Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. In EACL.

Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv:1410.3916.

Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019. Global-to-local memory pointer networks for task-oriented dialogue. ArXiv, abs/1901.04713.

Xiaojun Xu, Chang Liu, and Dawn Song. 2018. SQLNet: Generating structured queries from natural language without reinforcement learning. In ICLR.

Xuesong Yang, Yun-Nung Chen, Dilek Z. Hakkani-Tür, Paul Crook, Xiujun Li, Jianfeng Gao, and Li Deng. 2017. End-to-end joint learning of natural language understanding and dialogue manager. In ICASSP, pages 5690-5694.

Steve J. Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101:1160-1179.

Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018a. TypeSQL: Knowledge-based type-aware neural text-to-SQL generation. In NAACL.

Tao Yu, Rui Zhang, He Yang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander R. Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter S. Lasecki, and Dragomir R. Radev. 2019a. CoSQL: A conversational text-to-SQL challenge towards cross-domain natural language interfaces to databases. In EMNLP/IJCNLP.

Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018b. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In EMNLP.

Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David N. Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2019b. SParC: Cross-domain semantic parsing in context. In ACL.

Tiancheng Zhao and Maxine Eskenazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. In SIGDIAL.

Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. ArXiv, abs/1709.00103.

Victor Zue. 2000. Conversational interfaces: advances and challenges. Proceedings of the IEEE, 88:1166-1180.

Victor Zue, Stephanie Seneff, James R. Glass, Joseph Polifroni, Christine Pao, Timothy J. Hazen, and I. Lee Hetherington. 2000. JUPITER: A telephone-based conversational interface for weather information. IEEE Trans. Speech Audio Process., 8:85-96.
# A Appendices
# A.1 Data Statistics
For the data records in the KBs, each record is generated using the prior distributions defined in Table 4. In Section 4.5, we conduct experiments under different scales of the KBs, where the newly augmented records are generated according to these prior distributions. The original AirDialogue dataset contains 30 records in the KBs, and we augment the KBs to "10x.", "50x.", and "70x.". That is, for the "10x." KBs we add 270 additional records sampled according to the prior distributions, and the "50x." and "70x." KBs are constructed in the same way.
# A.2 Qualitative Analysis
We provide samples of dialogues generated by our agent and the user simulator under the self-play evaluation. The user simulator has a pre-defined intent that belongs to one of three types: "book", "change", or "cancel", as well as a list of travel constraints. On the other hand, responses provided by the agent may result in one of five actions: "booked", "changed", "cancelled", "no flight found", or "no reservation". The user intent "book" can lead to the agent action "booked" or "no flight found", while both "change" and "cancel" may lead to "no reservation". Of course, the user intent "change" can also be successfully achieved, resulting in the agent action "changed"; similarly, "cancel" can lead to "cancelled".
We show several samples according to the agent's action. First, Table 5 shows two samples of the agent action "booked". We see that the user tends to provide the destination and return airport codes spontaneously, followed by the agent asking for the travel dates. After the ticket is found, the agent informs the user about the flight details, which is a human-like behaviour. Finally, the ticket is confirmed by the user, and both the user and the agent end the dialogue with thanks.
Table 6 shows the samples for the action "changed". At the beginning, the user and the agent greet each other. Then, the user not only expresses the intent to change the flight, but also gives a reason for changing. We see that the agent learns to judge whether the user has provided his/her name. In the first (upper) sample, the user mentions his/her name right after greeting, and hence the agent proceeds to check the KBs. However, in the second (lower) sample, the agent identifies that the user has not given his/her name yet, so the agent asks for the name before querying the KBs.
For the action "cancelled", samples are provided in Table 7. We observe patterns similar to the action "changed". The user first describes the need to cancel the ticket, followed by the agent asking for the name if necessary. Lastly, the agent finds the ticket and confirms the cancellation with the user.
Table 8 provides the samples of the action "no flight found". Similar to the samples of "booked", the user describes the travel constraints and asks to book a ticket. The difference is that the agent cannot find a matching flight, and thus responds that no flight is available. One special behaviour is that the agent responds with no matching flight along with a reason. For instance, the agent in the upper sample mentions that no matching flight was found because of the mismatched dates.
<table><tr><td>feature</td><td>dep./ret.city</td><td>dep./ret. month</td><td>dep./ret. day</td><td>dep./ret. time</td></tr><tr><td>range</td><td>categorical</td><td>1-12</td><td>1-31</td><td>00-23</td></tr><tr><td>prob.</td><td>uniform</td><td>uniform</td><td>uniform</td><td>uniform</td></tr><tr><td>feature</td><td>class</td><td>price</td><td>connections</td><td>airline</td></tr><tr><td>range</td><td>business,economy</td><td>0-5000</td><td>0,1,2</td><td>categorical</td></tr><tr><td></td><td>economy (7%)</td><td>≤200 (25%)</td><td></td><td></td></tr><tr><td></td><td>business (3%)</td><td>≤500 (25%)</td><td>0 (7%) 1 (90%)</td><td>standard fare (5%)</td></tr><tr><td>prob.</td><td>any (90%)</td><td>≤1000 (25%)</td><td>any (3%)</td><td>UA, Delta</td></tr><tr><td></td><td></td><td>any (25%)</td><td></td><td>AA, Hawaiian any (95%)</td></tr></table>
Table 4: Flight features of the AirDialogue dataset.
<table><tr><td colspan="2">Samples of dialogues with state “booked”.</td></tr><tr><td>Usr</td><td>Hello.</td></tr><tr><td>Agt</td><td>Hello. How can I help you?</td></tr><tr><td>Usr</td><td>I am Dennis Carter. I am planning to visit Detroit, can you please book me a ticket from AUS to DTW?</td></tr><tr><td>Agt</td><td>Sure, I will help you to book a flight ticket. Can you please share me your travelling dates?</td></tr><tr><td>Usr</td><td>My preferable journey dates are from 03/19 to 03/21.</td></tr><tr><td>Agt</td><td>Ok, give me a moment.</td></tr><tr><td>Usr</td><td>Sure, take your own time.</td></tr><tr><td>Agt</td><td>Here, I found a flight for you with single connection. flight number is <fl_1020>.</td></tr><tr><td>Usr</td><td>Please book my ticket with this airlines.</td></tr><tr><td>Agt</td><td>Your ticket is booked.</td></tr><tr><td>Usr</td><td>Thank you for booking my ticket.</td></tr><tr><td>Agt</td><td>Welcome.</td></tr><tr><td>Usr</td><td>Hello.</td></tr><tr><td>Agt</td><td>Hello, how may I help you?</td></tr><tr><td>Usr</td><td>I am Andrew Young. I want to book a flight ticket from DCA to MSP. Could you help me in booking?</td></tr><tr><td>Agt</td><td>Sure, may I know your trip dates please?</td></tr><tr><td>Usr</td><td>My travelling dates are from May 10 - 12.</td></tr><tr><td>Agt</td><td>Do you have any other requirements?</td></tr><tr><td>Usr</td><td>I need a connecting flight.</td></tr><tr><td>Agt</td><td>Ok, please wait. Ok, please wait.</td></tr><tr><td>Usr</td><td>Sure.</td></tr><tr><td>Agt</td><td>Thank you for waiting and we have found a flight in economy class with 1 connection in Spirit airlines and the fare of your flight will be 200. Do you like me to proceed your booking with this flight? flight number is <fl_1026>.</td></tr><tr><td>Usr</td><td>Please proceed.</td></tr><tr><td>Agt</td><td>Ok, your ticket is booked.</td></tr><tr><td>Usr</td><td>Thank you.</td></tr><tr><td>Agt</td><td>Most welcome.</td></tr></table>
Table 5: Samples of dialogues with state "booked".
<table><tr><td colspan="2">Samples of dialogue with state “changed”.</td></tr><tr><td>Usr</td><td>Hello .</td></tr><tr><td>Agt</td><td>Hello . How can I help you ?</td></tr><tr><td>Usr</td><td>I am Jeffrey Miller . I want to change my existing reservation due to postpone of my exams . Can you please help me with this ?</td></tr><tr><td>Agt</td><td>Sure , please wait for a while .</td></tr><tr><td>Usr</td><td>sure .</td></tr><tr><td>Agt</td><td>I found your reservation , can you please share your travelling dates ?</td></tr><tr><td>Usr</td><td>My travelling dates are July 28 and July 30 .</td></tr><tr><td>Agt</td><td>Can I know your airport codes ?</td></tr><tr><td>Usr</td><td>My airport codes are PHL and MSP .</td></tr><tr><td>Agt</td><td>Do you have any other specifications ?</td></tr><tr><td>Usr</td><td>No .</td></tr><tr><td>Agt</td><td>Ok , please wait for a while .</td></tr><tr><td>Usr</td><td>Sure .</td></tr><tr><td>Agt</td><td>There is a flight with single connection in UA airlines and the ticket fare is 200 only . Can I book the ticket for you ? flight number is <fl_1025> .</td></tr><tr><td>Usr</td><td>You can proceed .</td></tr><tr><td>Agt</td><td>Your ticket is confirmed in economy class .</td></tr><tr><td>Usr</td><td>Thank you .</td></tr><tr><td>Agt</td><td>Thank you for choosing us .</td></tr><tr><td>Usr</td><td>Hello .</td></tr><tr><td>Agt</td><td>Hello . How can I help you ?</td></tr><tr><td>Usr</td><td>I want to change my existing reservation due to postpone of my exams . Can you please help me with this ?</td></tr><tr><td>Agt</td><td>Sure , I will help you to change your reservation .</td></tr><tr><td>Usr</td><td>Thank you .</td></tr><tr><td>Agt</td><td>May I know your name please ?</td></tr><tr><td>Usr</td><td>My name is David Martinez .</td></tr><tr><td>Agt</td><td>Ok , please wait a moment .</td></tr><tr><td>Usr</td><td>Sure .</td></tr><tr><td>Agt</td><td>Thank you for waiting , I have found your reservation and cancelled it successfully .</td></tr><tr><td>Usr</td><td>Thank you for your help .</td></tr><tr><td>Agt</td><td>Thank you for reaching us .</td></tr></table>
Table 6: Samples of dialogue with state "changed".
<table><tr><td colspan="2">Samples of dialogue with state “cancelled”.</td></tr><tr><td>Usr</td><td>Hello .</td></tr><tr><td>Agt</td><td>Hello . How can I help you ?</td></tr><tr><td>Usr</td><td>I am Amy White . I want to cancel my recent reservation due to sudden cancellation of my trip . Can you help me ?</td></tr><tr><td>Agt</td><td>Sure , I will help you to cancel your reservation .</td></tr><tr><td>Usr</td><td>Thank you .</td></tr><tr><td>Agt</td><td>Happy to inform you that we found a reservation on your name . Shall I cancel it ?</td></tr><tr><td>Usr</td><td>Yes , please cancel it .</td></tr><tr><td>Agt</td><td>Your ticket has been cancelled .</td></tr><tr><td>Usr</td><td>Thank you for your help .</td></tr><tr><td>Agt</td><td>Welcome .</td></tr><tr><td>Usr</td><td>Hello .</td></tr><tr><td>Agt</td><td>Hello . How can I help you ?</td></tr><tr><td>Usr</td><td>I am Jacob Clark . I want to cancel my recent reservation due to sudden cancellation of my trip . Can you help me ?</td></tr><tr><td>Agt</td><td>Sure , I will help you to cancel your reservation .</td></tr><tr><td>Usr</td><td>Thank you .</td></tr><tr><td>Agt</td><td>Please give me a moment .</td></tr><tr><td>Usr</td><td>Sure .</td></tr><tr><td>Agt</td><td>I found your reservation and I have successfully cancelled your reservation .</td></tr><tr><td>Usr</td><td>Thank you for your help .</td></tr><tr><td>Agt</td><td>Welcome .</td></tr></table>
Table 7: Samples of dialogue with state "cancelled".
<table><tr><td colspan="2">Samples of dialogue with state “no flight found”.</td></tr><tr><td>Usr</td><td>Hello .</td></tr><tr><td>Agt</td><td>Hello , how may I help you ?</td></tr><tr><td>Usr</td><td>I am Paul Phillips . I am planning to attend music concert at CLT , can you please help me with booking a flight ticket from BOS ?</td></tr><tr><td>Agt</td><td>Sure , may I know your trip dates please ?</td></tr><tr><td>Usr</td><td>Sure , I would like to take off on Mar 10 and return on Mar 12 .</td></tr><tr><td>Agt</td><td>Do you have any other requirements ?</td></tr><tr><td>Usr</td><td>Yes , I need a connecting flight .</td></tr><tr><td>Agt</td><td>Please wait .</td></tr><tr><td>Usr</td><td>Sure , take your time .</td></tr><tr><td>Agt</td><td>Sorry , there are no flights available on your preferred dates .</td></tr><tr><td>Usr</td><td>Ok , no issues . Thank you for checking .</td></tr><tr><td>Agt</td><td>Thank you for choosing us .</td></tr><tr><td>Usr</td><td>Hello .</td></tr><tr><td>Agt</td><td>Hello , how may I help you ?</td></tr><tr><td>Usr</td><td>I am Robert Lee . I am planning to visit Atlanta , will you help me in booking the flight ticket from PHL to ATL ?</td></tr><tr><td>Agt</td><td>Sure , can I know your travelling dates ?</td></tr><tr><td>Usr</td><td>My travelling dates are 10/03 and 10/05 .</td></tr><tr><td>Agt</td><td>Sorry , there is no flight available on your route .</td></tr><tr><td>Usr</td><td>That ’s ok , thank you for checking .</td></tr><tr><td>Agt</td><td>Most welcome .</td></tr></table>
Table 8: Samples of dialogue with state "no flight found".
<table><tr><td colspan="2">Samples of dialogue with state “no reservation”.</td></tr><tr><td>Usr</td><td>Hello .</td></tr><tr><td>Agt</td><td>Hello . How can I help you ?</td></tr><tr><td>Usr</td><td>I am Steven Allen . I want to change my existing reservation due to some health issues , can you please help me with that ?</td></tr><tr><td>Agt</td><td>Sure , I will help you to change your reservation .</td></tr><tr><td>Usr</td><td>Thank you .</td></tr><tr><td>Agt</td><td>Please give me a moment .</td></tr><tr><td>Usr</td><td>Sure .</td></tr><tr><td>Agt</td><td>Sorry , there is no reservation found on your name .</td></tr><tr><td>Usr</td><td>Ok , no problem . Thank you for your information .</td></tr><tr><td>Agt</td><td>Welcome .</td></tr><tr><td>Usr</td><td>Hello .</td></tr><tr><td>Agt</td><td>Hello , how may I help you ?</td></tr><tr><td>Usr</td><td>I am Karen Gonzalez . I want to cancel my recent reservation due to sudden cancellation of my trip . Can you help me ?</td></tr><tr><td>Agt</td><td>Sure , please wait for a moment .</td></tr><tr><td>Usr</td><td>Ok .</td></tr><tr><td>Agt</td><td>Sorry , there is no reservation found on your name .</td></tr><tr><td>Usr</td><td>No problem , thank you for the information .</td></tr><tr><td>Agt</td><td>Thank you for reaching us .</td></tr></table>
Table 9: Samples of dialogue with state "no reservation".
For "no reservation", Table 9 shows the corresponding samples, where the upper sample is with the user intent "change" and the lower sample is with the intent "cancel". We see similar patterns to samples of "changed" and "cancelled". At the beginning, the user says the intent of changing, or cancelling, the ticket with some reason. The agent asks for the name if needed, and confirm the action of changing, or cancel, with the user.
airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55d9daa1fc92bc79fa3cebf708acaeae32e8d15b15919ae4a7cfaed982af2bb2
size 907854
airconciergegeneratingtaskorienteddialogueviaefficientlargescaleknowledgeretrieval/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a07d7c7804814a241d920703c48df03750215076c82ff6cd68d4f74309852d2
size 388940
anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:173540ff5eb9e081f4388a716310306bbf3c7ac3852d432487bcef9d98dfbf16
size 66841
anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b1448bd5f3484447f145cf56e1d5c7622686006cabf0fdf402fa5a9c3da41eee
size 83607
anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/e742f1db-5949-48df-99bd-dcbcbc58dffb_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b932144571eaec0afebd8d366268fd31941ca0e9151838870a5f13d3b568accb
size 1365948
anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/full.md
ADDED
@@ -0,0 +1,341 @@
# An Attentive Recurrent Model for Incremental Prediction of Sentence-final Verbs
Wenyan Li
Comcast AI Research Lab
wenyan19562@gmail.com
Alvin Grissom II
Haverford College
agrissom@haverford.edu
Jordan Boyd-Graber
University of Maryland
jbg@umiacs.umd.edu
# Abstract
Verb prediction is important for understanding human processing of verb-final languages, with practical applications to real-time simultaneous interpretation from verb-final to verb-medial languages. While previous approaches use classical statistical models, we introduce an attention-based neural model to incrementally predict final verbs in incomplete Japanese and German SOV sentences. To offer flexibility to the model, we further incorporate synonym awareness. Our approach both better predicts the final verbs in Japanese and German and provides more interpretable explanations of why those verbs are selected.
# 1 Introduction
Final verb prediction is fundamental to human language processing in languages with subject-object-verb (SOV) word order, such as German<sup>1</sup> and Japanese (Kamide et al., 2003; Momma et al., 2014; Chow et al., 2018), particularly for simultaneous interpretation, where an interpreter generates a translation in real time. Instead of waiting until the entire sentence is completed, simultaneous interpretation requires translation of the source text units while the interlocutor is speaking.
When human simultaneous interpreters translate from an SOV language to an SVO one incrementally—without waiting for the final verb at the end of a sentence—they must use strategies to reduce the lag, or delay, between the time they hear the source words and the time they translate them (Wilss, 1978; He et al., 2016). One strategy is final verb prediction: since the verb comes late in the source sentence but early in the target translation, if the verb is predicted in advance, it can be translated before it is heard, allowing for a more "simultaneous" (or monotonic) translation (Jörg, 1997; Bevilacqua, 2009; He et al., 2015).
German Cazeneuve dankte dort den Männern und sagte, ohne deren kühlen Kopf hätte es vielleicht ein "furchtbares Drama" gegeben.
English Cazeneuve thanked the men there and said that without their cool heads there might have been a "terrible drama".
Japanese 1
English It also said that he was acquainted with a secret lodging accommodation in Katsuragiyama in Nara Prefecture of Yamato.
Figure 1: An example of the verb position difference between SOV and SVO languages, where the final verb in German and Japanese is expected much earlier in the English translation.
Furthermore, Chernov et al. (2004) argue that simultaneous interpreters' probability estimates and predictions of the verbal and semantic structure of preceding messages facilitate simultaneity in human simultaneous interpretation.
As with human interpretation, simultaneous machine translation (SMT) becomes more monotonic for SOV-SVO pairs with better verb prediction (Grissom II et al., 2014; Gu et al., 2017; Alinejad et al., 2018). Earlier work used pattern-matching rules (Matsubara et al., 2000), $n$ -gram language models (Grissom II et al., 2014), or logistic regression with linguistic features (Grissom II et al., 2016). Recent neural simultaneous translation systems have integrated prediction into the encoder-decoder model or argued that these predictions, including verb predictions, are made implicitly by such models (Gu et al., 2017; Alinejad et al., 2018), but they have not systematically studied the late-occurring verb predictions themselves.
German
Auch die deutschen Skispringer können sich Hoffnungen auf ihre erste Medaille bei den Winterspielen in Vancouver [machen, schaffen, tun].
English The German ski jumpers can also hope for their first medal at the Winter Games in Vancouver.
Figure 2: An example of alternative final verbs ("machen", "schaffen", "tun") that preserve the same general meaning in German and do not change the English translation.
While neural models can identify complex patterns in feature-rich datasets (Goldberg, 2017), less research has gone into the problem of long-distance prediction, particularly for sentence-final verbs, where predictions must be made with incomplete information. We introduce a neural model, Attentive Neural Verb Inference for Incremental Language (ANVIIL), which predicts verbs earlier and with higher accuracy. Moreover, we make ANVIIL's predictions more flexible by introducing synonym awareness. Self-attention also allows us to visualize why a certain verb is selected and how it relates to specific tokens in the observed subsentence.
# 2 The Problem of Verb Prediction
Given an SOV sentence, we want to predict the final verb as soon as possible in an incremental setting. For example, in Figure 1, the final verb, "gegeben", in German is expected to be translated together with "hätte es" as "there would have been" in the middle of the English translation.
Human interpreters will often predict a related verb rather than the exact verb in a reference translation, while preserving the same general meaning, since predicting the exact verb in a reference translation is difficult (Jörg, 1997). For instance, in Figure 2, besides "machen", verbs such as "schaffen" and "tun" also often pair with "Hoffnungen" to express "hope for" in English. We therefore include two verb prediction tasks: first, we learn to predict the exact verb; second, we learn to predict verbs semantically similar to the exact reference verb. We describe these two tasks below.
# 2.1 Exact Prediction
We follow Grissom II et al. (2016), who formulate final verb prediction as sequential classification: a sentence is revealed to the classifier incrementally, and the classifier predicts the exact verb at each time step. While Grissom II et al. (2016) use logistic regression with engineered linguistic features, we use a recurrent neural model with self-attention, which learns embeddings$^{2}$ and a context representation that captures relations between tokens, regardless of the distance. Verbs are predicted by classifying on the learned representation of incomplete sentences.
# 2.2 Synonym-aware Prediction
We also extend the idea in Section 2.1 to allow for synonym-aware predictions: for example, the verb synonym "give", used in place of "provide", preserves the intended meaning in most circumstances and can be considered a successful prediction. Instead of training the model to focus on one fixed verb for each input, we encourage the model to be confident about a set of verb candidates which are generally correct in the context.
# 3 A Neural Model for Verb Prediction
This section describes ANVIIL's structure. Gated recurrent neural networks (RNNs), such as LSTMs (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRUs; Cho et al., 2014), can capture long-range dependencies in text, which we need for effective verb prediction.
We construct an RNN-based classifier with self-attention (Lin et al., 2017) for predicting sentence-final verbs (Figure 3). This is a natural encoding of the problem, as it explicitly models how interpreters might receive information and update their verb predictions. The hidden states of the sequence model can be either at the word or character level.
# 3.1 BiGRU Sequence Encoder
Following Yang et al. (2016), we encode input sequences using a bidirectional GRU (BiGRU). Given an incomplete sentence prefix $\pmb{x} = (x_{1}, x_{2}, \dots, x_{l})$ of length $l$ , the BiGRU takes as input the embeddings $(\pmb{w}_{1}, \pmb{w}_{2}, \dots, \pmb{w}_{l})$ , where $\pmb{w}_{i}$ is the $d$ -dimensional embedding vector of $x_{i}$ .
Figure 3: ANVIIL. Token sequences at the input layer are mapped to embeddings, which go to the GRU. The dot product of attention weights and hidden states passes through a dense layer to predict the verb.
step $t$ , the forward and backward hidden states are:
$$
\overrightarrow{\boldsymbol{h}}_{t} = \overrightarrow{\mathrm{GRU}}\left(\boldsymbol{w}_{t}, \overrightarrow{\boldsymbol{h}}_{t-1}\right), \qquad \overleftarrow{\boldsymbol{h}}_{t} = \overleftarrow{\mathrm{GRU}}\left(\boldsymbol{w}_{t}, \overleftarrow{\boldsymbol{h}}_{t+1}\right). \tag{1}
$$
These are concatenated as $\pmb{h}_t = [\overrightarrow{\pmb{h}}_t; \overleftarrow{\pmb{h}}_t]$ and we represent the input sequence as
$$
H = \left(\boldsymbol{h}_{1}, \boldsymbol{h}_{2}, \dots, \boldsymbol{h}_{l}\right). \tag{2}
$$
Because we use only a prefix of the sentence as input for prediction, the backward GRU cannot receive messages from words that are not yet revealed. However, once those words are revealed, later words in the prefix update the internal representations of earlier words in $H$, creating a more powerful overall representation that uses more of the available context.

Embedding vectors for the input can be word embeddings or character embeddings, yielding a word-based or a character-based model; we try both in Section 4.
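
As a concrete illustration of this encoder, the sketch below shows a BiGRU module of the kind described here, assuming PyTorch; the class name and sizes are our own illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn

class BiGRUEncoder(nn.Module):
    """Encodes a sentence prefix into hidden states H (a sketch of Section 3.1)."""
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # word or character embeddings
        self.bigru = nn.GRU(emb_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True)

    def forward(self, x):
        # x: (batch, l) token ids of a revealed prefix
        w = self.embed(x)         # (batch, l, emb_dim)
        H, _ = self.bigru(w)      # (batch, l, 2 * hidden_dim); h_t = [fwd_t; bwd_t]
        return H
```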
# 3.2 Structured Self-attention
Following Lin et al. (2017), we apply self-attention with multiple views of the input sequence to obtain a weighted context vector $\pmb{v}$. Viewing the sequence multiple times allows different attention weights to be assigned in each view. Using a two-layer multilayer perceptron (MLP) without bias and a softmax over the sequence length, we obtain an $r$-by-$l$ attention matrix $A$, which contains $r$ attention vectors extracted from $r$ views of $\pmb{x}$:
$$
A = \operatorname{softmax}\left(W_{s_{2}} \tanh\left(W_{s_{1}} H^{T}\right)\right) \tag{3}
$$
We sum over all $r$ attention vectors and normalize, yielding a single attention vector $\pmb{a}$ with normalized weights (Figure 3). By assigning each hidden state its attention weight $a_{t}$, we acquire an overall representation of the sequence:
$$
\boldsymbol{v} = \sum_{t=1}^{l} a_{t} \boldsymbol{h}_{t}. \tag{4}
$$
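
The following sketch shows one way Equations 3 and 4 could be implemented in PyTorch; `d_a` and the number of views `r` are assumed hyperparameters, and the module name is ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredSelfAttention(nn.Module):
    def __init__(self, enc_dim, d_a=128, r=5):
        super().__init__()
        self.W_s1 = nn.Linear(enc_dim, d_a, bias=False)  # first MLP layer, no bias
        self.W_s2 = nn.Linear(d_a, r, bias=False)        # one row per view

    def forward(self, H):
        # H: (batch, l, enc_dim) hidden states from the BiGRU
        A = F.softmax(self.W_s2(torch.tanh(self.W_s1(H))).transpose(1, 2), dim=-1)  # Eq. 3: (batch, r, l)
        a = A.sum(dim=1)                             # sum the r attention vectors...
        a = a / a.sum(dim=-1, keepdim=True)          # ...and renormalize over the sequence
        v = torch.bmm(a.unsqueeze(1), H).squeeze(1)  # Eq. 4: (batch, enc_dim)
        return v, a
```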
# 3.3 Verb Predictor
For an incomplete input prefix $\pmb{x}$ , the target verb is $y \in \mathcal{Y} = \{1,2,\dots ,K\}$ . Based on the high-level representation $\pmb{v}$ of the input sequence, we compute the probability of each verb $k$ and select the one with the highest probability as the predicted verb:
$$
p(y \mid \boldsymbol{v}) = \frac{e^{f_{y}(\boldsymbol{v})}}{\sum_{k=1}^{K} e^{f_{k}(\boldsymbol{v})}} \tag{5}
$$
where $f_{k}(\pmb{v})$ is the logit from the dense layer.
# 3.3.1 Exact Verb Prediction
As there is only one ground-truth verb $y$ for the input, we maximize the log-likelihood of the correct verb with cross-entropy loss:
$$
\mathcal{L} = -\sum_{k=1}^{K} q(k \mid \boldsymbol{v}) \log p(k \mid \boldsymbol{v}) \tag{6}
$$
where $q(k \mid \boldsymbol{v})$ is the ground-truth distribution over the verbs, which equals 1 if $k = y$ and 0 otherwise.
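
A minimal sketch of the predictor and this loss (Equations 5 and 6), assuming PyTorch; all sizes and variable names here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K, batch, enc_dim = 100, 8, 512    # illustrative sizes
v = torch.randn(batch, enc_dim)    # pooled representations from the attention layer
y = torch.randint(0, K, (batch,))  # indices of the gold final verbs
dense = nn.Linear(enc_dim, K)      # f_k(v): one logit per candidate verb
logits = dense(v)
loss = F.cross_entropy(logits, y)  # Eq. 6: cross-entropy against the single gold verb
pred = logits.argmax(dim=-1)       # Eq. 5: pick the highest-probability verb
```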
# 3.3.2 Synonym-aware Verb Prediction
In addition to the exact verb $y$, we add verbs with similar meaning to $y$ into a synonym set $\mathcal{Y}^{\prime} \subset \mathcal{Y}$, creating a verb candidate pool for each input sample. Instead of maximizing the log-likelihood of the fixed verb $y$, we maximize the log-likelihood of the most probable verb candidate $y^{\prime} \in \mathcal{Y}^{\prime}$, chosen dynamically during training:
$$
\mathcal{L} = -\sum_{k=1}^{K} q^{\prime}(k \mid \boldsymbol{v}) \log p(k \mid \boldsymbol{v}) \tag{7}
$$
where
$$
q^{\prime}(k \mid \boldsymbol{v}) = \begin{cases} 1, & \text{if } k = \operatorname{argmax}_{k^{\prime} \in \mathcal{Y}^{\prime}} p(k^{\prime} \mid \boldsymbol{v}) \\ 0, & \text{otherwise.} \end{cases} \tag{8}
$$
As the selected candidate can differ at each training step, the likelihood of any verb candidate in the synonym set can be maximized over the course of training.
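
A minimal sketch of this objective (Equations 7 and 8), assuming PyTorch; the function name and the boolean `syn_mask` encoding of $\mathcal{Y}^{\prime}$ are our own illustrative choices, not the authors' code.

```python
import torch
import torch.nn.functional as F

def synonym_aware_loss(logits, syn_mask):
    # logits:   (batch, K) verb scores f_k(v)
    # syn_mask: (batch, K) boolean, True for verbs in the synonym set Y'
    #           (the exact reference verb is assumed to be in Y')
    masked = logits.masked_fill(~syn_mask, float("-inf"))
    target = masked.argmax(dim=-1)          # Eq. 8: currently most probable candidate in Y'
    return F.cross_entropy(logits, target)  # Eq. 7: cross-entropy against that candidate
```

Because the target is recomputed from the model's current scores at every step, training rewards the model for placing probability mass on any contextually valid candidate rather than on one fixed verb.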
<table><tr><td></td><td>Most Frequent Verbs</td><td>Thousands of Sentences</td><td>Coverage (%)</td></tr><tr><td rowspan="3">DE (Inflected)</td><td>100</td><td>1286.7</td><td>16.0</td></tr><tr><td>200</td><td>2243.7</td><td>28.0</td></tr><tr><td>300</td><td>2577.3</td><td>32.2</td></tr><tr><td rowspan="3">JA (Normalized)</td><td>100</td><td>70.2</td><td>56.8</td></tr><tr><td>200</td><td>85.2</td><td>68.9</td></tr><tr><td>300</td><td>93.2</td><td>75.4</td></tr></table>
Table 1: Dataset for final-verb prediction. We extract German and Japanese verb-final sentences ending with the 100–300 most frequent verbs. Using normalized Japanese verbs reduces verb sparsity and improves sentence coverage.
# 4 Exact Prediction Experiments
We first test exact prediction on both Japanese and German verb-final sentences with both word-based and character-based models.
# 4.1 Datasets
We use German and Japanese verb-final sentences of between ten and fifty tokens (Table 1) that end in the 100 to 300 most common verbs (Wölfel et al., 2008). For each sentence, the extracted final verb becomes the label; the token sequence preceding it (the preverb) is the input. We split sentences into train ($64\%$), evaluation ($16\%$), and test ($20\%$) sets.

For Japanese, we use the Kyoto Free Translation Task (KFT) corpus of Wikipedia articles. Since Japanese is unsegmented, we use the morphological analyzer MeCab (Kudo, 2005) for tokenization. Like Grissom II et al. (2016), we strip out post-verbal copulas and normalize verb forms to the dictionary ru (non-past tense) form. We also treat suru light verb constructions as a single unit.

For German, we use the Wortschatz Leipzig news corpus from 1995 to 2015 (Goldhahn et al., 2012). German sentences ending with a verb (we discard verb-medial sentences) are tokenized and POS-tagged with TreeTagger (Schmid, 1995). Since German sentences may end with two verbs—for example, a verb followed by ist—we only predict the content verb, i.e., the first verb in the two-verb sequence. Unlike Japanese, we leave German verbs inflected, as there is less variation (usually past participle or infinitive form).
# 4.2 Training Data Representation
Because we predict from partial input, we train on incrementally longer preverb subsequences. Each subsequence is an independent input sample during training, and each preverb is truncated into five progressively longer subsentences: $30\%$, $50\%$, $70\%$, $90\%$, and $100\%$.<sup>4</sup>
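
A small sketch of this truncation scheme (ours, not the authors' preprocessing code):

```python
def preverb_prefixes(tokens, fractions=(0.3, 0.5, 0.7, 0.9, 1.0)):
    # tokens: the preverb as a list of words or characters
    return [tokens[: max(1, round(f * len(tokens)))] for f in fractions]

# A 10-token preverb yields prefixes of lengths 3, 5, 7, 9, and 10; each is an
# independent training sample paired with the same final-verb label.
```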
# 4.3 Training Details
We train both word- and character-based models for German and Japanese verb prediction. We use the dev sets to manually tune hyperparameters for accuracy: word embedding size, hidden layer size, dropout rates, and learning rate.

**Character-based Model** For input character sequences, we learn 64-dimensional embeddings and encode them with a two-layer BiGRU of 256 hidden units. The embeddings are randomly initialized with PyTorch defaults and updated during training jointly with the other parameters. Mini-batch sizes are 256 for German but 128 for Japanese's smaller corpus. We use the evaluation set for tuning and set the embedding dropout rate to 0.6 and the RNN dropout rate to 0.2, averaging over five views for the attention vectors. We optimize with Adam (Kingma and Ba, 2015) with an initial learning rate of $10^{-4}$, decaying by 0.1 when the loss increases. Training takes approximately two (Japanese) and four (German) hours on one 6GB GTX1060 GPU.

**Word-based Model** We use a vocabulary of 50,000 for German and Japanese and an `<UNK>` token for out-of-vocabulary tokens. The embedding size is 300. We encode the input embeddings with a two-layer BiGRU with 512 hidden units. Other hyperparameters are unchanged from the character-based model.
# 4.4 Results
We compare ANVIL to the logistic regression model of Grissom II et al. (2016) on the 100 most frequent verbs in the corpus (Figure 4). For both languages, ANVIL has higher accuracy than previous work (Figure 5), especially early in the sentence. While word-based models work best for German, character-based models work best for Japanese, perhaps because Japanese is agglutinative.

Figure 6 compares other encodings of preverbs (at a character level) in Japanese. In general, ANVIL has higher accuracy on verb prediction tasks.
Figure 4: Comparing word and character representations for German (inflected) and Japanese (normalized) verb prediction. ANVIL consistently has higher accuracy than LogReg from Grissom II et al. (2016), and word-based prediction is slightly better for German but worse for Japanese.
Figure 5: Accuracy when classifying among the most common 100, 200, and 300 verbs. ANVIL consistently outperforms the best-performing model described in Grissom II et al. (2016), especially early in the sentence.
# 5 Synonym-aware Prediction
We now describe experiments with synonym-aware verb prediction (Section 2.2). We use 2,214,523 German sentences ending with the 100 most frequent lemmatized verbs. For each sentence, we extract the preverb as in Section 4.1, but here the target is not just a single verb. For each lemmatized verb, we extract its synonyms among the 100 verbs using GermaNet synsets (Hamp and Feldweg, 1997; Henrich and Hinrichs, 2010). If synonyms exist, we include them all in a list as candidate target verbs for the input, as in Figure 2. Synonyms exist for $40.79\%$ of the sentences in the dataset.
Figure 6: ANVIL's BiGRU with self-attention outperforms most other settings on predicting the 100 most common verbs in Japanese.
Figure 7: Accuracy across time on exact/synonym-aware match with exact/synonym-aware training. Accuracy increases slightly with the addition of the synonym-aware matching.
Similarly, we train incrementally on subsequences of the preverb as in Section 4.2. We learn high-level representations of the preverb using word-level embeddings and use the same training parameters as in Section 4.3.

During training, instead of maximizing the exact verb's log-likelihood, we maximize the log-likelihood of any verb in the synonym set, encouraging the model to be confident about any verb that fits the context.
# 5.1 Verb Prediction Results
We compare accuracy for predicting exact and synonym-aware verbs with different objectives in training. In synonym-aware prediction, we consider the prediction successful if it is one of the candidate verbs. Compared to predicting the exact verb, while being less focused on a fixed verb, synonym-aware prediction further improves prediction accuracy (Figure 7), but only slightly. ANVIL clearly outperforms the feature-engineered linear models on Japanese across the entire sentence, even when the number of verbs to choose from is larger; on German, ANVIL outperforms previous models when the number of verbs to choose from is the same (Figure 4). This may be due to long-range dependencies that are not captured by the logistic regression model.
# 6 Visualization and Analysis
We now analyze our model's predictions. While previous work (Grissom II et al., 2016) examines the contribution of features by examining the model itself, our approach does not rely on feature engineering. To examine our model, we instead use heatmaps to visualize the time course of attention values in sentences, allowing us to see what the model focuses on when predicting.
# 6.1 Visualization of the Prediction Process
We visualize how our model makes its predictions in Figure 8 and Figure 9. In both languages, the model not only focuses on the most recently revealed word, but also directs attention to relevant long-distance dependencies.

Predictions are, as expected, also more confident and accurate when approaching the end of the preverb. This is consistent with the verb prediction process of human interpreters (Wilss, 1978) and with previous work (Grissom II et al., 2016). With increasing information, the number of possible alternatives gradually declines. Figure 10 visualizes how the model makes synonym-aware predictions.
# 6.2 Character-based versus Word-based
As described in Section 4.3, we implement both character-based and word-based models for verb prediction. For Japanese final-verb prediction, the character-based model has higher prediction accuracy.
Figure 8: Attention during German verb prediction. The model usually attends to the most recent word, but focuses on "es", which can be used as the subject of an existential phrase (Joseph, 2000) in combination with the verb "geben". Thus, it focuses on an interpretation of "es" as the subject, consistently attends to "es" throughout the sentence, and correctly predicts "geben" (for consistency with the Japanese examples, we show the model that predicts the normalized—infinitive—form of the verb).
Figure 9: Attention during Japanese verb prediction. Attention and prediction transition through time on a Japanese sentence. The genitive case marker no, in bright yellow, has a high attention weight, as do the characters making up the noun before it. Case marker-adjacent nouns, including those before the genitive no (twice) and the accusative wo, have slightly less. Toward the end of the sentence, attention shifts to the quotative particle to, which significantly limits the possible completions.
Unlike the word-based model, the character-based model does not require a morphological analyzer and has a smaller vocabulary. The word-based model, however, works better for German verb prediction, and word-based heatmaps are more interpretable than character-based ones for German. We show word-based heatmaps for exact prediction in Figure 8 and Figure 11.
# 6.3 Synonym-aware versus Exact Prediction
We show an example of how synonym-aware prediction can make the task easier in Figure 12. By providing synonyms during training, the model makes an alternative prediction, "zeigen" (present, show), for the original verb "einsetzen" (use).
# 6.4 Case Markers
Previous work suggests that case markers play a key role in both human and machine verb prediction for Japanese (Grissom II et al., 2016). Japanese has explicit postpositional case markers that mark the roles of words in a sentence. By examining the accuracy of predictions when the most recent token is a case marker, we can gain insight into their contribution to the predictions.
Figure 13 considers the instances where the most recently observed token is the given case marker; in these situations, the accuracy of predicting one of the 100 most frequent verbs is much higher than in general. It is unsurprising that the quotative particles have higher accuracy at the end of the sentence, since the set of verbs that follow them is highly constrained—e.g., say, think, announce, etc.—and quotative particles scoping over the entire sentence occur immediately before the final verb. More general particles, such as ga (NOM) and wo (ACC), show a smaller increase in accuracy.
# 7 Related Work
This section examines previous work on prediction in humans, simultaneous interpretation, and simultaneous machine translation.
Figure 10: Attention during German synonym-aware verb prediction. The model consistently focuses on "Skispringer" (ski jumpers), which is the subject of the verb, and predicts "machen" and "schaffen" from the three verb candidates.
Figure 11: Progression of attention weights of a word-based model on a German sentence. The model successfully captures the passive voice in the sentence, where "wird erwartet" is often translated together as "is expected". The full translation of the example is: Chancellor Merkel is expected to speak in London next week.

Psycholinguistics has examined argument structure using verb-final $b\check{a}$-construction sentences in Chinese (Chow et al., 2015, 2018). Kamide et al. (2003) find that case markers facilitate verb predictions for humans, likely because they provide clues about the semantic roles of the marked words in sentences. In sentence production, Momma et al. (2015) suggest that humans plan verbs after selecting a subject but before objects.
Empirical work on German verb prediction began with Jörg (1997), who found that professional German-English simultaneous interpreters often predict verbs. Matsubara et al. (2000) introduce early verb prediction into Japanese-English SMT by predicting verbs in the target language. Grissom II et al. (2014) and Gu et al. (2017) use verb prediction in the source language and learn when to trust the predictions with reinforcement learning, while Oda et al. (2015) predict syntactic constituents and do the same. Grissom II et al. (2016) predict verbs with linear classifiers and compare the predictions to human performance. We extend that approach with a modern model that explains which cues the model uses to predict verbs.
In interactive translation (Peris et al., 2017) and simultaneous translation (Alinejad et al., 2018; Ma et al., 2019) systems, neural methods for next-word prediction improve translation.
Figure 12: Imperfect synonym-aware prediction process on a German sentence. The predicted synonym "zeigen" (show/appear) in context is not a perfect replacement for the correct verb "einsetzen" (put in place), but it better preserves the general meaning of the sentence: "This money had been made available to the country for the process of EU membership and should now appear for refugee assistance."
Figure 13: Case markers correlate with improved verb prediction compared to overall verb prediction (Figure 4). Some case markers, such as to, have large jumps in accuracy toward the end, while others, such as wo, do not. We examine nominative (NOM), instructive (INS), accusative (ACC), dative (DAT), quotative (QUOT), and essive (ESS) markers.
BERT (Devlin et al., 2019) uses masked deep bidirectional language models and contextualized representations (Peters et al., 2018) for pretraining and gains improvements in word prediction and classification. We incorporate bidirectional encoding into verb prediction.
Existing neural attention models for sequential classification are commonly trained on complete input (Yang et al., 2016; Shen and Lee, 2016; Bahdanau et al., 2014). Classification on incomplete sequences and long-distance sentence-final verb prediction remains difficult and under-explored.
# 8 Conclusion
We present a synonym-aware neural model for incremental verb prediction using a BiGRU with self-attention. It outperforms existing models in predicting the most frequent sentence-final verbs in both Japanese and German. Because we predict verbs incrementally, our method can be applied directly to real-time sequential classification or prediction problems. SMT systems for SOV-to-SVO simultaneous MT can also benefit from our work to reduce translation latency. We show that larger datasets consistently help with predicting sentence-final verbs, suggesting that larger corpora will further improve results.
# Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No. 1748663 (UMD). The views expressed in this paper are our own. We thank Graham Neubig and Hal Daumé III for useful feedback.
# References
Ashkan Alinejad, Maryam Siahbani, and Anoop Sarkar. 2018. Prediction improves simultaneous neural machine translation. In Conference of Empirical Methods in Natural Language Processing, pages 3022-3027.
Emmon Bach. 1962. The order of elements in a transformational grammar of German. Language, 38(3):263-269.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv e-prints.
Lorenzo Bevilacqua. 2009. The position of the verb in Germanic languages and simultaneous interpretation. The Interpreters' Newsletter, 14:1-31.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
G.V. Chernov, R. Setton, and A. Hild. 2004. Inference and Anticipation in Simultaneous Interpreting: A Probability-prediction Model. Benjamins translation library. J. Benjamins Publishing Company.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gülcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference of Empirical Methods in Natural Language Processing.
Wing-Yee Chow, Ellen Lau, Suiping Wang, and Colin Phillips. 2018. Wait a second! Delayed impact of argument roles on on-line verb prediction. Language, Cognition and Neuroscience, 33(7):803-828.
Wing-Yee Chow, Cybelle Smith, Ellen Lau, and Colin Phillips. 2015. A "bag-of-arguments" mechanism for initial verb predictions. Language, Cognition and Neuroscience, pages 1-20.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Association for Computational Linguistics.
Patrick Doetsch, Pavel Golik, and Hermann Ney. 2017. A comprehensive study of batch construction strategies for recurrent neural networks in MXNet. IEEE International Conference on Acoustics, Speech, and Signal Processing.
Yoav Goldberg. 2017. Neural Network Methods for Natural Language Processing. Synthesis Lectures on Human Language Technologies.
Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages. In International Language Resources and Evaluation.
Alvin Grissom II, Naho Orita, and Jordan Boyd-Graber. 2016. Incremental prediction of sentence-final verbs: Humans versus machines. In Conference on Computational Natural Language Learning, pages 95-104.
Alvin C. Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daumé III. 2014. Don't until the final verb wait: Reinforcement learning for simultaneous machine translation. In Conference of Empirical Methods in Natural Language Processing.
Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. European Chapter of the Association for Computational Linguistics.
Birgit Hamp and Helmut Feldweg. 1997. GermaNet - a lexical-semantic net for German. In Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications.
He He, Jordan Boyd-Graber, and Hal Daumé III. 2016. Interpretese vs. translationese: The uniqueness of human strategies in simultaneous interpretation. In Conference of the North American Chapter of the Association for Computational Linguistics.
He He, Alvin Grissom II, Jordan Boyd-Graber, and Hal Daumé III. 2015. Syntax-based rewriting for simultaneous machine translation. In Conference of Empirical Methods in Natural Language Processing.
Verena Henrich and Erhard Hinrichs. 2010. GernEdiT - the GermaNet editing tool. In International Language Resources and Evaluation. European Language Resources Association (ELRA).
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
Udo Jörg. 1997. Bridging the gap: Verb anticipation in German-English simultaneous interpreting. In M. Snell-Hornby, Z. Jettmarova, and K. Kaindl, editors, Translation as Intercultural Communication: Selected Papers from the EST Congress, Prague 1995.
Brian Joseph. 2000. What gives with es gibt? American Journal of Germanic Linguistics and Literatures, 12:243-265.
Yuki Kamide, Gerry Altmann, and Sarah L Haywood. 2003. The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements. Journal of Memory and Language, 49(1):133-156.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations.
Jan Koster. 1975. Dutch as an SOV language. Linguistic analysis, 1(2):111-136.
T. Kudo. 2005. MeCab: Yet another part-of-speech and morphological analyzer. http://mecab.sourceforge.net/.
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In Proceedings of the International Conference on Learning Representations.
Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework. In Proceedings of the Association for Computational Linguistics.
Shigeki Matsubara, Keiichi Iwashima, Nobuo Kawaguchi, Katsuhiko Toyama, and Yasuyoshi Inagaki. 2000. Simultaneous Japanese-English interpretation based on early prediction of English verbs. In Symposium on Natural Language Processing.
Shota Momma, L. Robert Slevc, and Colin Phillips. 2015. The timing of verb selection in Japanese sentence production. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Shota Momma, Robert Slevc, and Colin Phillips. 2014. The timing of verb selection in English active and passive sentences.
Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, and Satoshi Nakamura. 2017. An empirical study of mini-batch creation strategies for neural machine translation. In The First Workshop on Neural Machine Translation.
Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Syntax-based simultaneous translation through prediction of unseen syntactic constituents. Proceedings of the Association for Computational Linguistics.
Álvaro Peris, Miguel Domingo, and Francisco Casacuberta. 2017. Interactive neural machine translation. Computer Speech and Language, 45:201-220.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
Helmut Schmid. 1995. Improvements in part-of-speech tagging with an application to German. In Proceedings of the ACL SIGDAT-Workshop.
Sheng-syun Shen and Hung-yi Lee. 2016. Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection. In Conference of the International Speech Communication Association.
Wolfram Wilss. 1978. Syntactic anticipation in German-English simultaneous interpreting. In Language Interpretation and Communication.
M. Wölfel, M. Kolss, F. Kraft, J. Niehues, M. Paulik, and A. Waibel. 2008. Simultaneous machine translation of German lectures into English: Inspecting research challenges for the future. In IEEE Spoken Language Technology Workshop.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:767315fb43705fa26faad69a2844e8d59b0482622926823d966e82426171ef03
size 499266
anattentiverecurrentmodelforincrementalpredictionofsentencefinalverbs/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fbbf2fd971ce8a2aa66e900045cde637609eb84bc1ddd1642dca931afd27655b
size 331153
anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2cad5f611b02f635a0064bf8e681dec5fc78d66d373fd66977c4702db190953
size 91936
anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1513c0542a7c237e7c7dc65f646a33cb980cee9c595c37664532ac4cf4ae8d33
size 107856
anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/cd0643f5-f58d-44c5-9e38-398e4e5d85a2_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:29a7e35a4cd181796c37aa88d697feab611c6deda3866cfc6efb65bc5e496f6c
size 713420
anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/full.md
ADDED
@@ -0,0 +1,364 @@
# An Empirical Exploration of Local Ordering Pre-training for Structured Prediction
Zhisong Zhang, Xiang Kong, Lori Levin, Eduard Hovy
Language Technologies Institute, Carnegie Mellon University
{zhisongz, xiangk, lsl, hovy}@cs.cmu.edu
# Abstract
Recently, pre-training contextualized encoders with language model (LM) objectives has been shown to be an effective semi-supervised method for structured prediction. In this work, we empirically explore an alternative pre-training method for contextualized encoders. Instead of predicting words as in LMs, we "mask out" and predict word order information, with a local ordering strategy and word-selection objectives. With evaluations on three typical structured prediction tasks (dependency parsing, POS tagging, and NER) over four languages (English, Finnish, Czech, and Italian), we show that our method is consistently beneficial. We further conduct detailed error analysis, including one that examines a specific type of parsing error where the head is misidentified. The results show that pre-trained contextual encoders can bring improvements in a structured way, suggesting that they may be able to capture higher-order patterns and feature combinations from unlabeled data.
# 1 Introduction
Recently, pre-trained contextualized encoders (Peters et al., 2018; Radford et al., 2019; Devlin et al., 2019) have been shown to be beneficial for NLP tasks, including structured prediction (Kulmizev and Straka, 2019; Kondratyuk and Straka, 2019). Most of the pre-training objectives are based on variants of language models (LM); that is, the model is trained to predict lexical items from partial inputs. Masked Language Model (MaskLM) is a typical example, popularized by BERT (Devlin et al., 2019), which masks out lexical tokens in the input sequences and predicts their identities. Since natural sentences contain not only lexical tokens but also their linearized word order, it is a natural question whether we can perform pre-training by "masking out" and recovering word order information.
Word order is an important method of grammatical encoding (Dryer, 2007), and can play an important role in predicting basic sentence structures (Naseem et al., 2012; Tackström et al., 2013; Ammar et al., 2016; Ahmad et al., 2019). Recently, Wang et al. (2018) pre-train an explicit word reordering model and show that its contextualized representations improve dependency parsing.
In this work, we explore a local ordering pretraining strategy with word-selection objectives. Instead of completely discarding original word order information, we segment the input sentence into local bags of words and keep the ordering of these bags. Inside each bag, we discard all the local word orders and train the model to recover them. Furthermore, we simplify the training objectives: instead of training explicit word linearizers which require extra unidirectional decoders, we only ask the model to select original neighboring words. This scheme simplifies the pre-training procedure and enhances the encoder since it can take information from the whole sentence.
A similar idea is explored in StructBERT (Wang et al., 2020), which adopts a word structural objective by shuffling and re-predicting randomly selected subsets of trigrams. Our method is different in that we make local bags of words instead of shuffling and we adopt simpler and cheaper word-selection objectives. Moreover, we focus on empirical experiments and error analysis on structured prediction tasks.
We evaluate on three structured prediction tasks (dependency parsing, part-of-speech (POS) tagging, and Named Entity Recognition (NER)) over four languages (English, Finnish, Czech, Italian). The highlights of our findings are:
- For local ordering pre-training, the best performance is obtained when information is partially masked out to a suitable degree. (§3.2.1)
Figure 1: Illustration of the local ordering pre-training strategy. We segment the input sentence into local bags (bag size is fixed to three here) and discard word order information inside each bag by assigning same position indexes. Training objectives are to select original neighboring words. Here, we only show the scenario for direct left-neighbor selection, while selections for other positions will be similar.
- Even when pre-trained with a small amount of data (1M Wikipedia sentences), our method can improve the performances of structured predictors in a consistent way. Our method performs comparably to MaskLM and there can be further improvements when combining the two objectives, especially for parsing, which is the most structured task we explore. (§3.2.2, §3.3)
- The pre-trained models make fewer structured errors, suggesting that they may be able to capture higher-order patterns and feature combinations from unlabeled data. (§3.4)
# 2 Local Ordering Pre-training
Word reordering or linearization itself is an interesting task, aiming to arrange a bag of words into a natural sentence (Liu et al., 2015; Zhang and Clark, 2015; Schmaltz et al., 2016). Wang et al. (2018) show that representations from an explicit reordering model can benefit dependency parsing. However, there may be two issues with an explicit reordering model for pre-training. Firstly, the input is a bag of words without any positional information. This could discard too much information, leading to relatively large discrepancies between pre-training and fine-tuning. Moreover, training explicit reordering models requires unidirectional decoders, which are only aware of contexts from one direction and cannot make full use of the bidirectional information at one time.
To mitigate these issues, we explore a local ordering pre-training strategy with word-selection objectives. Inspired by MaskLM, where only some of the tokens are masked out, we "mask out" partial ordering information by segmenting the input sentence into multiple local bags of words, and only discarding word orders inside each bag (§2.1). Moreover, we adopt simpler training objectives of selecting original neighboring words, which avoids the need for unidirectional decoders and focuses the pre-training on the encoder (§2.2).
# 2.1 Local Bags of Words
Instead of discarding all positional information, we keep the overall ordering and only discard local word orders. This is achieved by segmenting the input sentence into a sequence of local bags of words. In this way, the model is not aware of the local word orders inside each bag, but the overall ordering of the bags is kept. Figure 1 provides a simplified example to illustrate this scheme. We specify special positional encodings to "mask out" local word orders: inside each local bag, all the tokens get the same positional indexes. For example, the position indexes in the first bag {There, is, a} are all set to 0, while in the second bag {cat, on, the}, the position indexes are all cast to 3.
The above example illustrates a simplified scheme, whereas in actual pre-training, we adopt several variations to make it more flexible. 1) First, for the position indexes inside each bag, we do not fix them to the index of the first token, but randomly pick a representative token and adopt its index. For example, in the second bag, we randomly choose a representative index from $\{3,4,5\}$ , and then set all position indexes to this value. 2) Moreover, for each local bag, we randomly sample its bag size from a pre-defined range, instead of using a fixed size. 3) In addition, we randomly pick half of the bags and keep the original position indexes in them, which is another way of retaining partial ordering information.
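
A minimal sketch of this corruption procedure as we read it, with illustrative parameter names; it is not the authors' implementation.

```python
import random

def corrupt_positions(n_tokens, max_bag=7, keep_prob=0.5):
    """Return corrupted position indexes p_0..p_{n-1} (a sketch of Section 2.1)."""
    positions, i = [], 0
    while i < n_tokens:
        size = random.randint((max_bag + 1) // 2, max_bag)  # variable bag size
        bag = list(range(i, min(i + size, n_tokens)))
        if random.random() < keep_prob:
            positions.extend(bag)                # keep original indexes in half the bags
        else:
            rep = random.choice(bag)             # random representative index for the bag
            positions.extend([rep] * len(bag))   # whole bag shares one position index
        i += len(bag)
    return positions
```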
# 2.2 Word-selection Objectives
Since the aim of pre-training is not the pre-training task itself but the encoder, we do not need an explicit word reordering model, which may require unidirectional decoders. In some way, an explicit reordering model can be regarded as a LM which constrains candidate words to come from the input sentence. Therefore, it may suffer from the same problem as unidirectional LMs: at one time, contexts from only one direction can be utilized instead of from both directions. This is the bias of unidirectional decoders and we replace them with simpler word selectors.
Specifically, we only ask the model to select original neighbors for each word that loses its local word order information. Figure 1 illustrates the case for left-neighbor selection. This task is nontrivial since the model is unaware of word orders inside each bag. In many scenarios, it needs to capture certain global sentence structures. For example, in the second bag {cat, on, the}, if looking only locally, we may pick "the" as the left neighbor of "cat". However, if we notice that there is another determiner "a" in the first bag, then "the" will not be the only choice.
In actual running, we adopt four classification tasks corresponding to different original offsets: two for the selection of the original left neighbor (-1) and the left of the left neighbor (-2) and two for the right ones. Each word selector gets its own parameters. Since the word selection task is similar to dependency parsing (Zhang et al., 2017), we adopt the biaffine scorer (Dozat and Manning, 2017). The training objectives are negative log likelihoods on selecting the correct words.
Formally, assume that we have an input sequence of $w_0, w_1, \ldots, w_{n-1}$ , and we generate their corrupted positions $p_0, p_1, \ldots, p_{n-1}$ with our local bag strategy. For a specific word $w_i$ (where $p_i \neq i$ ) and a specific selection offset $\delta$ ( $\delta \in \{-2, -1, 1, 2\}$ ), its loss objective will be (for brevity, we omit the conditions on the inputs):
$$
\ell_{w_{i},\delta} = -\log \frac{\exp \mathrm{Score}_{\delta}(w_{i}, w_{i+\delta})}{\sum_{j} \exp \mathrm{Score}_{\delta}(w_{i}, w_{j})}
$$
Here, $\mathrm{Score}_{\delta}$ denotes the score of two tokens having positional difference $\delta$.
Notice that the simplified tasks are not necessarily easier than the explicit reordering task, since we can recover the original word order if we know all the local neighboring information. The word-selection objectives get rid of the explicit decoder as well as its unidirectional bias. At the same time, the model is still as efficient as word reordering models, since we only need to select among the words that appear in the input sentence, and there is no need to do the computationally expensive normalizations over the whole vocabulary as in LMs.
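
The sketch below shows one possible PyTorch rendering of a single word selector with a biaffine scorer and the loss above; the module and function names are our own assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborSelector(nn.Module):
    """Biaffine scorer for one selection offset delta (cf. Dozat and Manning, 2017)."""
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.U = nn.Parameter(torch.empty(hidden_dim, hidden_dim))
        nn.init.xavier_uniform_(self.U)

    def forward(self, H):
        # H: (batch, n, hidden_dim); scores[b, i, j] = Score_delta(w_i, w_j)
        return H @ self.U @ H.transpose(1, 2)

def selection_loss(scores, delta, corrupted):
    # corrupted: (batch, n) boolean, True where p_i != i (local order masked out)
    batch, n, _ = scores.shape
    target = torch.arange(n) + delta                   # gold neighbor index i + delta
    valid = corrupted & (target >= 0) & (target < n)   # stay inside the sentence
    logp = F.log_softmax(scores, dim=-1)               # normalize over input words, not the vocabulary
    tgt = target.clamp(0, n - 1).expand(batch, n)
    nll = -logp.gather(-1, tgt.unsqueeze(-1)).squeeze(-1)
    return (nll * valid).sum() / valid.sum().clamp(min=1)
```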
# 2.3 Hybrid Training
We further perform multi-task hybrid training, including both the ordering and MaskLM objectives. Our local ordering strategy can be integrated with MaskLM in a natural way: since half of the local bags preserve the original position indexes, we randomly select words inside those bags to mask and predict. This scheme is nearly as effective as the original one because we can segment local bags and mask words at the same time, and thus there is no need to run through the encoder twice. The encoder produces one set of contextualized representations, which we can feed to the corresponding modules of the two tasks. We adopt equal weights (both set to 0.5) for the two objectives.

Figure 2: Illustration of the overall training scheme. The encoder is pre-trained in the pre-training stage with the unlabeled data. Later, the task-specific decoder is stacked and both modules are further fine-tuned with task-specific labeled data.
# 3 Experiments

# 3.1 Settings
In this sub-section, we briefly describe our main experiment settings<sup>1</sup>. Please refer to the Appendix for more details.
**Scheme** Figure 2 shows our overall training scheme. We take a two-step approach: pre-training plus fine-tuning. First, the encoder is pre-trained using a relatively large unlabeled corpus; then the task-specific decoders are stacked upon the pre-trained encoder and all the modules are fine-tuned with task-specific labeled data, which is much smaller than the pre-training data.
**Data** We explore four languages to evaluate our pre-training strategy: English (en), Finnish (fi), Czech (cs), and Italian (it). For the unlabeled pre-training data, we collect Wikipedia corpora from the 2018-Fall Wiki-dump. Due to limited computational resources, we sample 1M sentences for each language. For POS tagging and dependency parsing, we utilize Universal Dependencies (UD) v2.4 (Nivre et al., 2019). For NER, we utilize CoNLL03 (Tjong Kim Sang and De Meulder, 2003) for English, Digitoday (Ruokolainen et al., 2019) for Finnish, the Czech Named Entity Corpus (Ševčíková et al., 2007) for Czech, and EVALITA 2009 (Speranza, 2009) for Italian. We mainly follow the default dataset splits, except for the training sets. To investigate middle- and low-resource scenarios, we explore three settings of different training sizes, sampling 1k, 5k, and 10k sentences from the original training set. We adopt standard evaluation criteria: accuracy for POS tagging, first-level (language-independent) Labeled Attachment Score (LAS) for dependency parsing, and F1 score for NER.
**Encoders** We adopt encoders with the same architecture: a 6-layer Transformer whose head number, model dimension, and feed-forward hidden dimension are set to 8, 512, and 1024, respectively. In addition, we adopt relative positional encodings (Shaw et al., 2018; Dai et al., 2019) within the Transformer, since in preliminary experiments we found this helpful for the target tasks. In contrast to BERT, we adopt words<sup>2</sup> as basic input and modeling units. We further include a character-level Convolutional Neural Network (CNN) to capture the internal structure of words.
**Decoders** For the decoders of specific tasks, we adopt typical solutions. For dependency parsing, we adopt the biaffine graph-based decoder (Dozat and Manning, 2017). For POS tagging, we simply add a single-layer classifier over all tags (Yang et al., 2018). For NER, we adopt a standard CRF layer (Lafferty et al., 2001).
**Training** For model training, we adopt the Adam optimizer (Kingma and Ba, 2014) with a warm-up styled learning rate schedule. In pre-training, each mini-batch includes 480 sentences and we train the model for 200k steps, of which the first 5k steps linearly increase the learning rate towards 4e-4. The pre-training stage takes around 3 days with one RTX 2080 Ti GPU. In task-specific training, we adopt a mini-batch size of 80 sentences and train the model for at most 250 epochs over the training set, which generally takes several hours using a single GPU.
# 3.2 Effects of Pre-training Strategies
In this sub-section, we explore the effects of pretraining strategies. Here, we take the English dependency parsing dataset for development.
<table><tr><td>R</td><td>3</td><td>5</td><td>7</td><td>9</td><td>11</td><td>∞</td></tr><tr><td>10k</td><td>86.83</td><td>87.72</td><td>87.75</td><td>87.91</td><td>87.64</td><td>86.98</td></tr><tr><td>5k</td><td>85.61</td><td>86.54</td><td>86.70</td><td>86.70</td><td>86.38</td><td>85.64</td></tr><tr><td>1k</td><td>80.87</td><td>82.07</td><td>82.25</td><td>81.91</td><td>82.17</td><td>79.06</td></tr></table>
Table 1: Comparisons of bag size ranges $\left[\frac{\mathrm{R} + 1}{2},\mathrm{R}\right]$ for the local ordering strategy. “ $\mathbf{R} = \infty$ ” indicates that all words from one input sentence fall into one bag. Evaluations are performed with the English dependency parsing task (LAS on development set). Each row represents different (target task) training sizes.
# 3.2.1 Bag Size Range
As described in §2.1, we adopt variable bag sizes for the ordering pre-training. The aim is to make the model more flexible and prevent it from always seeing the same patterns associated with fixed bag sizes. The neighbor selection process is not affected by this since it does not care about the bag boundaries, and selects among all the input tokens. The bag size range is a major setting in this strategy. To reduce the number of hyper-parameters, we specify a maximum bag size $R$ , and set the bag size range to $[\frac{R + 1}{2}, R]$ . For example, if $R$ is set to 7, then for each bag, its size is randomly selected from 4 to 7. We also include a setting where $R$ is $\infty$ , which corresponds to the case where all words fall into one global bag, as in the full word reordering model.
The results are shown in Table 1. Firstly, in the case of $R = \infty$ , the model generally performs worse than those with local bags. This shows the effectiveness of keeping partial ordering information for pre-training, which may possibly reduce the discrepancies between pre-training and fine-tuning, matching our intuition of the local ordering strategy. Furthermore, when the bag size is too small as in the case of $R = 3$ , the performances are also worse, possibly because the task becomes so simple that the model learns little in pre-training. Among the middle-ranged settings of $R$ , which partially mask out information in suitable degrees, the results do not differ too much. In the following experiments, we fix $R$ to 7, which performs well overall.
# 3.2.2 Comparisons
We compare various pre-training strategies and show the results in Table 2. As split in this table, we arrange the models into three groups:
(1) The first group includes models without pre-trained encoders. "Random" gets random initialization, and "fastText" gets its word lookup table initialized from static fastText embeddings<sup>3</sup>.
<table><tr><td></td><td>Random</td><td>fastText</td><td>BiLM</td><td>MaskLM</td><td>LBag</td><td>Hybrid</td><td>BERT</td></tr><tr><td>10k</td><td>83.70±0.36</td><td>86.00±0.10</td><td>87.28±0.16</td><td>87.96±0.09</td><td>87.75±0.13</td><td>88.27±0.11</td><td>89.60±0.10</td></tr><tr><td>5k</td><td>80.75±0.35</td><td>83.17±0.24</td><td>86.16±0.03</td><td>87.09±0.10</td><td>86.70±0.13</td><td>87.35±0.10</td><td>88.47±0.11</td></tr><tr><td>1k</td><td>69.93±0.32</td><td>72.84±0.25</td><td>80.75±0.03</td><td>82.65±0.04</td><td>82.25±0.07</td><td>83.28±0.26</td><td>84.62±0.28</td></tr></table>
Table 2: Comparisons of different pre-training strategies with the English dependency parsing task (LAS on development set, averaged over three runs). Each row represents different (target task) training sizes.
(2) The second group includes models whose encoders are pre-trained with the same settings on the 1M Wiki corpus. "BiLM" denotes Elmo-styled (Peters et al., 2018) Bidirectional LM (BiLM), where we train left-to-right and right-to-left language models with causality attention masks. "MaskLM" means the BERT-styled MaskLM, where $15\%$ of the words are masked out and predicted. "LBag" denotes our Local-Bag based ordering strategy and "Hybrid" is the multi-task hybrid model trained with both ordering and MaskLM objectives.
(3) The third group only contains "BERT", which directly utilizes pre-trained $\mathrm{BERT}^4$ .
In the first group, where there are no pre-trained encoders, the performances drop drastically in low-resource cases. The pre-trained static word embeddings help somewhat, but their degree of performance drop is very similar to the baseline's: there are performance gaps of nearly 14 points between the $10\mathrm{k}$ and $1\mathrm{k}$ training sizes. If we adopt pre-trained encoders, as in the second and third groups, the performance clearly improves for all training sizes. In particular, in the low-resource (1k) settings, the performance drops from the $10\mathrm{k}$ settings are much smaller than those in the first group.
The more interesting comparisons are among those in the second group, where the settings are kept the same except for the pre-training strategies. Firstly, BiLM performs worst in this group. The reason may be that BiLM contains unidirectional decoders, which cannot make full use of the inputs. The performance of our local ordering strategy (LBag) is very close to that of MaskLM, with gaps of only 0.2 to 0.4 LAS. Furthermore, if we combine the ordering and MaskLM objectives as in the Hybrid model, there can be further improvements. This suggests that local ordering pre-training may capture information orthogonal to MaskLM. Overall, the model performances in the second group do not differ too much, suggesting that the effectiveness of contextualized pre-training can be realized as long as the model is capable enough.
Unsurprisingly, BERT performs the best, possibly due to its larger model and training corpus. Nevertheless, if calculating the gaps between the second group and BERT, we can find that they are relatively consistent as training sizes get smaller. In contrast, the gaps between the first group and BERT obviously get larger in lower-resource settings. This again suggests the effectiveness of contextualized pre-training.
For the pre-trained models in the following experiments, we focus on three strategies: MaskLM, LBag and Hybrid, since they are the ones that we are most interested to compare.
# 3.3 Main Results
Figure 3 shows the main results on the test sets. The patterns are very similar to the development results. Pre-trained BERT obtains the best results, while our smaller pre-trained models lag behind by small gaps, which are relatively consistent across different training sizes. Those without pre-trained encoders mostly get worse results, especially in low-resource cases. For the parsing task, our local ordering strategy can get comparable results to those of MaskLM, and overall there can be further improvements by combining the two objectives. For the other two sequence labeling tasks, the results are mixed, possibly because in these cases lexical information may be more important, and LM-styled pre-training may be better at capturing it. Nevertheless, our strategy still generally obtains comparable results to MaskLM.
# 3.4 Analysis
It is not surprising that contextualized pre-training can help structured prediction, since pre-trained encoders may have already captured structured patterns from unlabeled data. We perform detailed analysis to investigate in which aspects pre-training is helpful.
Figure 3: Test results for dependency parsing (LAS), POS tagging (Accuracy%) and NER (F1 score).
|
| 158 |
+
|
| 159 |
+

|
| 160 |
+
|
| 161 |
+

|
| 162 |
+
|
| 163 |
+

|
| 164 |
+
|
| 165 |
+
analysis to investigate in what aspects pre-training are helpful. We select low-resource dependency parsing (with 1k training size) as the analyzing task, since parsing is the most structurally complex task we explore and there may be more obvious patterns in low-resource scenarios. For error analysis of parsing, Kulmizev et al. (2019) provide detailed error breakdowns on various factors, along the lines of (McDonald and Nivre, 2007, 2011). In this work, we explore different aspects, especially focusing on the structured nature of the task.
# 3.4.1 On Word Frequencies

Since pre-training is performed on a much larger corpus than the task-specific training set, we would expect pre-trained models to perform better on out-of-vocabulary (OOV) and rare words, since such words are seen more often in pre-training.

To investigate this, we split the words of the development set into four bins according to their frequency ranking in the (target task) training vocabulary. Except for the OOV bin, whose words do not appear in training, the three other bins contain the same number of running tokens; a sketch of this binning is given below.
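A minimal sketch of this binning procedure, assuming `train_counts` maps each word to its frequency in the (target task) training set (the names are illustrative, not our analysis code):

```python
def bin_by_frequency(dev_words, train_counts, n_bins=3):
    """Assign each development-set token to an OOV bin or to one of
    `n_bins` frequency bins with (roughly) equal running token counts."""
    bins = {"oov": [w for w in dev_words if w not in train_counts]}
    # Sort the remaining running tokens from most to least frequent
    # in the training data, then cut the list into equal thirds.
    ranked = sorted((w for w in dev_words if w in train_counts),
                    key=lambda w: -train_counts[w])
    per_bin = len(ranked) / n_bins
    for i, w in enumerate(ranked):
        bins.setdefault(f"bin{int(i // per_bin)}", []).append(w)
    return bins
```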
![](images/b446bdcfa35449b9c35a2fe18cdd45e5b0d8eeda44e76d6c6f52a8d45e63b65d.jpg)

Figure 4: Performance breakdown of dependency parsing (LAS on development sets, trained with 1k sentences) over word frequencies. Non-OOV words are evenly divided into the first three bins according to frequency ranking in the (target task) training vocabularies.

Figure 4 shows the breakdown of the results. First, comparing fastText against the Random baseline, we find that, overall, most of the improvements come from low-frequency and OOV words. For words with high and middle frequency, static embeddings provide smaller or sometimes no obvious improvements. With pre-trained encoders, not only do the results on rare and OOV words get much better, but even high-frequency words improve by a large margin. This suggests that the benefits of pre-training include not just that each individual word is known better, which static embeddings may also capture, but also that contextualized pre-training may be able to identify higher-order structured patterns.

When comparing the models with pre-trained encoders, the trends are very similar to those of the overall LAS scores. A slightly surprising phenomenon is that, although our models are trained on much less data than BERT, the performance gaps are still relatively consistent across the frequency bins. This may suggest that, even for rare or OOV words, their contexts can provide signals strong enough for syntax prediction.
# 3.4.2 On Higher-order Matches

A dependency tree is a collection of dependency edges, which are not independent but interact with each other, forming higher-order structures. To investigate how pre-trained encoders help in predicting higher-order structures, we specify several frame patterns and calculate higher-order matching accuracies. Here, we use "frame" to denote a collection of dependency edges that form a pre-defined pattern. Accuracy is calculated by counting how many times all the dependency edges in a specific frame are correctly predicted.
We investigate five frame patterns: 1) pred: all edges connecting a predicate and its core argument children; 2) mwe: all multi-word expression (MWE) edges connected to the head word of an MWE phrase; 3) conj: all edges related to a conjunction; 4) expl: an expletive edge and its core argument siblings; 5) acl: an adjectival clause modifier and all its core argument children. Please refer to the Appendix for examples and more details on the extraction of these higher-order patterns.

Figure 5 shows the results. We can again observe that static word embeddings improve higher-order accuracies only marginally, while pre-trained encoders tell a very different story. For the "pred" patterns, the trends are very similar to the overall LAS results, where LBag is slightly worse than MaskLM and Hybrid is better. The interesting cases are "mwe" and "conj", where LBag mostly performs better than MaskLM. The reason might be that these patterns are more fixed with respect to word order, which may be captured better by ordering pre-training. For the last two types, the results are mixed across languages. Nevertheless, the ordering pre-trained models still achieve results comparable to, and sometimes better than, MaskLM.

![](images/0bba0ba7b91d5b1e72b3dce06acc39e28a32e67bcafa24cda33f764ba4d1d3b7.jpg)

Figure 5: Comparisons of higher-order matching accuracies on dependency parsing (on development sets, with 1k training). There are no results for "fi-expl", since "expl" is not used in the Finnish (TDT) treebank we adopt.

# 3.4.3 On Head Errors

Finally, we investigate a special error pattern in dependency parsing, for which Figure 6 shows an example. Here, all the predicted edges are wrong, yet there seems to be only one head-selection error: "Epic" is an apposition modifier of "movie", but the model picks "Epic" as the head, which leads to all the other errors. In constituency trees, an attachment error may lead to multiple wrong brackets (Kummerfeld et al., 2012). In contrast, in dependency trees, a pure attachment error may influence no other edges, but head errors may lead to multiple related errors.

![](images/faee22af58e7e64452bce1bff04f363a26b80830c42a0b35656f05cee283acc4.jpg)

Figure 6: An example of a head error. The edges above the tokens are the gold ones and the edges below are the predictions. The red edge indicates the back edge, which in this case is directly reversed.
In the pattern of head errors, a predicted edge that forms a back edge with respect to the original gold tree can usually serve as the signature. The prediction of a back edge indicates that a word is wrongly attached to one of its descendants in the gold tree. In addition to the wrongly predicted back edge itself, there must be at least one other error, since loops are not allowed in trees. The example in Figure 6 shows a special case where the back edge is directly reversed: the head and the modifier are predicted in reverse. Such 1-step back edges usually indicate local head errors, while back edges involving multiple steps usually suggest more complex structured errors. A sketch of how back edges can be detected is given below.
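The following is a minimal sketch of back-edge detection, assuming `gold_heads` and `pred_heads` map each token index to its head index, with 0 denoting the artificial root (an illustrative interface, not our actual analysis code):

```python
def back_edge_steps(tok, gold_heads, pred_heads):
    """Return n if the edge predicted for `tok` is an n-step back edge,
    i.e., its predicted head is a descendant of `tok` in the gold tree
    (n = 1 means the gold edge was predicted in reverse); else None."""
    node, steps = pred_heads[tok], 0
    while node != 0:          # climb the gold tree from the predicted head
        if node == tok:       # we reached `tok`: the head was a descendant
            return steps
        node = gold_heads[node]
        steps += 1
    return None               # reached the root: not a back edge
```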
![](images/9c82ccaefc4fc65e28f16627d40d69286dcec398fa08a37d53e12b8e4a512c09.jpg)

Figure 7: Illustration of a multi-step back edge. The edges above the tokens are the gold ones (note that in the actual sequence, the tokens do not necessarily appear in left-to-right order). The red edge below indicates an $n$-step back edge with respect to the gold tree.

Figure 8 shows the results on back edges. Firstly, following the trends of the previous analyses, the pre-trained models clearly predict fewer back edges and thus make fewer head errors, again suggesting structural improvements. Moreover, comparing the 1-step back-edge percentages, the pre-trained models also have higher rates, indicating that their head errors are more local. Further comparing the pre-training strategies, we can see that, except for Finnish, MaskLM predicts fewer back edges and makes more local head errors (indicated by higher 1-step back-edge percentages) than LBag. This suggests that LM pre-training, which directly predicts lexical items, may be more sensitive to information about head words.
![](images/36050e24b5e2b0ecdfd91c7d05f5c600d32531f1a37eed69e806e27867963a08.jpg)

Figure 8: Results on back edges (on development sets, with 1k training). The light bars indicate the number of all back edges, while the darker, shaded parts represent the number of 1-step back edges. The numbers on the $x$-axis indicate the percentage of 1-step back edges (which indicate more local errors) among all back edges.

![](images/7265b0818fd4a9c563d3d5a1471a6746cdeb6c42ab9f54e4c7b1c15e7878422e.jpg)

![](images/35ea82d04ba374ab1c197e6bcc9911442637cbcefed47a1847e86a47a0eaaebf.jpg)

![](images/8dbc94ddf5410e3a10fa46dd9f522733b1b58c83154c8a25a20dd64c70f95026.jpg)

![](images/f1c9dee74608326dc7e6bf29ebca53b78eac25d32c8fff01571f544e69061058.jpg)

Figure 9: Results on head-error related errors (on development sets, with 1k training). The light bars indicate the number of total erroneous edges, while the darker, shaded parts represent the number of those that are related to head errors. The numbers on the $x$-axis indicate the relatedness rates: the percentage of head-error related erroneous edges among all erroneous edges.

![](images/96b92ee1db7bfa6ea531a7c0ab6a8692b37e076217c8bb8a86d5ef42df86cca7.jpg)

![](images/43aa85bfab0e27950c2b7bc2eab4c60d7da992a43e03fa4dc0b9b2a310d66dc5.jpg)

![](images/a839dcdac4da4a65a1cbbf29a13246e4c2efed70aa2f89c41bbb0ca2094753e0.jpg)

We further investigate errors<sup>5</sup> that might be related to head errors. We adopt a relatively simple strategy: first identify all back edges, and then include other erroneous edges that might be related to any head error. We use the diagram in Figure 7 to illustrate our criterion for relatedness. We mark three types of erroneous edges as head-error related: 1) the back edge itself $(h_n \to h_0)$, 2) any wrongly predicted children of $h_n$ whose gold head should be one of $[h_0, h_1, \dots, h_{n-1}]$, and 3) any errors in the head prediction of the tokens $[h_0, h_1, \dots, h_{n-1}]$. This criterion may miss or over-predict related errors; nevertheless, we find it a reasonable approximation. A sketch of the criterion is given below.
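A sketch of this relatedness criterion under the same assumed head-map interface as in the back-edge sketch above (illustrative, not our exact analysis code):

```python
def head_error_related(tok, n_steps, gold_heads, pred_heads):
    """Mark erroneous edges related to the back edge (h_n -> h_0)
    predicted for `tok` (= h_n), following the three cases above."""
    # Recover the gold chain h_0, ..., h_{n-1} below tok.
    chain, node = [], pred_heads[tok]          # h_0 is the predicted head
    for _ in range(n_steps):
        chain.append(node)
        node = gold_heads[node]
    related = {tok}                            # case 1: the back edge itself
    for w, g in gold_heads.items():
        if w != tok and pred_heads.get(w) != g:    # w's head is wrong
            # case 2: wrong children of h_n whose gold head is on the chain
            if pred_heads.get(w) == tok and g in chain:
                related.add(w)
            # case 3: head errors for the chain tokens themselves
            if w in chain:
                related.add(w)
    return related
```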
Figure 9 shows the results. First, as in Figure 8, the pre-trained models are less influenced by head errors, again suggesting structural improvements. Further comparing the pre-training strategies, MaskLM is generally less influenced by head errors, as shown by lower head-error-related error counts or lower relatedness rates.
# 4 Conclusion

In this work, we empirically explore an alternative pre-training strategy for contextualized encoders. Instead of training variants of language models, we adopt a local word ordering strategy, which segments the inputs into local bags of words, together with order-based word-selection objectives. Evaluated on typical structured prediction tasks, we demonstrate the effectiveness of this method. With further analysis on one typical structured task, we show that pre-trained encoders can bring improvements in a structured way. We hope this empirical work can shed some light on how pre-trained contextualized encoders capture language structures and inspire future work in this direction.
# References

Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2440-2452, Minneapolis, Minnesota. Association for Computational Linguistics.

Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.

Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR.

Matthew S Dryer. 2007. Word order. Language typology and syntactic description, 1:61-131.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China. Association for Computational Linguistics.

Artur Kulmizev, Miryam de Lhoneux, Johannes Gontrum, Elena Fano, and Joakim Nivre. 2019. Deep contextualized word embeddings in transition-based and graph-based dependency parsing - a tale of two parsers revisited. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2755-2768, Hong Kong, China. Association for Computational Linguistics.

Jonathan K. Kummerfeld, David Hall, James R. Curran, and Dan Klein. 2012. Parser showdown at the Wall Street corral: An empirical investigation of error types in parser output. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1048-1059, Jeju Island, Korea. Association for Computational Linguistics.

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289.

Yijia Liu, Yue Zhang, Wanxiang Che, and Bing Qin. 2015. Transition-based syntactic linearization. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 113-122, Denver, Colorado. Association for Computational Linguistics.

Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 122-131, Prague, Czech Republic. Association for Computational Linguistics.

Ryan McDonald and Joakim Nivre. 2011. Analyzing and integrating dependency parsers. Computational Linguistics, 37(1):197-230.

Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629-637, Jeju Island, Korea. Association for Computational Linguistics.

Joakim Nivre, Mitchell Abrams, Željko Agić, and et al. 2019. Universal dependencies 2.4. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (UFAL), Faculty of Mathematics and Physics, Charles University.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.

Teemu Ruokolainen, Pekka Kauppinen, Miikka Silfverberg, and Krister Lindén. 2019. A Finnish news corpus for named entity recognition. arXiv preprint arXiv:1908.04212.

Allen Schmaltz, Alexander M. Rush, and Stuart Shieber. 2016. Word ordering without syntax. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2319-2324, Austin, Texas. Association for Computational Linguistics.

Magda Ševčíková, Zdeněk Žabokrtský, and Oldřich Krůza. 2007. Named entities in Czech: Annotating data and developing NE tagger. In Lecture Notes in Artificial Intelligence, Proceedings of the 10th International Conference on Text, Speech and Dialogue, Lecture Notes in Computer Science, pages 188-195, Berlin / Heidelberg. Springer.

Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.

Manuela Speranza. 2009. The named entity recognition task at EVALITA 2009. In EVALITA 2009.

Oscar Täckström, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1061-1071, Atlanta, Georgia. Association for Computational Linguistics.

Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.

Wei Wang, Bin Bi, Ming Yan, Chen Wu, Jiangnan Xia, Zuyi Bao, Liwei Peng, and Luo Si. 2020. StructBERT: Incorporating language structures into pre-training for deep language understanding. In International Conference on Learning Representations.

Wenhui Wang, Baobao Chang, and Mairgup Mansur. 2018. Improved dependency parsing using implicit word connections learned from unlabeled data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2857-2863, Brussels, Belgium. Association for Computational Linguistics.

Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879-3889, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Xingxing Zhang, Jianpeng Cheng, and Mirella Lapata. 2017. Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 665-676, Valencia, Spain. Association for Computational Linguistics.

Yue Zhang and Stephen Clark. 2015. Discriminative syntax-based word ordering for text generation. Computational Linguistics, 41(3):503-538.
# Appendices

# A Detailed Experiment Settings

In this appendix, we describe the details of our experiment settings, mainly covering datasets and hyper-parameter settings.

# A.1 Datasets

Languages In this work, we explore four languages from different language family subdivisions: English (Germanic), Finnish (Uralic), Czech (Slavic) and Italian (Romance). It may be interesting to see how the effects of pre-training are influenced by specific language characteristics, for example, the agglutination in Finnish and the relatively free word order in Czech. We would like to include more languages in future work, especially ones from different language families.

Unlabeled data For pre-training, we use unlabeled data collected from the 2018-Fall Wiki-dump<sup>6</sup>. We extract raw texts using WikiExtractor<sup>7</sup> and then perform sentence splitting and tokenization using UDPipe<sup>8</sup>. Due to limited computational resources, for each language we sample 1M sentences whose lengths are between 5 and 80 for pre-training. Our empirical results show that, for the basic structured prediction tasks explored in this work, such a relatively small amount of unlabeled data is already enough to bring obvious improvements.
Vocabularies Except for the models that directly use pre-trained BERT, all models regard words as the basic input and modeling units. Therefore, for the pre-trained encoders, we collect vocabularies from the unlabeled corpus, filtering out rare words that appear fewer than five times; a minimal sketch of this vocabulary construction appears at the end of this subsection. Table 4 summarizes the related statistics. We adopt word-level inputs mainly to follow the conventions of the target tasks explored in this work and to compare with baseline models without pre-trained encoders. It would be interesting to explore other input schemes (such as sub-words, as in BERT) in future work, which is orthogonal to the main focus of this work.

<table><tr><td>Lang.</td><td>#Sent.</td><td>#Token</td><td>#Vocab</td><td>OOV%</td></tr><tr><td>en</td><td>1M</td><td>23.6M</td><td>103k</td><td>2.7%</td></tr><tr><td>fi</td><td>1M</td><td>14.1M</td><td>177k</td><td>10.9%</td></tr><tr><td>cs</td><td>1M</td><td>19.2M</td><td>175k</td><td>5.1%</td></tr><tr><td>it</td><td>1M</td><td>25.3M</td><td>128k</td><td>2.6%</td></tr></table>

Table 4: Statistics of the unlabeled Wiki corpus for pre-training. For each language (Lang.), we sample 1M sentences ("#Sent."). "#Token" indicates the number of tokens (words), and "#Vocab" denotes the vocabulary size after rare-word filtering. The final column gives the out-of-vocabulary (OOV) rate over the 1M corpus.

Target tasks We explore three typical structured prediction tasks: dependency parsing, part-of-speech (POS) tagging and named entity recognition (NER). For the tagging and parsing tasks, we utilize annotations from UDv2.4<sup>9</sup>. Specifically, we use the following treebanks: "English-EWT", "Finnish-TDT", "Czech-PDT" and "Italian-ISDT". For NER, we utilize various datasets: CoNLL03<sup>10</sup> (Tjong Kim Sang and De Meulder, 2003) for English, Digitoday<sup>11</sup> (Ruokolainen et al., 2019) for Finnish, the Czech Named Entity Corpus<sup>12</sup> (Ševčíková et al., 2007) for Czech and EVALITA 2009<sup>13</sup> (Speranza, 2009) for Italian. We adopt only simple settings for the NER tasks; specifically, we ignore nested annotations for Finnish NER and consider only the supertypes for Czech NER. For it-NER, we take the first 10k sentences as the training set and the remaining 1.2k as the development set. Table 3 lists the statistics of the original datasets.

<table><tr><td rowspan="2">Lang.</td><td colspan="3">NER</td><td colspan="3">Parsing/POS</td></tr><tr><td>Train</td><td>Dev</td><td>Test</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>en</td><td>15.0k/203.6k</td><td>3.5k/51.4k</td><td>3.7k/46.4k</td><td>12.5k/204.6k</td><td>2.0k/25.1k</td><td>2.1k/25.1k</td></tr><tr><td>fi</td><td>13.5k/180.2k</td><td>1.0k/13.6k</td><td>3.5k/46.4k</td><td>12.2k/162.8k</td><td>1.4k/18.3k</td><td>1.6k/21.1k</td></tr><tr><td>cs</td><td>7.2k/160.0k</td><td>0.9k/20.0k</td><td>0.9k/20.1k</td><td>68.5k/1.2m</td><td>9.3k/159.3k</td><td>10.1k/173.9k</td></tr><tr><td>it</td><td>10.0k/189.1k</td><td>1.2k/23.4k</td><td>4.1k/86.4k</td><td>13.1k/276.0k</td><td>0.6k/11.9k</td><td>0.5k/10.4k</td></tr></table>

Table 3: Statistics (#Sent./#Token) of the original Parsing/POS and NER datasets. In our experiments, we adopt the original development and test sets, but sample training sets of different sizes from the original training sets.
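As promised above, a minimal sketch of the vocabulary construction described in the Vocabularies paragraph (the `<unk>` handling is an illustrative assumption):

```python
from collections import Counter

def build_vocab(sentences, min_count=5):
    """Collect a word vocabulary from the unlabeled corpus, filtering
    out rare words that appear fewer than `min_count` times."""
    counts = Counter(w for sent in sentences for w in sent)
    vocab = {"<unk>": 0}  # hypothetical reserved id for unknown words
    for word, c in counts.most_common():
        if c >= min_count:
            vocab[word] = len(vocab)
    return vocab
```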
We mainly follow the default dataset splits, but for the training set, we explore three different training sizes by sampling 1k, 5k and 10k sentences<sup>14</sup>. These settings aim at exploring how pre-trained encoders can improve structured learners in middle- and low-resource settings. For evaluation, POS tagging is measured by tagging accuracy and NER by the standard F1 score. For dependency parsing, we report first-level Labeled Attachment Scores (LAS) over all tokens, including punctuation.
# A.2 Hyper-parameter Settings

Table 5 lists our main hyper-parameter settings.

<table><tr><td rowspan="3">Embeddings</td><td>demb</td><td>300</td></tr><tr><td>dchar</td><td>50</td></tr><tr><td>dproj.</td><td>512</td></tr><tr><td rowspan="4">Encoder</td><td>Nlayer</td><td>6</td></tr><tr><td>dmodel</td><td>512</td></tr><tr><td>df</td><td>1024</td></tr><tr><td>position-encoding</td><td>Relative</td></tr><tr><td rowspan="5">PreTrain</td><td>optimizer</td><td>Adam</td></tr><tr><td>learning-rate</td><td>4e-4</td></tr><tr><td>warmup-steps</td><td>5k</td></tr><tr><td>total-steps</td><td>200k</td></tr><tr><td>batch-size</td><td>480</td></tr><tr><td rowspan="3">Decoding</td><td>POS</td><td>Enumeration</td></tr><tr><td>Parsing</td><td>Graph-based(o1)</td></tr><tr><td>NER</td><td>CRF</td></tr><tr><td rowspan="4">FineTune</td><td>optimizer</td><td>Adam</td></tr><tr><td>learning-rate</td><td>2e-4</td></tr><tr><td>total-epochs</td><td>250</td></tr><tr><td>batch-size</td><td>80</td></tr></table>

Table 5: Hyper-parameter settings of the model and training.
Encoder Throughout our experiments, we adopt Transformer encoders with almost the same architecture. As input to the encoder, we include representations of words and characters. Word representations come from a randomly initialized word lookup table, while character representations come from a character-level CNN. A linear layer then projects these input features to the model dimension. Note that there are no other input factors, since these are the ones directly available from the unlabeled corpus. A sketch of this input module is given below.
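A minimal PyTorch sketch of this input module, with dimensions following Table 5 (the class and argument names are illustrative, not our released code):

```python
import torch
import torch.nn as nn

class InputEmbedder(nn.Module):
    """Word lookup table plus a character-level CNN, projected to the
    model dimension (a sketch; dimensions follow Table 5)."""
    def __init__(self, n_words, n_chars, d_emb=300, d_char=50, d_proj=512):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d_emb)
        self.char_emb = nn.Embedding(n_chars, d_char)
        self.char_cnn = nn.Conv1d(d_char, d_char, kernel_size=3, padding=1)
        self.proj = nn.Linear(d_emb + d_char, d_proj)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, chars_per_word)
        w = self.word_emb(word_ids)
        b, s, c = char_ids.shape
        ch = self.char_emb(char_ids.view(b * s, c)).transpose(1, 2)
        ch = torch.relu(self.char_cnn(ch)).max(dim=2).values  # pool over chars
        ch = ch.view(b, s, -1)
        return self.proj(torch.cat([w, ch], dim=-1))
```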
Pre-training We adopt almost identical pre-training schemes for all pre-training strategies, including the optimizer, learning rate schedule and batch sizes. We use one RTX 2080 Ti GPU for pre-training. To fit into GPU memory, we split each mini-batch into multiple pieces and perform gradient accumulation, as sketched below. Pre-training takes around 3 days for the MaskLM, LBag and Hybrid strategies, while BiLM requires around 5 days.
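A minimal sketch of this gradient accumulation scheme, assuming `model(piece)` returns the loss tensor for a sub-batch (an illustrative interface):

```python
def train_step(model, optimizer, batch, n_pieces=4):
    """Split one mini-batch into pieces and accumulate gradients so
    that the effective batch size fits into GPU memory (a sketch)."""
    optimizer.zero_grad()
    for i in range(n_pieces):
        piece = batch[i::n_pieces]
        loss = model(piece) / n_pieces  # keep the overall loss scale unchanged
        loss.backward()                 # gradients accumulate across pieces
    optimizer.step()
```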
Decoders For the specific target tasks, we specify corresponding decoders. Since our main focus is not on decoders, we adopt the standard choices for these tasks. For dependency parsing, we adopt a non-projective first-order (o1) graph-based decoder. For POS tagging, we simply enumerate and select the highest-scoring POS tag for each word. Since dependency parsing and POS tagging share the same datasets, we apply simple multi-task learning and train one joint model for the two tasks. For NER, we adopt a standard CRF layer and perform decoding with the Viterbi algorithm.
Fine-tuning For training or fine-tuning on the target tasks, we also adopt similar schemes. In addition, the learning rate is decreased by a decay rate of 0.75 every 8 epochs when there are no improvements on the development set, which is also used for model selection; a sketch of this schedule is given below. Training on the target tasks usually takes several hours, depending on the training size.
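One plausible reading of this schedule, as a minimal sketch (the names and the exact patience bookkeeping are assumptions, not our exact implementation):

```python
def maybe_decay_lr(optimizer, dev_history, decay=0.75, patience=8):
    """Decay the learning rate by 0.75 when the development score has
    not improved over the last `patience` epochs."""
    if len(dev_history) > patience and \
            max(dev_history[-patience:]) <= max(dev_history[:-patience]):
        for group in optimizer.param_groups:
            group["lr"] *= decay
```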
# B Details of Analysis

# B.1 Details on Higher-order Matches

We provide extraction details and examples for the five patterns we explore. We first define several groupings of dependency relations according to the UD documentation<sup>15</sup>:

- $\mathbf{PRED} = \{\text{csubj, ccomp, xcomp, advcl, acl, root}\}$. This set contains dependency relations whose modifier is usually a clausal predicate.
- $\mathbf{CORE} = \{\text{nsubj, obj, iobj, csubj, ccomp, xcomp}\}$. This set contains the core arguments of predicates.
- $\mathbf{MWE} = \{\text{fixed, flat, compound}\}$. This set contains the multi-word expression (MWE) dependency relations.

To extract the specified patterns, we go through each word $w$ and apply a filter that decides whether there is a frame of the kind we are looking for. If there is, we apply the extractor to obtain all the related dependency edges, which form the frame we want to extract. Table 6 describes the extraction rules (the filters and extractors) and Figure 10 provides some examples.

![](images/af261d31d1cbb4bd8f9e1df1edeb11bc97a0dfdd63fe6623dcd69cd4f87ed2ca.jpg)

![](images/35d4a9a4b1b1c21f7568a5e119ab4a5b06fb9b36bee8b97db654991f2af4199f.jpg)

![](images/9624d1e96c63e8fea3b35aff76be19a9ab67f28bee0aa962ba47a6a43d23c12b.jpg)

![](images/d02a9cb6a2ccc93a57933f0cad4fab18a7a06131b88c7dd3ddf41aeb3d34c15d.jpg)

![](images/f2cca2ca45349d42e26d4c01442e44e193f977eb5d79c88e66b63756e36fa564.jpg)

Figure 10: Examples of the higher-order frame patterns. The red solid edges are included in the frame, while the others (black dotted) are not.
# C Extra Results

# C.1 Results on Development Sets

Figure 11 shows the results on the development sets, whose patterns are similar to those of the test sets shown in the main content.

<table><tr><td>Pattern</td><td>Filter</td><td>Extractor</td></tr><tr><td>pred</td><td>lambda w: w.label in PRED</td><td>[c for c in w.children if c.label in CORE]</td></tr><tr><td>mwe</td><td>lambda w: any(c.label in MWE for c in w.children)</td><td>[c for c in w.children if c.label in MWE]</td></tr><tr><td>conj</td><td>lambda w: any(c.label=="conj" for c in w.children)</td><td>[c for c in w.children if c.label=="conj"]+[g for g in w.grandchildren if g.label=="cc"]</td></tr><tr><td>expl</td><td>lambda w: any(c.label=="expl" for c in w.children)</td><td>[c for c in w.children if c.label=="expl"]+[c for c in w.children if c.label in CORE]</td></tr><tr><td>acl</td><td>lambda w: w.label=="acl"</td><td>[w]+[c for c in w.children if c.label in CORE]</td></tr></table>

Table 6: Filter and extractor functions for the frame pattern extraction (in Python-styled pseudocode). We go through each word $w$ and apply the filter. If the filter returns True, the extractor is applied to extract all related dependency edges, forming the desired frame.
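As a runnable companion to Table 6, a minimal sketch of the extraction loop for the "pred" pattern; the `.label`/`.children` interface mirrors the table's pseudocode and is an assumption about the word objects:

```python
PRED = {"csubj", "ccomp", "xcomp", "advcl", "acl", "root"}
CORE = {"nsubj", "obj", "iobj", "csubj", "ccomp", "xcomp"}

def extract_pred_frames(words):
    """Apply the 'pred' filter/extractor from Table 6 to every word;
    `words` are objects with `.label` (relation to head) and `.children`."""
    frames = []
    for w in words:
        if w.label in PRED:                                    # filter
            edges = [c for c in w.children if c.label in CORE]  # extractor
            if edges:
                frames.append((w, edges))
    return frames

# A frame counts as a higher-order match only if every edge in it is
# predicted correctly, i.e., each dependent receives its gold head.
```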
![](images/de86d44c8949e28168ecbdc3080a32d9c42cb983dbc74ac6db6aed3e98c56a62.jpg)

Figure 11: Development results for dependency parsing (LAS), POS tagging (Accuracy%) and NER (F1 score).
anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:355e9e34e74792e9dfc8d49809ed0274ca9a4f6cb5ce8ce1ba9087d084cdeb0c
size 1197091
anempiricalexplorationoflocalorderingpretrainingforstructuredprediction/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19a04f210386fd04a2a2b7847cf2c8fd848484f5074108240d143ae378b2f360
size 397119
anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ddc2f2a3853f852be77a118bc55f96bab0de56142831adacc88bc42c8fdd4d8
size 65345
anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bc24a53ccafe764e841a8453d4b3bda3fb4ae7857d72631431078a5b6c9d988b
size 76717
anempiricalinvestigationofbeamawaretraininginsupertagging/0f0d87d3-a9dc-4432-b09a-5f988a3b0d5f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7bd0b6ae761176c0e0907f0222e77c82bfc26c7e8a7e584d48634e2c21ec357
size 974741
anempiricalinvestigationofbeamawaretraininginsupertagging/full.md
ADDED
@@ -0,0 +1,265 @@
# An Empirical Investigation of Beam-Aware Training in Supertagging
Renato Negrinho<sup>1</sup>, Matthew R. Gormley<sup>1</sup>, Geoffrey J. Gordon<sup>1,2</sup>

Carnegie Mellon University<sup>1</sup>, MSR Montreal<sup>2</sup>

{negrinho,mgormley,ggordon}@cs.cmu.edu

# Abstract

Structured prediction is often approached by training a locally normalized model with maximum likelihood and decoding approximately with beam search. This approach leads to mismatches as, during training, the model is not exposed to its mistakes and does not use beam search. Beam-aware training aims to address these problems, but unfortunately, it is not yet widely used due to a lack of understanding about how it impacts performance, when it is most useful, and whether it is stable. Recently, Negrinho et al. (2018) proposed a meta-algorithm that captures beam-aware training algorithms and suggests new ones, but unfortunately did not provide empirical results. In this paper, we begin an empirical investigation: we train the supertagging model of Vaswani et al. (2016) and a simpler model with instantiations of the meta-algorithm. We explore the influence of various design choices and make recommendations for choosing them. We observe that beam-aware training improves performance for both models, with large improvements for the simpler model which must effectively manage uncertainty during decoding. Our results suggest that a model must be learned with search to maximize its effectiveness.
# 1 Introduction

Structured prediction often relies on models that are trained with maximum likelihood and use beam search for approximate decoding. This procedure leads to two significant mismatches between the training and testing settings: the model is trained on oracle trajectories and therefore does not learn about its own mistakes, and the model is trained without beam search and therefore does not learn how to use the beam effectively for search.

Previous algorithms have addressed one or the other of these mismatches. For example, DAgger (Ross et al., 2011) and scheduled sampling (Bengio et al., 2015) use the learned model to visit non-oracle states at training time, but do not use beam search (i.e., they keep a single hypothesis). Early update (Collins and Roark, 2004), LaSO (Daumé and Marcu, 2005), and BSO (Wiseman and Rush, 2016) are trained with beam search, but do not expose the model to beams without a gold hypothesis (i.e., they either stop or reset to beams with a gold hypothesis).

Recently, Negrinho et al. (2018) proposed a meta-algorithm that instantiates beam-aware algorithms as a result of choices for the surrogate loss (i.e., which training loss to incur at each visited beam) and the data collection strategy (i.e., which beams to visit during training). A specific instantiation of their meta-algorithm addresses both mismatches by relying on an insight on how to induce training losses for beams without the gold hypothesis: for any beam, its lowest-cost neighbor should be scored sufficiently high to be kept in the successor beam. To induce these training losses, it is sufficient to be able to compute the best neighbor of any state (often called a dynamic oracle (Goldberg and Nivre, 2012)). Unfortunately, Negrinho et al. (2018) do not provide empirical results, leaving open questions such as whether instances can be trained robustly, when beam-aware training is most useful, and what the impact of the design choices is on performance.

Contributions We empirically study beam-aware algorithms instantiated through the meta-algorithm of Negrinho et al. (2018). We tackle supertagging, as it is a sequence labelling task with an easy-to-compute dynamic oracle and a moderately-sized label set (approximately 1000 labels) which may require more effective search. We examine two supertagging models (one from Vaswani et al. (2016) and a simplified version designed to be heavily reliant on search) and train them with instantiations of the meta-algorithm. We explore how the design choices influence performance, and give recommendations based on our empirical findings. For example, we find that perceptron losses perform consistently worse than margin and log losses. We observe that beam-aware training can have a large impact on performance, particularly when the model must use the beam to manage uncertainty during prediction. Code for reproducing all results in this paper is available at https://github.com/negrinho/beam_learn_supertagging.
# 2 Background on learning to search and beam-aware methods

For convenience, we reuse the notation introduced in Negrinho et al. (2018) to describe their meta-algorithm and its components (e.g., scoring function, surrogate loss, and data collection strategy). See Figure 1 and Figure 2 for an overview of the notation. When relevant, we instantiate the notation for left-to-right sequence labelling under the Hamming cost, of which supertagging is a special case.

Input and output spaces Given an input structure $x \in \mathcal{X}$, the output structure $y \in \mathcal{Y}_x$ is generated through a sequence of incremental decisions. An example $x \in \mathcal{X}$ induces a tree $G_x = (V_x, E_x)$ encoding the sequential generation of elements of $\mathcal{Y}_x$, where $V_x$ is the set of nodes and $E_x$ is the set of edges. The leaves of $G_x$ correspond to elements of $\mathcal{Y}_x$ and the internal nodes correspond to incomplete outputs. For left-to-right sequence labelling of a sequence $x \in \mathcal{X}$, each decision assigns a label to the current position of $x$, and the nodes of the tree encode labelled prefixes of $x$, with terminal nodes encoding complete labellings of $x$.

Cost functions Given a gold pair $(x,y)\in \mathcal{X}\times \mathcal{Y}$, the cost function $c_{x,y}:\mathcal{Y}_x\to \mathbb{R}$ measures how bad a prediction $\hat{y}\in \mathcal{Y}_x$ is relative to the target output structure $y\in \mathcal{Y}_x$. Using $c_{x,y}:\mathcal{Y}_x\to \mathbb{R}$, we define a cost function $c_{x,y}^{*}:V_{x}\to \mathbb{R}$ for partial outputs by assigning to each node $v\in V_x$ the cost of its best reachable complete output, i.e., $c_{x,y}^{*}(v) = \min_{v^{\prime}\in T_{v}}c_{x,y}(v^{\prime})$, where $T_{v}\subseteq \mathcal{Y}_{x}$ is the set of complete outputs reachable from $v$. For a left-to-right search space for sequence labelling, if $c_{x,y}:\mathcal{Y}_x\to \mathbb{R}$ is the Hamming cost, the optimal completion cost $c_{x,y}^{*}:V_{x}\to \mathbb{R}$ of a node is the number of mistakes in its prefix, as the optimal completion matches the remaining suffix of the target output. A sketch of these quantities for sequence labelling is given below.
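A minimal Python sketch of the Hamming completion cost, anticipating the dynamic oracle discussed next (illustrative, not the code released with this paper):

```python
def optimal_completion_cost(prefix, gold):
    """For left-to-right sequence labelling under Hamming cost, the
    optimal completion cost of a labelled prefix is the number of
    mistakes already made: the best completion copies the gold suffix."""
    return sum(1 for y_hat, y in zip(prefix, gold) if y_hat != y)

def dynamic_oracle_action(prefix, gold):
    """The lowest-completion-cost neighbor appends the gold label for
    the next position, even from a non-oracle (mistaken) prefix."""
    return gold[len(prefix)]
```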
Dynamic oracles An oracle state is one from which the target output structure can still be reached. Often, optimal actions can only be computed for oracle states. Dynamic oracles compute optimal actions even for non-oracle states. Being able to evaluate $c_{x,y}^* : V_x \to \mathbb{R}$ at arbitrary states allows us to induce a dynamic oracle: at a state $v \in V_x$, the optimal action is to transition to the neighbor $v' \in N_v$ with the lowest completion cost. For sequence labelling, this picks the transition that assigns the correct label. For other tasks and metrics, more complex dynamic oracles may exist, e.g., in dependency parsing (Goldberg and Nivre, 2012, 2013). For notational brevity, from now on we omit the dependency of the search spaces and cost functions on $x \in \mathcal{X}$, $y \in \mathcal{Y}$, or both.

Beam search space Given a search space $G = (V, E)$, the beam search space $G_{k} = (V_{k}, E_{k})$ is induced by choosing a beam size $k \in \mathbb{N}$ and a strategy for generating the successor beam out of the current beam and its neighbors. In this paper, we expand all the elements in the beam and score all the neighbors simultaneously. The highest-scoring $k$ neighbors form the successor beam. For $k = 1$, we recover the greedy search space $G$.

Beam cost functions The natural cost function $c^{*}: V_{k} \to \mathbb{R}$ for $G_{k}$ is created from the element-wise cost function on $G$, and assigns to each beam the cost of its best element, i.e., for $b \in V_{k}$, $c^{*}(b) = \min_{v \in b} c^{*}(v)$. For a transition $(b, b') \in E_{k}$, we define the transition cost $c(b, b') = c^{*}(b') - c^{*}(b)$, where $b' \in N_{b}$, i.e., $b'$ can be formed from the neighbors of the elements in $b$. A cost increase happens when $c(b, b') > 0$, i.e., the best complete output reachable from $b$ is no longer reachable from $b'$. A sketch of one beam-search step and its cost is given below.
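A minimal sketch of one beam-search step and the associated transition cost (`neighbors`, `score`, and `completion_cost` are illustrative callables, not a specific API):

```python
import heapq

def beam_step(beam, neighbors, score, k):
    """Expand all elements of the beam and keep the k highest-scoring
    neighbors (`score(v)` is the cumulative score s(v, theta))."""
    A_b = [u for v in beam for u in neighbors(v)]
    return heapq.nlargest(k, A_b, key=score)

def transition_cost(beam, next_beam, completion_cost):
    """c(b, b') = c*(b') - c*(b); positive exactly when the best
    reachable output of b is no longer reachable from b'."""
    return min(map(completion_cost, next_beam)) - \
           min(map(completion_cost, beam))
```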
Policies Policies operate in the beam search space $G_{k}$ and are induced by a learned scoring function $s(\cdot ,\theta):V\to \mathbb{R}$ which scores elements of the original space $G$. A policy is a mapping $\pi :V_{k}\rightarrow \Delta (V_{k})$ from states (i.e., beams) to distributions over next states. We only use deterministic policies, where the successor beam is computed by sorting the neighbors in decreasing order of score and taking the top $k$.

Scoring function In the non-beam-aware case, the scoring function arises from the way probabilities of complete sequences are computed with the locally normalized model, namely $p(y|x,\theta) = \prod_{i=1}^{h} p(y_i | y_{1:i-1}, x,\theta)$, where we assume that all outputs for $x \in \mathcal{X}$ have $h$ steps. For sequence labelling, $h$ is the length of the sentence. The resulting scoring function $s(\cdot, \theta): V \to \mathbb{R}$ is $s(v, \theta) = \sum_{i=1}^{j} \log p(y_i | y_{1:i-1}, x, \theta)$, where $v = y_{1:j}$ and $j \in [h]$. Similarly, the scoring function that we learn in the beam-aware case is $s(v, \theta) = \sum_{i=1}^{j} \tilde{s}(y_{1:i}, \theta)$, where $x$ has been omitted, $v = y_{1:j}$, and $\tilde{s}(\cdot, \theta): V \to \mathbb{R}$ is the learned incremental scoring function. In Section 4.6, we observe that this cumulative version performs uniformly better than the non-cumulative one.

![](images/3f3bbaecc9a13afd746f9aa201bbb0153ad93f3eb8120d67bca9e6ae1ccb0a4d.jpg)

Figure 1: Beam $b$ has neighborhood $A_{b}$, where $k = |b| = |b^{\prime}| = 3$ and $n = |A_{b}| = 5$. Edges from elements in $b$ to elements in $A_{b}$ encode neighborhood relationships, e.g., $v_{3}$ has a single neighbor $v_{5}^{\prime}$. Permutation $\hat{\sigma} : [n] \to [n]$ sorts hypotheses in decreasing order of score, and permutation $\sigma^{*} : [n] \to [n]$ sorts them in increasing order of cost, i.e., $v_{\sigma^{*}(1)}^{\prime}$ is the lowest-cost neighbor and $v_{\hat{\sigma}(1)}^{\prime}$ is the highest-scoring neighbor. The successor beam $b^{\prime}$ keeps the neighbor states in $A_{b}$ with the highest scores according to the vector $s$, or equivalently the highest ranks according to $\hat{\sigma}$.

![](images/993646fc4288b9281e43c8ac7e4eefa95d52e2ebed04c4c047978bbb5d57c7e2.jpg)

![](images/c5e45e5e81cbaf87a5585257a4b63ed3bb1ea4542a6b8c9dbaa80363e4a04a31.jpg)

![](images/3a5b7f6ba93db9f817db7dc8d135739b2c53d3dcc0b3eae25a526eaf38bbae23.jpg)

Figure 2: Sampling a trajectory through the beam search space at training time. A loss $\ell(b_i, \theta)$ is incurred at each visited beam $b_i$, $i \in [h-1]$, resulting in a total accumulated loss $\ell(b_{1:h}, \theta)$ for the beam trajectory $b_{1:h}$. The terminal beam $b_h$ corresponds to a complete output $y(b_h) \in \mathcal{Y}$. Transitions between beams are sampled according to a data collection policy $\pi': V_k \to \Delta(V_k)$. We consider $\pi'$ induced by a scoring function $s(\cdot, \theta): V \to \mathbb{R}$ or a cost function $c^*: V \to \mathbb{R}$. Parameters $\theta$ parametrize the scoring function of the model. Losses $\ell(b_i, \theta)$ are low if the scores of the neighbors of $b_i$ comfortably keep the lowest-cost elements in the successor beam (see Section 3.2), and high otherwise. See Figure 1 for the notation used to describe the surrogate loss $\ell(b_i, \theta)$ at each beam $b_i$.
# 3 Meta-algorithm for learning beam search policies

We refer the reader to Negrinho et al. (2018) for a discussion of how specific choices for the meta-algorithm recover algorithms from the literature.

# 3.1 Data collection strategies

The data collection strategy determines which beams are visited at training time (see Figure 2).

Strategies that use the learned model differ in how they compute the successor beam $b' \in N_b$ when $s(\cdot, \theta)$ leads to a beam without the gold hypothesis, i.e., $c(b, b') > 0$, where $b' = \{v_{\hat{\sigma}(1)}, \ldots, v_{\hat{\sigma}(k)}\} \subset A_b$ and $A_b = \{v_1, \ldots, v_n\} = \cup_{v \in b} N_v$. We explore several data collection strategies:
stop If the successor beam does not contain the gold hypothesis, stop collecting the trajectory. Structured perceptron training with early update (Collins and Roark, 2004) uses this strategy.

reset If the successor beam does not contain the gold hypothesis, reset to a beam containing only the gold hypothesis<sup>1</sup>. LaSO (Daumé and Marcu, 2005) uses this strategy. For $k = 1$, we recover teacher forcing, as only the oracle hypothesis is kept in the beam.

continue Ignore cost increases, always using the successor beam. DAgger (Ross et al., 2011) takes this approach, but does not use beam search. Negrinho et al. (2018) suggest this strategy for beam-aware training but do not provide empirical results.

reset (multiple) Similar to reset, but keep $k - 1$ hypotheses from the transition, i.e., $b' = \{v_{\sigma^{*}(1)}, v_{\hat{\sigma}(1)}, \ldots, v_{\hat{\sigma}(k-1)}\}$. We might expect this data collection strategy to be closer to continue, as a large fraction of the elements of the successor beam are induced by the learned model.

oracle Always transition to the beam induced by $\sigma^{*}:[n]\to [n]$, i.e., the one obtained by sorting the costs in increasing order. For $k = 1$, this also recovers teacher forcing. In Section 4.4, we observe that oracle dramatically degrades performance, with exposure bias increasing with $k$.
# 3.2 Surrogate losses

Surrogate losses encode the requirement that the scores produced by the model must rank the best neighbor sufficiently high for it to be kept comfortably in the successor beam. For $k = 1$, many of these losses reduce to losses used in non-beam-aware training. Given scores $s \in \mathbb{R}^n$ and costs $c \in \mathbb{R}^n$ over the neighbors in $A_{b} = \{v_{1},\ldots ,v_{n}\}$, we define permutations $\hat{\sigma} : [n] \to [n]$ and $\sigma^{*} : [n] \to [n]$ that sort the elements of $A_{b}$ in decreasing order of score and increasing order of cost, respectively, i.e., $s_{\hat{\sigma}(1)} \geq \dots \geq s_{\hat{\sigma}(n)}$ and $c_{\sigma^{*}(1)} \leq \dots \leq c_{\sigma^{*}(n)}$. See Figure 1 for a description of the notation used to describe the surrogate losses. Our experiments compare the following surrogate losses (a sketch computing several of them follows the list):

perceptron (first) Penalize failing to score the best neighbor at the top of the beam (regardless of whether it falls out of the beam or not).

$$
\ell(s, c) = \max\left(0, s_{\hat{\sigma}(1)} - s_{\sigma^{*}(1)}\right).
$$

perceptron (last) If this loss is positive at a beam, the successor beam induced by the scores does not contain the gold hypothesis.

$$
\ell(s, c) = \max\left(0, s_{\hat{\sigma}(k)} - s_{\sigma^{*}(1)}\right).
$$

margin (last) Penalize margin violations of the best neighbor of the hypotheses in the current beam. Compares the correct neighbor $v_{\sigma^{*}(1)}$ with the neighbor $v_{\hat{\sigma}(k)}$ that is last in the beam.

$$
\ell(s, c) = \max\left(0, s_{\hat{\sigma}(k)} - s_{\sigma^{*}(1)} + 1\right)
$$

cost-sensitive margin (last) Same as margin (last), but weighted by the cost difference of the pair. Wiseman and Rush (2016) use this loss.

$$
\ell(s, c) = c_{\hat{\sigma}(k), \sigma^{*}(1)} \max\left(0, s_{\hat{\sigma}(k)} - s_{\sigma^{*}(1)} + 1\right),
$$

where $c_{\hat{\sigma}(k), \sigma^*(1)} = c_{\hat{\sigma}(k)} - c_{\sigma^*(1)}$.

log loss (neighbors) Normalizes over all elements in $A_{b}$. For beam size $k = 1$, it reduces to the usual log loss.

$$
\ell(s, c) = -s_{\sigma^{*}(1)} + \log\left(\sum_{i=1}^{n} \exp(s_{i})\right)
$$

log loss (beam) Normalizes only over the top $k$ neighbors of a beam according to the scores $s$.

$$
\ell(s, c) = -s_{\sigma^{*}(1)} + \log\left(\sum_{i \in I} \exp(s_{i})\right),
$$

where $I = \{\sigma^{*}(1),\hat{\sigma}(1),\ldots ,\hat{\sigma}(k)\}$, i.e., the normalization is only over the gold hypothesis $v_{\sigma^{*}(1)}$ and the elements included in the beam. Andor et al. (2016) use this loss.
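As promised above, a minimal numpy sketch computing several of these losses from the score and cost vectors; the cost-sensitive and beam-normalized variants are omitted for brevity (illustrative, not the implementation released with this paper):

```python
import numpy as np

def surrogate_losses(s, c, k):
    """Compute losses from 1-D arrays of neighbor scores s and costs c;
    sigma_hat sorts by decreasing score, sigma_star by increasing cost."""
    sigma_hat = np.argsort(-s)
    best = s[np.argmin(c)]                # s_{sigma*(1)}, the best neighbor
    last_in_beam = s[sigma_hat[k - 1]]    # s_{sigma_hat(k)}, last kept
    return {
        "perceptron_first": max(0.0, s[sigma_hat[0]] - best),
        "perceptron_last": max(0.0, last_in_beam - best),
        "margin_last": max(0.0, last_in_beam - best + 1.0),
        "log_loss_neighbors": -best + np.logaddexp.reduce(s),
    }
```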
# 3.3 Training

The meta-algorithm of Negrinho et al. (2018) is instantiated by choosing a surrogate loss, a data collection strategy, and a beam size. Training proceeds by sampling an example $(x,y)\in \mathcal{X}\times \mathcal{Y}$ from the training set. A trajectory through the beam search space $G_{k}$ is collected using the chosen data collection strategy, and a surrogate loss is incurred at each non-terminal beam in the trajectory (see Figure 2). Parameter updates are computed from the gradient of the sum of the losses over the visited beams.
# 4 Experiments

We explore different configurations of the design choices of the meta-algorithm to understand their impact on training behavior and performance.

# 4.1 Task details

We train our models for supertagging, a sequence labelling task where accuracy is the performance metric of interest. Supertagging is a good task for exploring beam-aware training: contrary to other sequence labelling datasets, such as named-entity recognition (Tjong Kim Sang and De Meulder, 2003), chunking (Sang and Buchholz, 2000), and part-of-speech tagging (Marcus et al., 1993), it has a moderate number of labels and is therefore likely to require effective search to achieve high performance. We used the standard splits for CCGBank (Hockenmaier and Steedman, 2007): the training and development sets have, respectively, 39604 and 1913 examples. Models were trained on the training set, and validation accuracy on the development set was computed at the end of each epoch to keep the best model. As we are performing an empirical study, similarly to Vaswani et al. (2016), we report validation accuracies. Each configuration is run three times with different random seeds, and the mean and standard deviation are reported. We replace the words that appear at most once in the training set by UNK. By contrast, no tokenization was done for the training supertags.

![](images/0da9e0b6d71a95d6827a0e9cd3030f34da9a6794921cf4b2a97038e91a01dfcd.jpg)

Figure 3: High-level structure of the two models used in the experiments. The model on the left is from Vaswani et al. (2016). The model on the right is a simplification of the one on the left; namely, it does not have an encoding of the complete sentence at the start of prediction.

![](images/c5256983a4bc1e330184e169ff1b741bbccf4312e2fd3dbc4e898ad464d62f40.jpg)
# 4.2 Model details

We implemented the model of Vaswani et al. (2016) and a simpler model designed by removing some of its components. The two main differences between our implementation and theirs are that we do not use pretrained embeddings (we train the embeddings from scratch) and that we use the gold POS tags (they use only the pretrained embeddings).

Main model For the model of Vaswani et al. (2016) (see Figure 3, left), we use 64, 16, and 64 for the dimensions of the word, part-of-speech, and supertag embeddings, respectively. All LSTMs (forward, backward, and LM) have hidden dimension 256. We refer the reader to Vaswani et al. (2016) for the exact description of the model. Briefly, the embeddings for the words and part-of-speech tags are concatenated and fed to a bi-directional LSTM, and the outputs of both directions are then fed into a combiner (dimension-preserving linear transformations applied independently to both inputs, added together, and passed through a ReLU non-linearity; see the sketch below). The output of this combiner and the output of the LM LSTM (which tracks the supertag prefix up to the prediction point) are then passed to another combiner that generates scores over supertags.
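A minimal PyTorch sketch of such a combiner, assuming both inputs share dimension `d` (illustrative, not the released implementation):

```python
import torch
import torch.nn as nn

class Combiner(nn.Module):
    """Dimension-preserving linear maps applied independently to the two
    inputs, added together, then passed through a ReLU (as described)."""
    def __init__(self, d):
        super().__init__()
        self.lin_a = nn.Linear(d, d)
        self.lin_b = nn.Linear(d, d)

    def forward(self, a, b):
        return torch.relu(self.lin_a(a) + self.lin_b(b))
```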
Simplified model We also consider a simplified model that drops the bi-LSTM encoder and the corresponding combiner (see Figure 3, right). The concatenated embeddings are fed directly into the second combiner together with the LM LSTM output. Values for the hyperparameters are the same where possible. This model must leverage the beam effectively, as it does not encode the sentence with a bi-LSTM: only the embeddings for the current position are available, giving a larger role to the LM LSTM over supertags. While supertagging can be tackled with a stronger model, this restriction is relevant for real-time tasks, where, e.g., the complete input might not be known upfront.

Training details Models are trained for 16 epochs with SGD with batch size 1 and a cosine learning rate schedule (Loshchilov and Hutter, 2016), starting at $10^{-1}$ and ending at $10^{-5}$. No weight decay or dropout was used. Training examples are shuffled after each epoch. Results are reported for the model with the best validation performance across all epochs. We use 16 epochs for all models for simplicity and fairness. This number was sufficient, e.g., we replicated Table 2 by training with 32 epochs and observed only minor performance differences (see Table 6).
|
| 155 |
+
|
| 156 |
+
<table><tr><td></td><td>1</td><td>2</td><td>4</td><td>8</td></tr><tr><td>oracle reset</td><td>93.780.12</td><td>93.810.11</td><td>93.820.10</td><td>93.820.10</td></tr><tr><td>continue</td><td>94.040.07</td><td>94.050.07</td><td>94.050.07</td><td>94.060.07</td></tr><tr><td>stop</td><td>93.860.09</td><td>93.900.07</td><td>93.900.07</td><td>93.910.07</td></tr><tr><td>oracle reset</td><td>73.200.31</td><td>76.550.24</td><td>77.420.27</td><td>77.540.22</td></tr><tr><td>continue</td><td>81.990.04</td><td>82.300.03</td><td>82.370.08</td><td>82.410.08</td></tr><tr><td>stop</td><td>74.350.23</td><td>77.060.14</td><td>77.730.13</td><td>77.820.09</td></tr></table>
|
| 157 |
+
|
| 158 |
+
Table 1: Development accuracies for models trained with different data collection strategies in a non-beam-aware way (i.e., $k = 1$ ) and decoded with beam search with varying beam size. continue performs best, showing the importance of exposing the model to its mistakes. Differences are larger for the simplified model.
|
| 159 |
+
|
| 160 |
+
# 4.3 Non-beam-aware training
|
| 161 |
+
|
| 162 |
+
We first train the models with $k = 1$ and then use beam search to decode. Crucially, the model does not train with a beam and therefore does not learn to use it effectively. We vary the data collection strategy. The results are presented in Table 1 and should be used as a reference when reading the other tables to evaluate the impact of beam-aware training. Tables are formatted such that the first and second horizontal halves contain the results for the main model and the simplified model, respectively. Each cell reports the mean ± standard deviation over three runs of that configuration. We use this format in all tables presented.
|
| 163 |
+
|
| 164 |
+
The continue data collection strategy (i.e., DAgger for $k = 1$ ) results in better models than training on the oracle trajectories. Beam search results in small gains for these settings. In this experiment, training with oracle is the same as training with reset as the beam always contains only the oracle hypothesis. The performance differences are small for the main model but much larger for the simplified model, underscoring the importance of beam search when there is greater uncertainty about predictions. For the stronger model, the encoding of the left and right contexts with the bi-LSTM provides enough information at each position to predict greedily, i.e., without search. This difference appears consistently in all experiments, with larger gains for the weaker model.
|
| 165 |
+
|
| 166 |
+
The gains achieved by the main model by decoding with beam search post-training are very small (from 0.02 to 0.05). This suggests that training the model in a non-beam-aware fashion and then using beam search does not guarantee improvements. The model must be trained with search to improve on these results. For the simpler model, larger
|
| 167 |
+
|
| 168 |
+
<table><tr><td>beam size $k$</td><td>1</td><td>2</td><td>4</td><td>8</td></tr><tr><td>oracle</td><td>94.10±0.08</td><td>92.98±0.07</td><td>91.66±0.22</td><td>85.95±0.79</td></tr><tr><td>reset</td><td>94.20±0.11</td><td>94.34±0.06</td><td>94.33±0.01</td><td>94.42±0.04</td></tr><tr><td>reset (mult.)</td><td>94.15±0.07</td><td>93.98±0.08</td><td>94.06±0.06</td><td>94.16±0.05</td></tr><tr><td>continue</td><td>94.15±0.02</td><td>94.35±0.05</td><td>94.37±0.04</td><td>94.33±0.04</td></tr><tr><td>stop</td><td>93.95±0.09</td><td>94.11±0.05</td><td>94.24±0.07</td><td>94.25±0.06</td></tr><tr><td>oracle</td><td>75.09±0.17</td><td>80.67±0.40</td><td>78.69±1.27</td><td>47.38±1.79</td></tr><tr><td>reset</td><td>75.06±0.16</td><td>87.21±0.14</td><td>91.24±0.02</td><td>92.46±0.09</td></tr><tr><td>reset (mult.)</td><td>75.04±0.18</td><td>86.19±0.12</td><td>90.76±0.11</td><td>92.16±0.03</td></tr><tr><td>continue</td><td>82.01±0.06</td><td>89.17±0.08</td><td>91.80±0.12</td><td>92.69±0.01</td></tr><tr><td>stop</td><td>75.08±0.54</td><td>87.16±0.08</td><td>90.98±0.13</td><td>92.18±0.06</td></tr></table>
|
| 169 |
+
|
| 170 |
+
Table 2: Development accuracies for beam-aware training with varying data collection strategies.
|
| 171 |
+
|
| 172 |
+
improvements are observed (from 0.42 to 4.34). Despite the gains with beam search for reset and stop, they are not sufficient to beat the greedy model trained on its own trajectories: continue with $k = 1$ yields 81.99, versus 77.54 for oracle/reset and 77.82 for stop, both with $k = 8$. These results show the importance of the data collection strategy, even when the model is not trained in a beam-aware fashion. These gains are eclipsed by beam-aware training: compare Table 1 with Table 2. See Figure 4 for the evolution of the validation and training accuracies over epochs.
|
| 173 |
+
|
| 174 |
+
# 4.4 Comparing data collection strategies
|
| 175 |
+
|
| 176 |
+
We train both models using the log loss (neighbors), described in Section 3.2, and vary the data collection strategy, described in Section 3.1, and the beam size. Results are presented in Table 2. In contrast to Section 4.3, these models are trained to use beam search rather than having it as an artifact of approximate decoding. Beam-aware training under oracle worsens performance with increasing beam size (due to increasing exposure bias): during training, the model learns to pick the best neighbors for beams containing only close-to-optimal hypotheses, which are likely very different from the beams encountered when decoding. The results for the simplified model are similar: with increasing beam size, performance first improves but then degrades. For the main model, we observe modest but consistent improvements with larger beam sizes across all data collection strategies except oracle. By comparing the results with those in the first row of Table 1, we see that we improve on the model trained with maximum likelihood and decoded with beam search.
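As a reference for how the strategies differ operationally, here is a sketch of data collection in the style of the meta-algorithm of Negrinho et al. (2018). The helpers (`initial_beam`, `beam_step`, `contains_gold`, `oracle_step`, `reset_to_gold`) are hypothetical placeholders; the sketch only illustrates where the four strategies diverge.

```python
def collect_beams(model, example, k, strategy):
    """Collect the beams visited on one example under a given strategy."""
    visited = []
    beam = initial_beam(example)        # starts from the empty prefix
    for _ in range(len(example.tags)):
        visited.append(beam)            # a loss is induced on every visited beam
        if strategy == "oracle":
            beam = oracle_step(beam, example)       # follow only the gold prefix
            continue
        beam = beam_step(model, beam, k)            # expand with the learned scorer
        if not contains_gold(beam, example):
            if strategy == "stop":
                break                               # truncate the trajectory
            if strategy == "reset":
                beam = reset_to_gold(beam, example) # reinsert the gold hypothesis
            # "continue": keep following the model's own (possibly wrong) beams
    return visited
```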
|
| 177 |
+
|
| 178 |
+
The data collection strategy has a larger impact on performance for the simplified model. continue
|
| 179 |
+
|
| 180 |
+

|
| 181 |
+
Figure 4: Validation and training accuracies for non-beam-aware training (i.e., $k = 1$ ) with different data collection strategies for the main (left half) and simplified (right half) models. continue achieves higher accuracies.
|
| 182 |
+
|
| 183 |
+

|
| 184 |
+
|
| 185 |
+

|
| 186 |
+
|
| 187 |
+

|
| 188 |
+
|
| 189 |
+

|
| 190 |
+
Figure 5: Validation and training accuracies for beam-aware training with different data collection strategies and beam sizes for the main (left half) and simplified (right half) models. Larger beam sizes achieve higher performance while overfitting less, and are crucial for the simplified model to achieve higher training and validation accuracies. For smaller beams, continue performs better than reset. All models can be trained stably from scratch. Each curve aggregates three runs, showing the mean and standard deviation at each epoch.
|
| 191 |
+
|
| 192 |
+

|
| 193 |
+
|
| 194 |
+

|
| 195 |
+
|
| 196 |
+

|
| 197 |
+
|
| 198 |
+
<table><tr><td>beam size $k$</td><td>1</td><td>2</td><td>4</td><td>8</td></tr><tr><td>perceptron (first)</td><td>92.81±0.06</td><td>93.22±0.04</td><td>93.44±0.02</td><td>93.52±0.06</td></tr><tr><td>perceptron (last)</td><td>92.84±0.11</td><td>93.57±0.06</td><td>93.86±0.09</td><td>93.77±0.04</td></tr><tr><td>margin (last)</td><td>94.10±0.07</td><td>94.29±0.07</td><td>94.27±0.03</td><td>94.43±0.04</td></tr><tr><td>cost-sensitive margin (last)</td><td>93.98±0.03</td><td>94.32±0.10</td><td>94.37±0.03</td><td>94.33±0.13</td></tr><tr><td>log loss (beam)</td><td>92.29±0.07</td><td>92.09±0.11</td><td>94.24±0.08</td><td>94.32±0.02</td></tr><tr><td>log loss (neighbors)</td><td>94.22±0.00</td><td>94.29±0.03</td><td>94.27±0.06</td><td>94.38±0.01</td></tr><tr><td>perceptron (first)</td><td>77.62±0.14</td><td>86.32±0.05</td><td>89.83±0.05</td><td>91.00±0.07</td></tr><tr><td>perceptron (last)</td><td>77.67±0.07</td><td>87.62±0.03</td><td>90.82±0.16</td><td>91.98±0.11</td></tr><tr><td>margin (last)</td><td>81.75±0.04</td><td>88.80±0.02</td><td>91.91±0.05</td><td>92.81±0.05</td></tr><tr><td>cost-sensitive margin (last)</td><td>81.76±0.05</td><td>88.92±0.06</td><td>91.81±0.03</td><td>92.81±0.03</td></tr><tr><td>log loss (beam)</td><td>77.50±0.07</td><td>88.25±0.08</td><td>91.46±0.06</td><td>92.56±0.11</td></tr><tr><td>log loss (neighbors)</td><td>81.94±0.02</td><td>89.01±0.10</td><td>91.75±0.03</td><td>92.60±0.03</td></tr></table>
|
| 199 |
+
|
| 200 |
+
Table 3: Development accuracies for the loss functions in Section 3.2.
|
| 201 |
+
|
| 202 |
+
achieves the best performance. Compare these results with those for the simplified model in Table 1: for larger beams, the improvements achieved by beam-aware training are much larger than those achieved by non-beam-aware training. For example, continue with $k = 8$ reaches 92.69 when trained in a beam-aware manner ($k = 8$ for both training and decoding), versus 82.41 when beam search is used only during decoding ($k = 1$ during training but $k = 8$ during decoding). This shows the importance of training with beam search and exposing the model to its mistakes. Without beam-aware training, the model is unable to learn to use the beam effectively. See Figure 5 for the evolution of the training and validation accuracies over epochs for beam-aware training.
|
| 205 |
+
|
| 206 |
+
# 4.5 Comparing surrogate losses
|
| 207 |
+
|
| 208 |
+
We train both models with continue and vary the surrogate loss and beam size. Results are presented in Table 3. Perceptron losses (perceptron (first) and perceptron (last)) performed worse than their margin-based counterparts (margin (last) and cost-sensitive margin (last)). log loss (beam) performs poorly for small beam sizes (e.g., $k = 1$ and $k = 2$). This is expected due to small contrastive sets: at most $k + 1$ elements are used in log loss (beam). For larger beams, the results are comparable with log loss (neighbors).
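To illustrate the difference in contrastive set size discussed above, the sketch below implements the two log losses. `scores` and `gold` are assumed inputs of ours (model scores for all successors of the current beam, and the index of the lowest-cost one); this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def log_loss_neighbors(scores, gold):
    # Softmax over *all* neighbors (successors) of the beam: a large contrastive set.
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([gold]))

def log_loss_beam(scores, gold, k):
    # Softmax only over the k hypotheses kept in the next beam plus the gold one:
    # at most k + 1 elements, hence the weak results for small k in Table 3.
    support = sorted(set(torch.topk(scores, k).indices.tolist()) | {gold})
    target = support.index(gold)
    return F.cross_entropy(scores[support].unsqueeze(0), torch.tensor([target]))
```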
|
| 209 |
+
|
| 210 |
+
# 4.6 Additional design choices
|
| 211 |
+
|
| 212 |
+
Score accumulation The scoring function was introduced as a sum of prefix terms. A natural alternative is to produce the score for a neighbor without adding it to a running sum, i.e., $s(y_{1:j}, \theta) = \tilde{s}(y_{1:j}, \theta)$ rather than $s(y_{1:j}, \theta) = \sum_{i=1}^{j} \tilde{s}(y_{1:i}, \theta)$. Perhaps surprisingly, score accumulation performs uniformly better across all configurations. Without it, beam-aware training degraded performance for the main model with increasing beam size; for the simplified model, it improved on the results in Table 1, but the gains were smaller than those with score accumulation. We observed that the LM LSTM failed to keep track of differences earlier in the supertag sequence, leading to similar scores for the neighbors of different hypotheses. Accumulating the scores is a
|
| 213 |
+
|
| 214 |
+
simple memory mechanism that does not require the LM LSTM to learn to propagate long-range information. This performance gap may not exist for models that access information more directly (e.g., transformers (Vaswani et al., 2017) and other attention-based models (Bahdanau et al., 2014)). See the appendix for Table 4 which compares configurations with and without score accumulation. Performance differences range from 1 to 5 absolute percentage points.
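A sketch of the two scoring variants, using hypothetical beam and score containers of ours: the only difference is whether the parent hypothesis's running score is added.

```python
def expand(beam, step_scores, accumulate=True):
    """beam: list of (prefix, running_score); step_scores[h][y]: model score for
    extending hypothesis h with supertag y at the current position."""
    neighbors = []
    for h, (prefix, running_score) in enumerate(beam):
        for y, s in enumerate(step_scores[h]):
            total = running_score + s if accumulate else s  # the only difference
            neighbors.append((prefix + [y], total))
    return neighbors
```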
|
| 215 |
+
|
| 216 |
+
Update on all beams The meta-algorithm of Negrinho et al. (2018) suggests inducing losses on every visited beam as there is always a correct action captured by appropriately scoring the neighbors. This leads to updating the parameters on every beam. By contrast, other beam-aware work updates only on beams where the transition leads to increased cost (e.g., Daumé and Marcu (2005) and Andor et al. (2016)). We observe that always updating leads to improved performance, similar to the results in Table 3 for perceptron losses. We therefore recommend inducing losses on every visited beam. See the appendix for Table 5, which compares configurations trained with and without updating on every beam.
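Combining the earlier sketches, the recommended setup induces a loss on every visited beam rather than only on violations; `model.score_neighbors` is a hypothetical helper, and this composition is our illustration rather than the paper's exact training loop.

```python
def beam_aware_loss(model, example, k):
    # Sum a loss over *every* visited beam, not only those where the gold
    # hypothesis falls out; reuses the hypothetical helpers sketched earlier.
    loss = 0.0
    for beam in collect_beams(model, example, k, strategy="continue"):
        scores, gold = model.score_neighbors(beam, example)  # placeholder
        loss = loss + log_loss_neighbors(scores, gold)
    return loss
```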
|
| 217 |
+
|
| 218 |
+
# 5 Related work
|
| 219 |
+
|
| 220 |
+
Related work uses either imitation learning (often called learning to search when applied to structured prediction) or beam-aware training. Learning to search (Daumé et al., 2009; Chang et al., 2015; Goldberg and Nivre, 2012; Bengio et al., 2015; Negrinho et al., 2018) is a popular approach for structured prediction. This literature is closely related to imitation learning (Ross and Bagnell, 2010; Ross et al., 2011; Ross and Bagnell, 2014). Ross et al. (2011) address exposure bias by collecting data with the learned policy at training time. Collins and Roark (2004) propose a structured perceptron variant that trains with beam search, updating the model parameters when the correct hypothesis falls out of the beam. Huang et al. (2012) introduce a theoretical framework to analyze the convergence of early update. Zhang and Clark (2008) develop a beam-aware algorithm for dependency parsing that uses early update and dynamic oracles. Goldberg and Nivre (2012, 2013) introduce dynamic oracles for dependency parsing. Ballesteros et al. (2016) observe that exposing the model to mistakes during training improves a dependency parser. Bengio et al. (2015) make a similar observation and present results on image captioning, constituency parsing, and speech recognition. Beam-aware training has also been used for speech recognition (Collobert et al., 2019; Baskar et al., 2019). Andor et al. (2016) propose an early-update-style algorithm for learning models with a beam, but use a log loss rather than a perceptron loss as in Collins and Roark (2004); parameters are updated when the gold hypothesis falls out of the beam or when the model terminates with the gold hypothesis in the beam. Wiseman and Rush (2016) use an algorithm similar to that of Andor et al. (2016), but with a margin-based loss, and reset to a beam containing the gold hypothesis when it falls out of the beam. Edunov et al. (2017) use beam search to find a contrastive set to define sequence-level losses. Goyal et al. (2018, 2019) propose a beam-aware training algorithm that relies on a continuous approximation of beam search. Negrinho et al. (2018) introduce a meta-algorithm that instantiates beam-aware algorithms based on choices for beam size, surrogate loss function, and data collection strategy, and propose a DAgger-like algorithm for beam search.
|
| 223 |
+
|
| 224 |
+
# 6 Conclusions
|
| 225 |
+
|
| 226 |
+
Maximum likelihood training of locally normalized models with beam search decoding is the default approach for structured prediction. Unfortunately, it suffers from exposure bias and does not learn to use the beam effectively. Beam-aware training promises to address some of these issues, but is not yet widely used because it is poorly understood. In this work, we explored instantiations of the meta-algorithm of Negrinho et al. (2018) to understand how design choices affect performance. We show that beam-aware training is most useful when substantial uncertainty must be managed during prediction. We make recommendations for instantiating beam-aware algorithms based on the meta-algorithm, such as inducing losses at every beam, using log losses (rather than perceptron-style ones), and preferring the continue data collection strategy (or reset if necessary). We hope that this work provides evidence that beam-aware training can greatly improve performance and can be trained stably, leading to its wider adoption.
|
| 227 |
+
|
| 228 |
+
# Acknowledgements
|
| 229 |
+
|
| 230 |
+
We gratefully acknowledge support from 3M — M*Modal. This work used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).
|
| 233 |
+
|
| 234 |
+
# References
|
| 235 |
+
|
| 236 |
+
Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. ACL.
|
| 237 |
+
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473.
|
| 238 |
+
Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A Smith. 2016. Training with exploration improves a greedy stack-LSTM parser. arXiv:1603.03793.
|
| 239 |
+
Murali Karthick Baskar, Lukáš Burget, Shinji Watanabe, Martin Karafiát, Takaaki Hori, and Jan Honza Cernocký. 2019. Promising accurate prefix boosting for sequence-to-sequence ASR. In ICASSP.
|
| 240 |
+
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. NeurIPS.
|
| 241 |
+
Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé, and John Langford. 2015. Learning to search better than your teacher. ICML.
|
| 242 |
+
Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. ACL.
|
| 243 |
+
Ronan Collobert, Awni Hannun, and Gabriel Synnaeve. 2019. A fully differentiable beam search decoder. arXiv:1902.06022.
|
| 244 |
+
Hal Daumé, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning.
|
| 245 |
+
Hal Daumé and Daniel Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. ICML.
|
| 246 |
+
Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2017. Classical structured prediction losses for sequence to sequence learning. arXiv:1711.04956.
|
| 247 |
+
Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. COLING.
|
| 248 |
+
Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. TACL.
|
| 249 |
+
Kartik Goyal, Chris Dyer, and Taylor Berg-Kirkpatrick. 2019. An empirical investigation of global and local normalization for recurrent neural sequence models using a continuous relaxation to beam search. In NAACL.
|
| 250 |
+
|
| 251 |
+
Kartik Goyal, Graham Neubig, Chris Dyer, and Taylor Berg-Kirkpatrick. 2018. A continuous relaxation of beam search for end-to-end training of neural sequence models. AAAI.
|
| 252 |
+
Julia Hockenmaier and Mark Steedman. 2007. CCGbank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3).
|
| 253 |
+
Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. NAACL.
|
| 254 |
+
Ilya Loshchilov and Frank Hutter. 2016. SGDR: Stochastic gradient descent with warm restarts. arXiv:1608.03983.
|
| 255 |
+
Mitchell Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational linguistics.
|
| 256 |
+
Renato Negrinho, Matthew Gormley, and Geoffrey J Gordon. 2018. Learning beam search policies via imitation learning. In NeurIPS.
|
| 257 |
+
Stéphane Ross and Andrew Bagnell. 2014. Reinforcement and imitation learning via interactive no-regret learning. arXiv:1406.5979.
|
| 258 |
+
Stéphane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. AISTATS.
|
| 259 |
+
Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. AISTATS.
|
| 260 |
+
Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: chunking. arXiv cs/0009008.
|
| 261 |
+
Erik Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. NAACL.
|
| 262 |
+
Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with LSTMs. In ACL.
|
| 263 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
|
| 264 |
+
Sam Wiseman and Alexander Rush. 2016. Sequence-to-sequence learning as beam-search optimization. ACL.
|
| 265 |
+
Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam-search. In EMNLP.
|
anempiricalinvestigationofbeamawaretraininginsupertagging/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9d9dd53e7464745f23c325346144f5ded797614029d1e0cf044cd3cdb9df67c1
|
| 3 |
+
size 314162
|
anempiricalinvestigationofbeamawaretraininginsupertagging/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:81823c9d87a4e8a9b7d94d3080662436bb13d9b7ab46922d86a5ba1e5973c9b8
|
| 3 |
+
size 373284
|
anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b7342bcb0b21aeecd6235970dcf3756b29837bc0957985226bb5101dfd7c17e5
|
| 3 |
+
size 38715
|
anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e98b7aaf378c87fe9e9c0801b4a0cffe39d5df57d488b5c53de4ae9b7ed17b5c
|
| 3 |
+
size 46983
|
anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/484069f8-b429-4daa-8c1f-fbd61dcb9856_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:67d7f607fe8b4804804310f18145f303eeed9af2156168f1bfb632f20aa73d7e
|
| 3 |
+
size 200049
|
anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/full.md
ADDED
|
@@ -0,0 +1,146 @@
| 1 |
+
# An Empirical Methodology for Detecting and Prioritizing Needs during Crisis Events
|
| 2 |
+
|
| 3 |
+
M. Janina Sarol, Ly Dinh, Rezvaneh Rezapour, Chieh-Li Chin, Pingjing Yang, Jana Diesner
|
| 4 |
+
|
| 5 |
+
University of Illinois at Urbana-Champaign, IL, USA
|
| 6 |
+
|
| 7 |
+
{mjsarol,dinh4,rezapou2,cchin6,py2,jdiesner}@illinois.edu
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
In times of crisis, identifying essential needs is crucial to providing appropriate resources and services to affected entities. Social media platforms such as Twitter contain a vast amount of information about the general public's needs. However, the sparsity of information and the amount of noisy content present a challenge for practitioners to effectively identify relevant information on these platforms. This study proposes two novel methods for two needs detection tasks: 1) extracting a list of needed resources, such as masks and ventilators, and 2) detecting sentences that specify who-needs-what resources (e.g., we need testing). We evaluate our methods on a set of tweets about the COVID-19 crisis. For extracting a list of needs, we compare our results against two official lists of resources, achieving 0.64 precision. For detecting who-needs-what sentences, we compared our results against a set of 1,000 annotated tweets and achieved a 0.68 F1-score.
|
| 12 |
+
|
| 13 |
+
# 1 Introduction
|
| 14 |
+
|
| 15 |
+
During crises, substantial amounts of information are shared and discussed on social media (Palen and Anderson, 2016; Reuter et al., 2018). Some of these posts may contain relevant information about the needs of affected and at-risk populations (Basu et al., 2018; Dutt et al., 2019; Purohit et al., 2014). The recent COVID-19 virus outbreak is no exception; online platforms such as Twitter have been crucial means for sharing information about the impact of the outbreak (Singh et al., 2020), personal accounts from infected individuals (Jimenez-Sotomayor et al., 2020), and updates from medical professionals (Rosenberg et al., 2020). Crisis responders and practitioners have also turned to online platforms to obtain actionable information that could aid them in response planning (Vieweg et al., 2010; Zade et al., 2018). In particular, scholars in crisis informatics have provided solutions
|
| 16 |
+
|
| 17 |
+
to detect relevant Twitter messages that express resource needs and availabilities related to crisis events, e.g., during the 2015 Nepal earthquake (Basu et al., 2017; Dutt et al., 2019) and the 2015 Chennai floods (Sarkar et al., 2019). This paper builds upon and extends prior literature by proposing two needs detection tasks and applying needs detection to data about the COVID-19 crisis. In particular, we (1) extract a list of needs by using word embeddings to identify closest terms to needs and supplies with respect to their cosine similarity, and (2) detect who-needs-what sentences to determine social entities who need particular resources.
|
| 18 |
+
|
| 19 |
+
This study makes two contributions. First, we propose a method for identifying and prioritizing resource needs during a crisis. Second, we present a set of heuristics to determine the social entities that need specific resources. Overall, our study provides a reliable set of methods that might help response professionals identify immediate types of needs in the general population quickly and make effective decisions accordingly.
|
| 20 |
+
|
| 21 |
+
# 2 Related Work
|
| 22 |
+
|
| 23 |
+
A large body of literature from the field of crisis informatics has used natural language processing and machine learning methods to extract relevant situational-awareness content from large text corpora (Vieweg et al., 2010; Verma et al., 2011). One of several categories of situational-awareness content is needs expressed by (affected) individuals and communities (Imran et al., 2016; Purohit et al., 2014; Varga et al., 2013; Temnikova et al., 2015). Imran, Mitra, and Castillo (2016) analyzed tweets about eight major natural disaster events and found that about 21.7% of all tweets contained crucial information about urgent needs for shelter, donations, and essential supplies, such as medical aid, clothing, food, and water. Varga and colleagues (Varga et al., 2013) leveraged machine learning models to match tweets indicating problems with tweets offering aid, in order to minimize the waste of resources during a crisis. Similarly, Purohit and colleagues (2014) classified tweets based on requests and offers of resources, and further matched requests with offers using regular expressions. Temnikova, Castillo, and Vieweg (2015) developed a lexical resource that contains 23 categories of situational awareness, most of which are based on needs requested and resources available (e.g., clean water, shelter material), as well as services (e.g., rescue workers, relief work) to meet the needs. Basu and colleagues (2017; 2019) identified need and availability tweets, and matched the identified needs with availabilities (Basu et al., 2018). Our paper builds upon this prior work, which has primarily focused on classifying need/non-need tweets. More specifically, we propose methods that identify a general overview of the needs and specify where and by whom these resources are needed.
|
| 26 |
+
|
| 27 |
+
# 3 Data
|
| 28 |
+
|
| 29 |
+
We collected 665,667 tweets posted between February 28, 2020 and May 8, 2020, with a maximum of 10,000 samples for each day, using Coronavirus Hexagon<sup>1</sup>. Each tweet contains at least one of the following hashtags: #COVID19, #COVID-19, #coronavirusoutbreak, #WuhanCoronavirus, #2019nCoV, #CCPvirus, #coronavirus, #CoronavirusPandemic, #SARS-CoV-2, #coronavirus, #wuhanflu, #kungflu, #chineseviruscorona, #ChinaVirus19, #chinesevirus. Our sample includes only tweets from users in the United States and tweets written in English.
|
| 30 |
+
|
| 31 |
+
# 4 Methodology
|
| 32 |
+
|
| 33 |
+
# 4.1 Extracting a List of Needs
|
| 34 |
+
|
| 35 |
+
For detecting needs, we trained an embedding model on the dataset and identified the terms closest to the seed terms needs and supplies with respect to their cosine similarity. Specifically, we performed the following steps:
|
| 36 |
+
|
| 37 |
+
1. Detect phrases using AutoPhrase (Shang et al., 2018), setting the threshold for salient phrases to 0.8, and annotate dataset with phrases.
|
| 38 |
+
2. Split tweets into sentences and tokens using the NLTK (Loper and Bird, 2002) sentence and tweet tokenizer, respectively.
|
| 39 |
+
|
| 40 |
+
3. Run word2vec (Mikolov et al., 2013) on the tokenized sentences.
|
| 41 |
+
4. Select the top 100 nouns closest to the word embeddings of needs and supplies. These nouns are representative of the needed resources.
|
| 42 |
+
|
| 43 |
+
To identify nouns, we ran the NLTK part-of-speech (POS) tagger on the tweets (before phrase annotation). We considered nouns as words whose most frequent POS tag is a noun, and a phrase as a noun if its final token is a noun (e.g., testing-capacity is a noun as capacity is a noun).
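The pipeline of steps 2-4 can be sketched with gensim as below. `sentences` (the phrase-annotated, tokenized corpus) and `is_noun` (the majority-POS-tag heuristic described above) are assumed; pooling the neighbors of both seed terms and the specific `min_count`/`topn` values are simplifications of ours, not the paper's exact procedure.

```python
from gensim.models import Word2Vec

# `sentences` is the phrase-annotated, tokenized corpus from steps 1-2.
model = Word2Vec(sentences, vector_size=100, min_count=5, workers=4)

candidates = []
for seed in ("needs", "supplies"):
    # most_similar ranks vocabulary items by cosine similarity to the seed
    candidates += [w for w, _ in model.wv.most_similar(seed, topn=500)]

# dedupe preserving rank, keep noun candidates, truncate to the top 100
resources = [w for w in dict.fromkeys(candidates) if is_noun(w)][:100]
```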
|
| 44 |
+
|
| 45 |
+
# 4.2 Detecting Who-Needs-What Sentences
|
| 46 |
+
|
| 47 |
+
We developed a rule-based method to identify who-needs-what sentences, where who is an entity (noun or pronoun) and what is a resource or an item (noun). We leveraged the grammatical structure of sentences by using a dependency parser to identify sentences containing this triple, and designed two simple rules for this purpose.
|
| 48 |
+
|
| 49 |
+
The first rule considers the occurrence of the word need as a verb (as per its POS tag) in a sentence. This is a straightforward application of the who-needs-what format: we identified sentences where who is the subject and what is the direct object. After identifying that need (or one of its other word forms) is used as a verb, we selected sentences where the left child of need in the dependency parse tree is a nominal subject (nsubj) and the right child is a direct object (dobj). Figure 1 shows an example sentence that follows this rule and its dependency parse tree.

The second rule considers the use of the word need as a noun (as per its POS tag). Our initial data exploration identified many sentences of the form $X$ is in need of $Y$, where, in the dependency parse tree, the who and what are not direct children of the term need. The who is a child of a copular verb (e.g., is), which is an ancestor of need. The term linking the copular verb and need is a preposition.
|
| 50 |
+
|
| 51 |
+

|
| 52 |
+
Figure 1: Rule considering need as a verb
|
| 53 |
+
|
| 54 |
+

|
| 55 |
+
Figure 2: Rule considering need as a noun
|
| 56 |
+
|
| 57 |
+
That is, the copular verb is the term's parent, and need is its prepositional object (pobj). The what is a descendant of need, also linked through a preposition. Figure 2 shows an example sentence that conforms to this rule and its dependency parse tree.
|
| 58 |
+
|
| 59 |
+
Similar to the first needs detection task, we used the NLTK sentence and tweet tokenizer to split the tweets into sentences and tokens, respectively. We used spaCy (Honnibal and Montani, 2017) to generate the dependency parse trees. Our source code is available on GitHub<sup>2</sup>.
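The first rule translates directly into a few lines of spaCy. This is our illustration of the rule, not the authors' released code, and it assumes the `en_core_web_sm` model is installed.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def who_needs_what(sentence):
    """Return (who, what) if the sentence matches rule 1, else None."""
    doc = nlp(sentence)
    for tok in doc:
        if tok.lemma_ == "need" and tok.pos_ == "VERB":  # any word form of need
            who = [c for c in tok.lefts if c.dep_ == "nsubj"]
            what = [c for c in tok.rights if c.dep_ == "dobj"]
            if who and what:
                return who[0].text, what[0].text
    return None

print(who_needs_what("Hospitals need ventilators urgently."))
# expected: ('Hospitals', 'ventilators')
```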
|
| 60 |
+
|
| 61 |
+
# 4.3 Evaluation
|
| 62 |
+
|
| 63 |
+
There is no single comprehensive list of resources needed by people in the U.S. for the COVID-19 crisis that could serve as ground truth for evaluation. We found two sets of sources that we deemed suitable proxies for such a list. First, the World Health Organization's (WHO) essential resource planning guidelines (2020) provide a set of forecasting tools and documents for calculating the manpower, supplies, and equipment needed to respond adequately to the virus. Second, the U.S. Department of Health and Human Services (HHS) Office of Inspector General published the results of a survey about hospitals' experiences in responding to the pandemic (Grimm, 2020). To evaluate our results for the first needs detection task, we counted the number of matches between the generated list and the resources mentioned in the WHO and HHS documents, which gives a measure of precision. We report our results as precision@k, with k ranging from 10 to 100.
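Precision@k here is simply the fraction of the top-k ranked terms that appear in the reference lists. A minimal sketch, treating the WHO and HHS resources as a set of reference terms:

```python
def precision_at_k(ranked_terms, reference_terms, k):
    """Fraction of the top-k generated terms found in the reference set."""
    return sum(term in reference_terms for term in ranked_terms[:k]) / k
```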
|
| 64 |
+
|
| 65 |
+
For the who-needs-what detection task, two annotators identified who-needs-what sentences from a random set of 1,000 sentences that contained any word form of need (i.e., need, needs, needing, and needed). Each annotator was assigned 600 sentences, 200 of which also appeared in the other annotator's list; on this shared subset, inter-annotator agreement (Cohen's kappa) was 0.91.
|
| 66 |
+
|
| 67 |
+
We report our results for the who-needs-what detection task using precision, recall, and F1-score. We compare our work to the needs detection algorithm proposed by Basu and colleagues (Basu et al., 2017), who classified need vs. non-need tweets by ranking tweets based on their cosine similarity to the embeddings of the stemmed terms need and requir. We set the cut-off for need-related tweets to 250 and performed the same pre-processing steps outlined by Basu et al. (2017). While their work focuses on identifying all need tweets, it is the closest prior work to our task.
|
| 70 |
+
|
| 71 |
+
# 5 Results
|
| 72 |
+
|
| 73 |
+
Table 1 shows the top 10 resources generated by our first needs detection method. The full set of results is shown in Appendix A. Compared to the WHO guidelines, precision@10 is 0.8; compared to the HHS survey, it is 0.9. When both the WHO and HHS documents are considered, precision@10 is 1. The top 13 terms (and 19 of the top 20 terms) appear in at least one of the WHO or HHS documents. Overall, 41 of the top 100 terms appear in the WHO guidelines, 57 in the HHS survey, and 64 in at least one document.
|
| 74 |
+
|
| 75 |
+
Figure 3 shows the precision@k, where k is in increments of 10. There is a steep drop-off in the results when the cut-off is relaxed from 20 to 30, but the precision@k decreases at a more controlled rate after this drop-off. This indicates that the resources
|
| 76 |
+
|
| 77 |
+
<table><tr><td>Resource</td><td>WHO</td><td>HHS</td></tr><tr><td>medical-equipment</td><td>✓</td><td>✓</td></tr><tr><td>equipment</td><td>✓</td><td>✓</td></tr><tr><td>medical-supplies</td><td>✓</td><td>✓</td></tr><tr><td>protective-gear</td><td>✓</td><td>✓</td></tr><tr><td>stockpile</td><td>×</td><td>✓</td></tr><tr><td>protective-equipment</td><td>✓</td><td>✓</td></tr><tr><td>ppe</td><td>✓</td><td>✓</td></tr><tr><td>manufacturing</td><td>×</td><td>✓</td></tr><tr><td>personal-protective-equipment</td><td>✓</td><td>✓</td></tr><tr><td>medicines</td><td>✓</td><td>×</td></tr></table>
|
| 78 |
+
|
| 79 |
+
Table 1: Resources generated for COVID-19
|
| 80 |
+
|
| 81 |
+

|
| 82 |
+
Figure 3: Precision at different cutoffs
|
| 83 |
+
|
| 84 |
+
needed still appear lower in the list. High precision scores for lower k values suggest that our proposed method can identify resources needed and produce a rigorous ranking of needs.
|
| 85 |
+
|
| 86 |
+
For the who-needs-what detection task, our method produced a precision of 0.66, recall of 0.70, and F1-score of 0.68. Sentences that were incorrectly predicted as positive examples include those of the form if you need $x$, then..., while false negatives include more complex sentences. Using only the first rule produces a precision of 0.66, recall of 0.68, and an F1-score of 0.67, indicating that most who-needs-what sentences follow this rule, where the who is the subject of the sentence or clause and the what is the direct object. Our baseline method, inspired by the work of Basu et al. (2017), performed poorly, achieving only 0.28 precision, 0.26 recall, and a 0.27 F1-score.
|
| 87 |
+
|
| 88 |
+
# 6 Discussion
|
| 89 |
+
|
| 90 |
+
The first needs detection results vary in terms of specificity (e.g., equipment vs. medical equipment, personal protective equipment vs. respirators, funding vs. federal funding). Several retrieved terms that are not on the WHO and HHS lists are general terms such as goods, aid, efforts, programs, and assets. In addition, several terms are synonymous (e.g., personal protective equipment and PPE). These results suggest that clustering the terms may lead to a more distinct set of results.
|
| 91 |
+
|
| 92 |
+
It is not surprising that more of the terms we detected appeared in the HHS than in the WHO document because we collected our tweet data from the U.S., and the HHS document is from a survey of U.S. hospitals, while the WHO list is for a global audience. Overall, our results suggest two findings: 1) our needs detection method works, and 2) most COVID-19 needs mentioned on Twitter are either of medical or financial nature (see Appendix A).
|
| 93 |
+
|
| 94 |
+
Our who-needs-what detection results show that a simple rule-based method can retrieve sentences that mention entities needing resources and the resources needed (0.68 F1-score). This is an interesting finding with several implications. We can produce a simple white-box method for identifying who-needs-what sentences. While deep learning may increase the scores, our method requires no training data. Another implication of our findings is that mentioning needs on Twitter often follows a specific, uniform format, which could be due to the limited characters available per tweet. Testing the generalizability of this method on other crisis events is part of our future work.
|
| 95 |
+
|
| 96 |
+
While social media has been shown to be a valuable source of information during crises, finding useful information is still akin to finding a needle in a haystack. For our who-needs-what detection task, we only found 262 positive examples. Overall, our first needs detection method can generate a ranked set of needs for 600,000+ tweets in less than 30 minutes. Running steps such as phrase detection and POS tagging in parallel may improve this time further. For the who-needs-what detection task, our method can classify 1,000 sentences in 8 seconds.
|
| 97 |
+
|
| 98 |
+
# 7 Conclusions and Future Work
|
| 99 |
+
|
| 100 |
+
In this paper, we presented two needs detection methods: one for extracting a list of needed resources during a crisis, and another one for detecting the who-needs-what sentences. We believe that these two methods are helpful in capturing the broad range of needs that emerges during crisis events. Specific to the COVID-19 crisis, our results suggest that the essential needs are protective equipment and financial assistance. Our methods can help detect the essential needs of the general population and affected stakeholders so they can properly plan and respond effectively.
|
| 101 |
+
|
| 102 |
+
In future work, we aim to expand our methodology to identify the availability of needs, if they have been met, and social entities who address them. In addition, we plan to differentiate between a more comprehensive set of requests, including hopes, wants, and wishes during a crisis.
|
| 103 |
+
|
| 104 |
+
# Acknowledgments
|
| 105 |
+
|
| 106 |
+
This work was supported in part by the U.S. Department of Homeland Security under Grant Award Number 2015-ST-061-CIRC01. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.
|
| 109 |
+
|
| 110 |
+
# References
|
| 111 |
+
|
| 112 |
+
Moumita Basu, Kripabandhu Ghosh, Somenath Das, Ratnadeep Dey, Somprakash Bandyopadhyay, and Saptarshi Ghosh. 2017. Identifying post-disaster resource needs and availabilities from microblogs. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, pages 427-430.
|
| 113 |
+
Moumita Basu, Anurag Shandilya, Kripabandhu Ghosh, and Saptarshi Ghosh. 2018. Automatic matching of resource needs and availabilities in microblogs for post-disaster relief. In Companion Proceedings of the The Web Conference 2018, pages 25-26.
|
| 114 |
+
Moumita Basu, Anurag Shandilya, Prannay Khosla, Kripabandhu Ghosh, and Saptarshi Ghosh. 2019. Extracting resource needs and availabilities from microblogs for aiding post-disaster relief operations. IEEE Transactions on Computational Social Systems, 6(3):604-618.
|
| 115 |
+
Ritam Dutt, Moumita Basu, Kripabandhu Ghosh, and Saptarshi Ghosh. 2019. Utilizing microblogs for assisting post-disaster relief operations via matching resource needs and availabilities. Information Processing & Management, 56(5):1680-1697.
|
| 116 |
+
Christi A Grimm. 2020. Hospital experiences responding to the COVID-19 pandemic: Results of a national pulse survey march 23-27, 2020. Washington DC: Office of the Inspector General; April 3, 2020. Report no. OEI-06-20-00300.
|
| 117 |
+
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing.
|
| 118 |
+
Muhammad Imran, Prasenjit Mitra, and Carlos Castillo. 2016. Twitter as a lifeline: human-annotated Twitter corpora for NLP of crisis-related messages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1638–1643, Paris, France. European Language Resources Association (ELRA).
|
| 119 |
+
Maria Renee Jimenez-Sotomayor, Carolina Gomez-Moreno, and Enrique Soto-Perez-de Celis. 2020. Coronavirus, ageism, and Twitter: an evaluation of tweets about older adults and COVID-19. Journal of the American Geriatrics Society, 68(8):1661-1665.
|
| 120 |
+
Edward Loper and Steven Bird. 2002. NLTK: the natural language toolkit. arXiv preprint cs/0205028.
|
| 121 |
+
|
| 122 |
+
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, pages 3111-3119.
|
| 123 |
+
World Health Organization. 2020. Coronavirus disease (COVID-19) technical guidance: Essential resource planning. Available at https://www.who.int/emergencies/diseases/novel-coronavirus-2019/technical-guidance/covid-19-critical-items.
|
| 124 |
+
Leysia Palen and Kenneth M Anderson. 2016. Crisis informatics—new data for extraordinary times. Science, 353(6296):224-225.
|
| 125 |
+
Hemant Purohit, Carlos Castillo, Fernando Diaz, Amit Sheth, and Patrick Meier. 2014. Emergency-relief coordination on social media: Automatically matching resource requests and offers. First Monday, 19(1).
|
| 126 |
+
Christian Reuter, Amanda Lee Hughes, and Marc-Andre Kaufhold. 2018. Social media in crisis management: An evaluation and analysis of crisis informatics research. International Journal of Human-Computer Interaction, 34(4):280-294.
|
| 127 |
+
Hans Rosenberg, Shahbaz Syed, and Salim Rezaie. 2020. The Twitter pandemic: the critical role of Twitter in the dissemination of medical information and misinformation during the COVID-19 pandemic. Canadian Journal of Emergency Medicine, 22(4):418-421.
|
| 128 |
+
Abhinav Sarkar, Swagata Roy, and Moumita Basu. 2019. Curating resource needs and availabilities from microblog during a natural disaster: a case study on the 2015 Chennai floods. In Proceedings of the India Joint International Conference on Data Science and Management of Data, pages 338-341.
|
| 129 |
+
Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R Voss, and Jiawei Han. 2018. Automated phrase mining from massive text corpora. IEEE Transactions on Knowledge and Data Engineering, 30(10):1825-1837.
|
| 130 |
+
Lisa Singh, Shweta Bansal, Leticia Bode, Ceren Budak, Guangqing Chi, Kornraphop Kawintiranon, Colton Padden, Rebecca Vanarsdall, Emily Vraga, and Yanchen Wang. 2020. A first look at COVID-19 information sharing on Twitter. arXiv preprint arXiv:2003.13907.
|
| 131 |
+
Irina P Temnikova, Carlos Castillo, and Sarah Vieweg. 2015. Emterms 1.0: A terminological resource for crisis tweets. In Proceedings of the ISCRAM 2015 Conference, pages 134-146.
|
| 132 |
+
István Varga, Motoki Sano, Kentaro Torisawa, Chikara Hashimoto, Kiyonori Ohtake, Takao Kawai, Jong-Hoon Oh, and Stijn De Saeger. 2013. Aid is out there: Looking for help from tweets during a large
|
| 133 |
+
|
| 134 |
+
scale disaster. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1619-1629, Sofia, Bulgaria. Association for Computational Linguistics.
|
| 135 |
+
|
| 136 |
+
Sudha Verma, Sarah Vieweg, William J Corvey, Leysia Palen, James H Martin, Martha Palmer, Aaron Schram, and Kenneth M Anderson. 2011. Natural language processing to the rescue? extracting "situational awareness" tweets during mass emergency. In Fifth International AAAI Conference on Weblogs and Social Media.
|
| 137 |
+
|
| 138 |
+
Sarah Vieweg, Amanda L Hughes, Kate Starbird, and Leysia Palen. 2010. Microblogging during two natural hazards events: what twitter may contribute to situational awareness. In Proceedings of the SIGCHI conference on human factors in computing systems, pages 1079-1088.
|
| 139 |
+
|
| 140 |
+
Himanshu Zade, Kushal Shah, Vaibhavi Rangarajan, Priyanka Kshirsagar, Muhammad Imran, and Kate Starbird. 2018. From situational awareness to actionability: Towards improving the utility of social media data for crisis response. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW).
|
| 141 |
+
|
| 142 |
+
# A Appendix: Resources generated for COVID-19
|
| 143 |
+
|
| 144 |
+
<table><tr><td>medical-equipment</td><td>materials</td><td>hand-sanitizer</td><td>grants</td></tr><tr><td>equipment</td><td>access</td><td>face-masks</td><td>relief</td></tr><tr><td>medical-supplies</td><td>demand</td><td>gloves</td><td>essential-workers</td></tr><tr><td>protective-gear</td><td>essential-goods</td><td>local-hospitals</td><td>capability</td></tr><tr><td>stockpile</td><td>production</td><td>respirators</td><td>groceries</td></tr><tr><td>protective-equipment</td><td>face-shields</td><td>healthcare-workers</td><td>devices</td></tr><tr><td>ppe</td><td>personnel</td><td>recipients</td><td>pharmacies</td></tr><tr><td>manufacturing</td><td>federal-funding</td><td>refused</td><td>flexibility</td></tr><tr><td>personal-protective-equipment</td><td>reagents</td><td>essential-supplies</td><td>masks</td></tr><tr><td>medicines</td><td>federal-assistance</td><td>barriers</td><td>living-wage</td></tr><tr><td>#ppe</td><td>ventilators</td><td>demands</td><td>national-stockpile</td></tr><tr><td>supply</td><td>systems</td><td>repairs</td><td>medical-facilities</td></tr><tr><td>distribution</td><td>assets</td><td>relief-funds</td><td>assistance</td></tr><tr><td>goods</td><td>capacity</td><td>food-banks</td><td>packages</td></tr><tr><td>manufacturers</td><td>programs</td><td>utilities</td><td>trace</td></tr><tr><td>funds</td><td>aid</td><td>meds</td><td>dpa</td></tr><tr><td>plans</td><td>economic-relief</td><td>testing-capacity</td><td>purchases</td></tr><tr><td>essentials</td><td>kits</td><td>defense-production-act</td><td>handouts</td></tr><tr><td>essential-items</td><td>gowns</td><td>childcare</td><td>machines</td></tr><tr><td>financial-relief</td><td>food</td><td>ability</td><td>deliveries</td></tr><tr><td>needing</td><td>funding</td><td>services</td><td>local-governments</td></tr><tr><td>necessities</td><td>efforts</td><td>providers</td><td>paid-sick-leave</td></tr><tr><td>critical-supplies</td><td>medication</td><td>requirements</td><td>shortages</td></tr><tr><td>clean-water</td><td>supply-chain</td><td>surgical-masks</td><td>failed</td></tr><tr><td>resources</td><td>facilities</td><td>expenses</td><td>hospitals</td></tr></table>
|
| 145 |
+
|
| 146 |
+
Table A1: Resources generated for COVID-19
|
anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:dedd3556122615429b9bf62d8930306eeb78a3b96b646e8c8a508e920e0e4619
|
| 3 |
+
size 189528
|
anempiricalmethodologyfordetectingandprioritizingneedsduringcrisisevents/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3c00df01e1e619853b5c1e237add015e13a75fb779b3ef6e0adbf6dfeda6da1f
|
| 3 |
+
size 149505
|
anevaluationmethodfordiachronicwordsenseinduction/7b2eec2b-e541-4104-975d-2d7cd4d77ca3_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e3bd5828e10bf8f77b0267f90a1bbfc5ad3af5fdeec4b32344f6c13adc32de54
|
| 3 |
+
size 79879
|
anevaluationmethodfordiachronicwordsenseinduction/7b2eec2b-e541-4104-975d-2d7cd4d77ca3_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:28fc8d2369ba8e5ef5304dbbfaa2faf3c1b3f75d074cd0aa869c87ca071283c0
|
| 3 |
+
size 95402
|
anevaluationmethodfordiachronicwordsenseinduction/7b2eec2b-e541-4104-975d-2d7cd4d77ca3_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:671f6291a46a2c4aad2859d604fbc6a6c6c5f08611dedd23500a404e7c1abcc8
|
| 3 |
+
size 569073
|
anevaluationmethodfordiachronicwordsenseinduction/full.md
ADDED
|
@@ -0,0 +1,324 @@
| 1 |
+
# An Evaluation Method for Diachronic Word Sense Induction
|
| 2 |
+
|
| 3 |
+
# Ashjan Alsulaimani
|
| 4 |
+
|
| 5 |
+
School of Computer Science and Statistics & Trinity Centre for Computing and Language Studies Trinity College Dublin alsulaia@tcd.ie
|
| 6 |
+
|
| 7 |
+
# Erwan Moreau
|
| 8 |
+
|
| 9 |
+
School of Computer Science and Statistics & Adapt Centre Trinity College Dublin moreaue@scss.tcd.ie
|
| 10 |
+
|
| 11 |
+
# Carl Vogel
|
| 12 |
+
|
| 13 |
+
School of Computer Science and Statistics & Trinity Centre for Computing and Language Studies Trinity College Dublin vogel@scss.tcd.ie
|
| 14 |
+
|
| 15 |
+
# Abstract
|
| 16 |
+
|
| 17 |
+
The task of Diachronic Word Sense Induction (DWSI) aims to identify the meaning of words from their context, taking the temporal dimension into account. In this paper we propose an evaluation method based on large-scale time-stamped annotated biomedical data, and a range of evaluation measures suited to the task. The approach is applied to two recent DWSI systems, thus demonstrating its relevance and providing an in-depth analysis of the models.
|
| 18 |
+
|
| 19 |
+
# 1 Introduction
|
| 20 |
+
|
| 21 |
+
Words naturally evolve through time: their meaning may undergo subtle or radical changes, resulting in a variety of senses. For example, the word mouse had only the meaning of the animal until it acquired a brand new sense in 1980 as a computing device. But sense changes are not always so definite: a word's usage may drift progressively from its original sense or be affected by historical events. A recent example of this phenomenon is the word coronavirus, which saw a dramatic usage surge in 2020 because of the emergence of its SARS-CoV-2 variant. Before 2020, the word coronavirus was mostly a technical term describing a family of viruses, but it is now used in the mainstream media to mean the specific SARS-CoV-2 virus, the related COVID-19 disease, or even the general health crisis and its consequences.
|
| 22 |
+
|
| 23 |
+
The dynamic behaviour of words contributes to semantic ambiguity, which is a challenge in many NLP tasks. The ability to detect such changes across time could potentially benefit various applications, such as machine translation and information retrieval. In the biomedical domain, it can improve the quality of the automatic identification of senses in contexts where no complete terminology is available, such as with clinical notes, and to assist indexers who build terminology resources.
|
| 24 |
+
|
| 25 |
+
Recent research has focused on detecting semantic shifts across time (Kutuzov et al., 2018), but also on Diachronic Word Sense Induction (Emms and Kumar Jayapal, 2016). The task of Diachronic Word Sense Induction (DWSI) is similar to Word Sense Induction (WSI) in identifying the meaning of words from their context, but also takes the temporal dimension into account.
|
| 26 |
+
|
| 27 |
+
In §2 we briefly present two Bayesian models that have been proposed for the DWSI task: Emms and Kumar Jayapal (2016) proposed a model which represents the evolution of word senses in order to detect the emergence year of new senses, while Frermann and Lapata (2016) proposed a different model, focusing instead on capturing the subtle meaning changes within a sense over time. However, evaluating such models is difficult, as the lack of large-scale time-stamped data prevents direct quantitative evaluation.
|
| 28 |
+
|
| 29 |
+
In this paper we introduce a method which relies on annotated biomedical data to evaluate DWSI. While the general aim of this article is the evaluation of DWSI systems across domains and genres, the biomedical domain is the only one to date which offers suitable data for the task. Our approach leverages the availability of unambiguous manual annotations (and publication years) in the Medline citation database in order to build a large time-stamped dataset, as detailed in §3. In §4 we introduce a range of evaluation measures which can be used to directly and quantitatively measure the performance of a DWSI system on such an annotated dataset. Finally in §5 we compare the two aforementioned models using our evaluation method, which demonstrates the relevance of the approach and allows a deep analysis of the models.
|
| 30 |
+
|
| 31 |
+
# 2 State of the Art
|
| 32 |
+
|
| 33 |
+
# 2.1 Diachronic Word Sense Induction
|
| 34 |
+
|
| 35 |
+
Most existing work on diachronic meaning change has focused on static methods, in the sense that the learning algorithms are either time-unaware or applied to independent periods of time (Lau et al., 2012; Cook et al., 2014; Mitra et al., 2015). For example, Mitra et al. (2015) split the data into eras and then apply WSI independently on each era subset in order to identify new senses of a word. However, recent approaches have introduced time-aware probabilistic models in order to represent the changes in word meaning over time.
|
| 36 |
+
|
| 37 |
+
# 2.2 The NEO Model
|
| 38 |
+
|
| 39 |
+
The model introduced by Emms and Kumar Jayapal (2016), called NEO<sup>2</sup> herein, is a generative Bayesian model that chooses a sense $s$ given a time $t$ (respecting sense-given-time probabilities $P(s|t)$), then chooses context words $\mathbf{w}$ given the sense $s$ (respecting word-given-sense probabilities $P(w|s)$). The joint probability distribution over the parameters is defined as in (1).
$$
\begin{array}{l}
P(t, s, \mathbf{w}; \pi_{1:N}, \theta_{1:K}) \\
= \prod_{t} \mathrm{Dirich}\left(\pi_{t}; \gamma_{\pi}\right) \times \prod_{k} \mathrm{Dirich}\left(\theta_{k}; \gamma_{\theta}\right) \tag{1} \\
\times\, P(t; \tau_{1:N})\, P(s \mid t; \pi_{1:N}) \prod_{w_{i} \in \mathbf{w}} P(w_{i} \mid s; \theta_{1:K})
\end{array}
$$
The authors' aim is to capture sense changes in order to detect the emergence, i.e. origin time, of a novel sense. In this model the probabilities of the context words are represented independently from time, which means that senses can change over time with respect to each other, but the probabilities of the words representing a particular sense are assumed to be constant.
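For concreteness, here is a minimal sketch (ours, not the authors' code) of this generative story in numpy; `tau`, `pi` and `theta` are assumed to be already-estimated parameter arrays of shapes `(N,)`, `(N, K)` and `(K, V)`, each row summing to one.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_neo_instance(tau, pi, theta, n_context=10):
    """Draw one instance: a time t, a sense s given t, then context
    words given s; word probabilities are independent of time."""
    t = rng.choice(len(tau), p=tau)             # t ~ P(t; tau)
    s = rng.choice(pi.shape[1], p=pi[t])        # s ~ P(s | t; pi_t)
    w = rng.choice(theta.shape[1], size=n_context, p=theta[s])
    return t, s, w
```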
# 2.3 The SCAN Model
Frermann and Lapata (2016) proposed a generative Bayesian model inspired from dynamic topic modeling (Blei and Lafferty, 2006), hereafter called SCAN, which shares similarities with NEO but is more complex: given a time $t$ , a sense $s$ is chosen following the distribution of the parameter $\phi_t$ ; then given a sense $s$ and a time $t$ , the context words $\mathbf{w}$ are drawn following the distribution of the parameter $\psi_{s,t}$ . This design allows the representation of a sense with a different distribution of words at different times, as opposed to NEO. Thus in the
SCAN model, time-adjacent representations of a sense are co-dependent, allowing the model to capture meaning change in a smooth and gradual way. This is made possible by defining their prior as an intrinsic Gaussian Markov Random Field (iGMRF). Following the structural dependencies defined through the iGMRF prior, Frermann (2017) expresses the posterior distribution over the latent variables given the input $\mathbf{w}$, the parameters $a, b, \kappa^{\Psi}$ and the choice of Gamma ($Ga$) and logistic normal ($N$) distributions:
$$
\begin{array}{l}
P(s, \Phi, \Psi, \kappa^{\Phi} \mid \mathbf{w}, \kappa^{\Psi}, a, b) \\
\propto Ga\left(\kappa^{\Phi}; a, b\right) \prod_{t} \left[ \prod_{k} N\left(\Psi^{t,k} \mid \kappa^{\Psi}\right) \prod_{d} \left[ \Phi_{s}^{t} \prod_{w^{i} \in \mathbf{w}} \Psi_{w^{i}}^{s,t} \right] \right] \tag{2}
\end{array}
$$
where $\kappa^{\Phi}$ is drawn from a conjugate Gamma prior and $\kappa^{\Psi}$ is estimated during inference; both control the degree of variation of the sense-specific word distributions over time. Thus the SCAN model is meant to capture changes between senses but also changes of meaning within a sense.
# 2.4 Existing Evaluation Methods
One way to find the ground truth of sense emergence is by using a dictionary. This approach is taken by many studies (Rohrdantz et al., 2011; Lau et al., 2012; Cook et al., 2014; Mitra et al., 2015).
In (Emms and Kumar Jayapal, 2016), the model is evaluated qualitatively on the Google NGrams corpus (Michel et al., 2011), using a few manually selected target words. The ground truth is obtained by the "tracks-plot" method, which consists in representing a target sense by a few hand-picked co-occurrences (e.g. "screen", "click" for mouse as a computing device), then tracking these co-occurrences over time and taking the mean of the separate tracks. An emergence detection algorithm, "EmergeTime", is proposed in (Jayapal, 2017) to detect the year of emergence either from the "tracks-plot" data (ground truth emergence) or from a predicted distribution $P(s|t)$ (predicted emergence). The algorithm checks whether there is a year in the $P(s|t)$ plot which satisfies the following constraints (a sketch of this heuristic is given after the list):
- The year is followed by a 10-year window of sufficient increase in probabilities: $85\%$ of the years show a climb in probabilities of $2-3\%$ of the maximum value.
- $80\%$ of the preceding years are lower than 0.1 (i.e. close to zero in probability).
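The published description leaves some room for interpretation, so the following is only a minimal sketch of such a heuristic, with the thresholds exposed as parameters; the exact values and windowing used in (Jayapal, 2017) may differ.

```python
import numpy as np

def detect_emergence(p_s_t, window=10, climb_frac=0.02,
                     climb_ratio=0.85, low_thresh=0.1, low_ratio=0.8):
    """Return the index of the first year satisfying both constraints,
    or None if no emergence is detected; p_s_t is the P(s|t) series."""
    p = np.asarray(p_s_t, dtype=float)
    min_climb = climb_frac * p.max()   # a climb of ~2% of the maximum
    for y in range(1, len(p) - window):
        climbs = np.diff(p[y:y + window + 1]) >= min_climb
        if climbs.mean() < climb_ratio:        # 85% of the window climbs
            continue
        if (p[:y] < low_thresh).mean() >= low_ratio:  # 80% of past low
            return y
    return None
```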
Emms and Kumar Jayapal (2016) evaluate the quality of the sense clustering qualitatively by inspecting the top 30 ranked words that are associated with a specific sense.
Frermann and Lapata (2016) present four indirect evaluation methods, relying on closely related tasks used as applications of their model:
- "Temporal Dynamic": qualitative evaluation of the appearance of a new sense.
- "Novel Sense Detection": evaluation using Mitra et al. (2015)'s complex approach based on WordNet.<sup>3</sup>
- "Word Meaning Change": evaluation using Gulordava and Baroni (2011)'s method and data for detecting meaning change between two time slices.
- "Task-based Evaluation": extrinsic evaluation on the SemEval Diachronic Text Evaluation task (Popescu and Strapparava, 2015), designed for supervised learning.
Despite the authors' best efforts to compare their results against others, they state that the "scores [that they obtain] are not directly comparable due to the differences in training corpora, focus and reference times, and candidate words" (Frermann and Lapata, 2016, p.39). Additionally, the models of both Emms and Kumar Jayapal (2016) and Frermann and Lapata (2016) offer a continuous time representation $P(s|t)$. The sophistication of these systems deserves a more suitable evaluation framework, since the authors have to simplify their outcomes in order to compare them against previous works which rely on models representing only independent time slices.
A recent evaluation framework is proposed by Schlechtweg et al. (2020) for the task of Unsupervised Lexical Semantic Change Detection (LSC) in SemEval-2020. However, the benchmark datasets contain only two independent periods of time, and the subtasks are only designed to capture whether there is a change (subtask 1) or the extent of a change (subtask 2). More precisely, as opposed to the DWSI task, the subtasks do not capture how many distinct senses exist in the data, what kind of change happens over time, to which sense, or the emergence year of a novel sense. Although the annotation process involves clustering senses and computing sense frequency distributions for the two periods of time, the sense information is neglected.
Instead, the target values of the subtasks are based on "change scores" which represent only the existence or degree of LSC. As a result of this simplification, the evaluation methods used in Unsupervised LSC are incompatible with the WSI and DWSI tasks: the task provides neither a way to predict the sense of an instance nor the set of senses of a polysemous target word and their prevalence.
# 3 A Biomedical Dataset for DWSI
The DWSI task requires not only target words with several senses, but also time-stamped data for every target word. The evaluation of DWSI is challenging because manual annotation of such a large number of instances (since they span many years) would be prohibitively costly. In this section, we propose a method to collect diachronic data for ambiguous terms in medical terminologies.
# 3.1 Data Collection Process
Our method relies on the medical literature and exploits medical terminology resources: Medline<sup>5</sup> is a database referencing most of the biomedical literature (30 million citations). The citations are annotated with MeSH descriptors. MeSH<sup>6</sup> (Medical Subject Headings) is "the US National Library of Medicine (NLM) controlled vocabulary thesaurus used for indexing articles for PubMed." The Unified Medical Language System (UMLS) Metathesaurus is "a large biomedical thesaurus that is organized by concept, or meaning, and it links similar names for the same concept" (Bodenreider, 2004).<sup>7</sup> Each concept in UMLS is identified by a Concept Unique Identifier (CUI), and all the terms listed in UMLS are assigned a CUI. Since UMLS includes MeSH terms, there is a partial mapping between MeSH descriptors and UMLS CUIs.
The MSH WSD data (Jimeno-Yepes et al., 2011) consists of 203 ambiguous medical terms, each provided with the list of CUIs which identify the different meanings of the term. This dataset was created for the Word Sense Disambiguation task,
so the instances it contains are labelled by CUI (sense) but they are not time-stamped. We collect a time-stamped dataset as follows (a schematic sketch follows the list):
1. The MSH WSD data provides us with target terms and CUIs.
2. For every CUI, the corresponding MeSH descriptor is extracted from UMLS.
3. From Medline, all the citations labeled with a particular MeSH descriptor are extracted (title, publication year and abstract if any).
4. When available, the text of the full article is retrieved from PubMed Central.<sup>8</sup>
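Schematically, the collection amounts to the join sketched below; every container and key name is a hypothetical placeholder standing in for locally downloaded copies of the MSH WSD data, UMLS and Medline, not an actual NLM API.

```python
def collect_instances(msh_wsd_targets, cui_to_mesh, medline_by_mesh):
    """Join MSH WSD targets, a UMLS CUI->MeSH mapping and Medline
    citations into a time-stamped, sense-labelled document collection."""
    dataset = {}
    for term, cuis in msh_wsd_targets.items():            # step 1
        for cui in cuis:
            mesh = cui_to_mesh.get(cui)                   # step 2
            if mesh is None:
                continue
            for cit in medline_by_mesh.get(mesh, []):     # step 3
                dataset.setdefault((term, cui), []).append({
                    "title": cit["title"],
                    "year": cit["year"],
                    "abstract": cit.get("abstract", ""),
                })
    return dataset
```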
# 3.2 Data Pre-processing
For every target and every sense (CUI), a collection of documents made of titles, abstracts and full articles is obtained. Every occurrence of the target term in a document is assumed to have the sense given by the CUI.<sup>9</sup> In the interest of maximising the number of instances available for each year, we also collect the full list of terms associated with the CUI from UMLS and substitute every occurrence of such a term with the ambiguous target. In both cases, the longest possible term is matched in order to capture the most specific expressions.<sup>10</sup>
spaCy<sup>11</sup> is used to tokenise the documents into sentences and words. Using a global stopword list based on token frequencies, the most frequent tokens such as non-content words (the, a, however) and punctuation signs (!, %) are removed from the context. Every occurrence of the target in a document is extracted together with its 10-word context (5 words on each side). In order to provide the DWSI systems with sufficient data for every year, we only include the longest consecutive period with at least 4 instances every year across senses.
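A minimal sketch of this extraction step, assuming a standard spaCy pipeline and a precomputed stopword list; unlike the actual pipeline, this simplified version matches single-token targets only and skips the longest-match substitution of UMLS terms described above.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any spaCy pipeline with a tokenizer

def extract_contexts(text, target, stopwords, window=5):
    """Return every occurrence of the target with 5 filtered context
    words on each side (stopwords and punctuation are removed first)."""
    doc = nlp(text)
    tokens = [t.lower_ for t in doc
              if not t.is_punct and t.lower_ not in stopwords]
    contexts = []
    for i, tok in enumerate(tokens):
        if tok == target:
            contexts.append(tokens[max(0, i - window):i]
                            + tokens[i + 1:i + 1 + window])
    return contexts
```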
At the end of the process, the dataset contains 188 targets (out of the 203 initial targets).<sup>12</sup> 175 targets have two senses, 12 have three and one has five senses.
There are 61,352 instances per sense on average.<sup>13</sup> 102 senses out of 391 have an emergence according to the "EmergeTime" method.<sup>14</sup>
# 4 Evaluation
As explained in §3, the collected dataset contains sense labels which can be used to directly evaluate a DWSI system in a reliable way. Since by definition the output of an unsupervised clustering algorithm is unlabeled, we propose in §4.1 a method to match a gold sense with a predicted sense. Thanks to this matching method, a system can be evaluated externally, in a way similar to a supervised WSD system. We propose several evaluation methods, each meant to capture the performance of a DWSI system from a different perspective.
# 4.1 Global Maximum Matching Method
After estimating the model, the posterior probability is calculated for every instance, according to Eq. (3) for NEO and Eq. (4) for SCAN. The sense corresponding to the maximum probability is assigned to the instance.
$$
P(S \mid t^{d}, \mathbf{w}^{d}) = \frac{P(S, t^{d}, \mathbf{w}^{d})}{\sum_{S^{\prime}} P(S^{\prime}, t^{d}, \mathbf{w}^{d})} \tag{3}
$$
$$
P(S \mid t^{d}, \mathbf{w}^{d}) \propto P(S \mid t^{d})\, P(\mathbf{w}^{d} \mid t^{d}, S) \tag{4}
$$
The pairs of gold/predicted senses are matched iteratively based on their joint frequency. At every iteration, the pair corresponding to the highest frequency (global maximum) in the table is matched. Once a gold sense is matched with a predicted sense, neither the gold nor the predicted sense can be matched again with another sense. This eliminates the possibility of having two different gold senses matched with the same predicted sense or two different predicted senses matched with the same gold sense, an issue present in the methods of Agirre and Soroa (2007) and Manandhar et al. (2010). Moreover, by matching the largest senses first, the number of incorrectly matched instances is minimized. An example is provided in table 1.
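This procedure fits in a few lines; the sketch below (ours) takes a contingency table with predicted senses as rows and gold senses as columns, as in table 1.

```python
import numpy as np

def global_maximum_matching(contingency):
    """Iteratively match predicted and gold senses on the joint
    frequency table: pick the global maximum, then exclude its row
    (predicted sense) and column (gold sense) from further matching."""
    table = np.array(contingency, dtype=float)
    matching = {}
    for _ in range(min(table.shape)):
        pred, gold = np.unravel_index(np.argmax(table), table.shape)
        matching[pred] = gold
        table[pred, :] = -1   # this predicted sense is now taken
        table[:, gold] = -1   # this gold sense is now taken
    return matching
```

Applied to the top contingency table of table 1, this reproduces the matching shown in the bottom table.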
# 4.2 Based on Clusters of Instances
# 4.2.1 Clustering Classification Measures
<table><tr><td></td><td>C0030131</td><td>C0030625</td><td>C0078944</td><td>C0149576</td><td>C0429865</td></tr><tr><td>0</td><td>608</td><td>502</td><td>4680</td><td>352</td><td>5171</td></tr><tr><td>1</td><td>108</td><td>191</td><td>1963</td><td>466</td><td>17345</td></tr><tr><td>2</td><td>131</td><td>220</td><td>2139</td><td>484</td><td>16128</td></tr><tr><td>3</td><td>153</td><td>230</td><td>2684</td><td>637</td><td>26222</td></tr><tr><td>4</td><td>1313</td><td>1623</td><td>885</td><td>98</td><td>569</td></tr></table>
<table><tr><td></td><td>C0030131</td><td>C0030625</td><td>C0078944</td><td>C0149576</td><td>C0429865</td></tr><tr><td>0</td><td>608</td><td>502</td><td>4680</td><td>352</td><td>-</td></tr><tr><td>1</td><td>108</td><td>191</td><td>1963</td><td>466</td><td>-</td></tr><tr><td>2</td><td>131</td><td>220</td><td>2139</td><td>484</td><td>-</td></tr><tr><td>3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>4</td><td>1313</td><td>1623</td><td>885</td><td>98</td><td>-</td></tr></table>
<table><tr><td>Predicted sense</td><td>Gold sense</td></tr><tr><td>0</td><td>C0078944</td></tr><tr><td>1</td><td>C0030131</td></tr><tr><td>2</td><td>C0149576</td></tr><tr><td>3</td><td>C0429865</td></tr><tr><td>4</td><td>C0030625</td></tr></table>
Table 1: Global maximum matching example. The top contingency table shows the number of instances for every predicted/gold sense pair (the predicted sense is assigned by calculating the maximum of the posterior probability). At the first iteration, senses C0429865 and 3 are matched based on the global maximum (26222). The second table shows the remaining frequencies at the second iteration. The bottom table shows the resulting matching at the end of the process.
Given the true class (i.e. the true sense, obtained as explained in §3) and the assigned predicted class (obtained using the matching method presented in §4.1), every instance can be categorised as True/False Positive/Negative for any specific sense $s$, following the standard classification methodology. This way the standard binary classification measures can be applied at the level of a sense: precision, recall, F1-score. The micro-average and macro-average of these measures are calculated to represent the performance at the level of a target or across targets.
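As a sketch, these measures can be computed with scikit-learn's standard implementation once every instance carries its gold sense and its matched predicted sense (the labels below are toy values):

```python
from sklearn.metrics import precision_recall_fscore_support

# toy example: gold vs. matched predicted sense labels per instance
y_true = ["C0030131", "C0078944", "C0078944", "C0030131"]
y_pred = ["C0030131", "C0078944", "C0030131", "C0030131"]

for avg in ("macro", "micro"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=avg, zero_division=0)
    print(avg, round(p, 3), round(r, 3), round(f1, 3))
```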
# 4.2.2 Clustering Mean Absolute Error
The classification measures do not distinguish whether the system is confident in its prediction (e.g. a posterior probability of 0.99) or not (e.g. 0.51); this is why we also propose to use the mean absolute error (MAE). The intuition behind this measure is that a perfect system should predict probability one for the gold sense and zero for any other sense. Therefore, the further the predicted probability deviates from one, the higher the error. We use the mean absolute error to measure how close to one the posterior probability of the gold sense is on average. The mean absolute error is defined for every sense as in Eq. (5).
$$
\frac{1}{|D|} \sum_{d \in D} \left(1 - P(\hat{s_g} \mid d)\right) \tag{5}
$$
where $D$ represents a set of instances, $\hat{s_g}$ is the sense that matches the gold sense, and the posteriors are defined as in Eq. (3) and (4). Since the individual error value is unique for a given instance, this measure can be calculated for any set of instances, in particular at the level of a single sense, a target, or across the whole data. In contrast to the classification measures, which assign a categorical label to an instance, this measure takes into account the numerical variations of the probability values. However, at the level of a sense it does not capture any information about the false positive cases. As a consequence, the classification measures and the MAE are likely to show complementary aspects of performance.
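For illustration, Eq. (5) reduces to a one-liner over the posterior probabilities assigned to the matched gold sense (a sketch under the definitions above):

```python
def clustering_mae(gold_posteriors):
    """Mean absolute error over a set of instances; each entry is the
    posterior probability P(s_g | d) of the matched gold sense."""
    return sum(1.0 - p for p in gold_posteriors) / len(gold_posteriors)
```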
# 4.3 Based on the Estimated Parameters
# 4.3.1 Emergence Classification Measures
Generally, the task of emergence detection consists in predicting the year (or period of time) when a new sense emerges. As explained in §2.4, this task is performed by applying the emergence detection algorithm on the inferred $P(s|t)$ parameter. In theory the true answer is the emergence year, but in a classification setting it is reasonable to allow some margin of error. Thus a predicted emergence is counted as correct if it falls within the bounds of a 5-year window centered on the true emergence year. Based on this categorisation, the standard precision, recall and F1-score can be calculated across all targets.
# 4.3.2 Emergence Mean Absolute Error
The binary classification measures restrict the predicted answer to be either inside or outside a window, and thus do not take into account the distance between the gold and predicted emergence years. By contrast, a numerical error value can be calculated as follows:
$$
e = \left\{ \begin{array}{ll}
0 & \text{if } \neg g \wedge \neg p \\
M & \text{if } (\neg g \wedge p) \vee (g \wedge \neg p) \\
|y - \hat{y}| & \text{if } g \wedge p
\end{array} \right.
$$
where:
- $g$ (resp. $p$ ) is true if and only if the gold (resp. predicted) sense has emergence,
- $M$ is the maximum error defined as the number of years of data for a specific target,
- $y$ is the true year of emergence and $\hat{y}$ is the predicted year of emergence.
In order to compare error levels across different targets, a normalised variant is defined as $e_{norm} = \frac{e}{M}$. The MAE is defined over a set of senses $S$ as the mean of their $e_{norm}$ values.
$\frac{e}{M}$ . The MAE is defined over a set of senses $S$ as the mean of their $e_{norm}$ values.
The intuition is that the case where both the gold and the predicted senses have an emergence should always be assigned a lower error than when only one of them has one, hence the maximum error $M$ in the latter case. Since the targets do not all have the same number of years of data, the maximum individual error differs among targets; the normalised variant, where the individual value is divided by the total number of years, allows comparisons of the error level between senses and targets, as well as at the system level.
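A direct transcription of this error and its normalised variant, encoding "no emergence" as None (a sketch under the definitions above):

```python
def emergence_error(gold_year, pred_year, n_years):
    """Return (e, e_norm) for one sense; a year of None means the
    sense has no emergence, and n_years is the maximum error M."""
    if gold_year is None and pred_year is None:
        e = 0
    elif gold_year is None or pred_year is None:
        e = n_years                      # maximum error M
    else:
        e = abs(gold_year - pred_year)   # |y - y_hat|
    return e, e / n_years
```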
# 4.3.3 Time Series Distances
The predicted evolution across time of the sense probability $P(s|t)$ is an essential outcome of the DWSI task. We use distance measures in order to evaluate how far the predicted $P(s|t)$ is from the true probability across time. There are many options available for measuring the distance between two time series. We propose two of them:
- The linear Euclidean distance is a simple measure which assumes that the $i^{th}$ point in one sequence is aligned with the exact $i^{th}$ point in the other one.
- The non-linear Dynamic Time Warping (DTW) distance measure performs an alignment of the two sequences (Berndt and Clifford, 1994; Sardá-Espinosa, 2017). This allows a more flexible comparison of the dissimilarity with respect to the alignment of the two series across time.
The advantage of DTW over the Euclidean measure is that DTW is robust to time shifts, scaling and noise, and is not restricted to series of equal length. In our task, we compare both Euclidean and DTW results and test whether DTW finds local similarities between sequences which share some patterns but are not fully aligned.
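Both distances can be computed directly; the DTW sketch below is the textbook dynamic-programming formulation with an absolute-difference local cost, whereas the experiments rely on the dtwclust package (Sardá-Espinosa, 2017), whose implementation and options may differ.

```python
import numpy as np

def euclidean_distance(a, b):
    # assumes the two P(s|t) series cover the same years
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def dtw_distance(a, b):
    """Dynamic-programming DTW: D[i, j] holds the cost of the best
    alignment of a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```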
# 5 Results and Analysis
In this section, we evaluate the NEO and SCAN systems using the dataset presented in §3 and the evaluation methods defined in §4. This allows us to compare the two systems on the same grounds. Additionally, this rich annotated dataset allows us to provide an in-depth analysis, thus uncovering the strengths and weaknesses of the two systems.
The DWSI task is unsupervised, so the whole
data is used both to estimate the parameters and to evaluate the predictions. No parameter has been tuned at any point: the experiments are run using the systems provided by the original authors with their default parameters, except for the number of senses (the true number of senses is used for every target), a one-year time interval, and the size of the context window (10).<sup>16</sup>
# 5.1 Observations of Posterior Distribution
The graphs in Figure 1 show the frequency of the predicted probabilities that correspond to the matched gold senses, and the frequency of the highest predicted probabilities assigned to each instance. The predicted probabilities follow a U-shaped distribution, which means the systems tend to assign extreme probabilities (close to either zero or one) to the majority of the data. The graphs also show the overlap between the predicted gold sense probabilities and the highest predicted probabilities, which represents the instances where the true sense was predicted correctly. By contrast, the red area on the left half represents cases where the true sense is predicted with a low probability (false negatives), and the blue area which does not overlap represents instances where an incorrect sense is predicted (false positives). In comparison to NEO, SCAN tends to assign even more extreme probabilities. In particular, SCAN tends to make more serious errors: in more than 5 million cases, the predicted probability is 0 (or close to 0) for the gold sense instead of 1.
Table 2 compares the deciles of the error distribution between NEO and SCAN. For NEO, the error is below 0.1 (near-perfect predictions) for more than $30\%$ of the instances, while it is above 0.9 (totally incorrect predictions) for slightly less than $20\%$ of the instances. In contrast, SCAN correctly scores more than $40\%$ of the instances, while its incorrect predictions exceed $30\%$.
Overall, NEO performs better than SCAN according to the MAE: 0.425 vs. 0.444. This difference is significant (p-value 0.000024 for Wilcoxon signed rank test at the level of targets).
# 5.2 Influence of Data Size
It is often expected that performance improves with the amount of data provided. This is not verified in the data, which shows a slight negative correlation level (between -0.1 and -0.3) between data size and performance across targets in both systems.

Figure 1: Distribution of the probabilities predicted by NEO and SCAN systems: the red distribution represents the predicted probability of the gold sense for every instance in the data; the blue distribution represents the highest predicted probability for every instance.
Pearson correlation: NEO -0.48, SCAN -0.52
<table><tr><td>Bottom N %</td><td>decile (NEO)</td><td>decile (SCAN)</td></tr><tr><td>10%</td><td>0.009</td><td>0.0000002</td></tr><tr><td>20%</td><td>0.039</td><td>0.00003</td></tr><tr><td>30%</td><td>0.095</td><td>0.001</td></tr><tr><td>40%</td><td>0.189</td><td>0.016</td></tr><tr><td>50%</td><td>0.331</td><td>0.174</td></tr><tr><td>60%</td><td>0.518</td><td>0.774</td></tr><tr><td>70%</td><td>0.718</td><td>0.985</td></tr><tr><td>80%</td><td>0.880</td><td>0.999</td></tr><tr><td>90%</td><td>0.973</td><td>0.999</td></tr></table>
We investigate how the size of each sense (as opposed to the full target size) contributes to the performance of the model. In other words, we observe the difference between targets where the senses have a similar size and targets where there is a strong imbalance between the senses. For every target, the standard deviation of the sense size proportions is used as a measure of the imbalance across senses. Figure 2 shows the relationship between this standard deviation and the macro F1-score. There is a clear pattern where higher imbalance between senses is associated with lower performance in general, regardless of the model type.
A detailed analysis shows that SCAN outperforms NEO when the imbalance level between senses within a target is not large, while the two systems perform similarly otherwise. This effect can be observed in the global classification results in table 3.

Figure 2: Relation between gold sense imbalance and performance by target.
Table 2: Deciles for error values for the predicted senses (across all instances) based on the clustering mean absolute error evaluation measure for NEO and SCAN systems.
<table><tr><td rowspan="2">Perf.</td><td colspan="3">NEO</td><td colspan="3">SCAN</td></tr><tr><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>macro</td><td>0.548</td><td>0.569</td><td>0.558</td><td>0.562</td><td>0.591</td><td>0.577</td></tr><tr><td>micro</td><td>0.595</td><td>0.595</td><td>0.595</td><td>0.558</td><td>0.558</td><td>0.558</td></tr></table>
SCAN outperforms NEO at the level of macro results, whereas NEO performs better at the level of micro results. However, the Wilcoxon rank test shows that the superiority of SCAN at the level of macro F1-score by target is not significant (p-value: 0.354), whereas the superiority of NEO at the level of micro F1-score is (p-value: 1.167e-07). Given that macro scores are based on the average performance across senses independently of their size, this means that SCAN performs better than NEO on the minority class (i.e. sense), and conversely NEO shows better performance on the majority class. Table 4 confirms that the superiority of SCAN for the minority class is not significant, while the superiority of NEO for the majority class is.
Table 3: Global classification results for NEO and SCAN systems. P/R/F1: Precision/Recall/F1-score
<table><tr><td rowspan="2">Number of Senses</td><td rowspan="2">Sense rank</td><td colspan="2">Mean F1-score</td><td rowspan="2">Wilcoxon test p-value</td></tr><tr><td>NEO</td><td>SCAN</td></tr><tr><td>-</td><td>first</td><td>0.299</td><td>0.321</td><td>6.657119e-01</td></tr><tr><td>-</td><td>last</td><td>0.732</td><td>0.692</td><td>3.503092e-10</td></tr><tr><td>2</td><td>first</td><td>0.315</td><td>0.335</td><td>6.920240e-01</td></tr><tr><td>2</td><td>second</td><td>0.740</td><td>0.6995</td><td>1.310836e-09</td></tr><tr><td>3</td><td>first</td><td>0.100</td><td>0.143</td><td>1.000000e+00</td></tr><tr><td>3</td><td>second</td><td>0.253</td><td>0.390</td><td>1.220703e-02</td></tr><tr><td>3</td><td>third</td><td>0.629</td><td>0.597</td><td>2.333984e-01</td></tr></table>
Table 4: Comparison of the performance by senses, ranked by proportion within a target. The sense rank is organised by the number of senses; it starts from the smallest sense (in proportion; rank first) and increases to the largest (rank last). "-" means the ranking is based on the min and max senses across all the data. The Wilcoxon test is applied on the F1 scores of the senses in order to assess whether the distribution of F1 scores is significantly different between NEO and SCAN.

Figure 3: Relation between size of the gold and predicted senses for NEO (top) and SCAN (bottom).
<table><tr><td>System</td><td>Precision</td><td>Recall</td><td>F1-score</td></tr><tr><td>NEO</td><td>0.306</td><td>0.250</td><td>0.275</td></tr><tr><td>SCAN</td><td>0.126</td><td>0.090</td><td>0.105</td></tr></table>
Having confirmed that the imbalance between gold sense sizes has a strong impact on performance, we observe how the two systems behave with respect to the predicted size of the senses. It can be observed in Figure 3 that both systems split the data in favour of the senses with a low proportion, i.e. they tend to predict a larger size for small senses and conversely a smaller size for large senses.<sup>17</sup> This tendency is exacerbated for SCAN, which splits most senses equally regardless of their true size.
# 5.3 Evaluation of Emergence
Table 5 shows the global results after applying the emergence algorithm on the predictions of both systems. NEO performs much better than SCAN in predicting the emergence of a new sense, with an F1-score of 0.275 against 0.105 for SCAN.
Figure 4 shows the gold standard and the predicted emergence years for every sense which has an emergence in both NEO and SCAN. SCAN tends to predict earlier emergence years than the gold standard, while NEO tends in the opposite direction, with average differences of -17.318 and 0.697 years respectively across the senses.
Table 5: Results of NEO and SCAN regarding detecting the emergence of a new sense (5 year window).
<table><tr><td>System</td><td>Global MAE</td><td>Normalised Global MAE</td></tr><tr><td>NEO</td><td>17.076</td><td>0.295</td></tr><tr><td>SCAN</td><td>19.028</td><td>0.327</td></tr></table>

Figure 4: Gold and predicted emergence years for NEO and SCAN, ordered by gold emergence year.
This tendency is confirmed by the fact that $90\%$ of the difference error (predicted - gold) corresponds to early predictions for SCAN, while NEO has only $45\%$ of early predictions. The MAE results shown in table 6 are consistent with the classification results, showing a better performance by NEO. The emergence results of both systems are affected by data imbalance: for instance, both systems have a high number of FN cases when senses have a lower proportion of data $(<0.5)$. Similarly, the FP cases tend to correspond to senses which have a lower proportion.
# 5.4 Evaluation on $P(s|t)$
Table 7 shows that NEO has fewer errors by sense across years than SCAN according to the distance measures over $P(s|t)$. This is confirmed by the Wilcoxon test, which shows that the error distributions of the two systems are significantly different.
One would expect the distance errors to have an impact on emergence. By splitting the senses into two categories, the TP cases (where the emergence is predicted within 5 years of the true emergence, see §5.3) and the rest, one can observe that the mean errors are lower for the former and higher for the latter, as shown in table 8.
# 5.5 Comparing Evaluation Measures
The evaluation measures reflect different types of errors. The correlation values between the clustering-based classification and regression measures are -0.71 for NEO and -0.44 for SCAN.
Table 6: Global emergence MAE, based on individual error by sense.
<table><tr><td rowspan="2">Distance</td><td>NEO</td><td>SCAN</td><td rowspan="2">Wilcoxon p-value</td></tr><tr><td>Global mean</td><td>Global mean</td></tr><tr><td>DTW</td><td>0.182</td><td>0.222</td><td>2.0413e-15</td></tr><tr><td>Euclidean</td><td>0.124</td><td>0.142</td><td>5.3543e-06</td></tr></table>
Table 7: Mean distance errors across senses by DTW and Euclidean algorithms.
<table><tr><td></td><td>Predicted Emergence</td><td>DTW mean</td><td>Euclidean mean</td><td>Error mean</td></tr><tr><td rowspan="2">NEO</td><td>TP</td><td>0.078</td><td>0.0415</td><td>0.009</td></tr><tr><td>not TP</td><td>0.189</td><td>0.130</td><td>0.313</td></tr><tr><td rowspan="2">SCAN</td><td>TP</td><td>0.193</td><td>0.124</td><td>0.016</td></tr><tr><td>not TP</td><td>0.222</td><td>0.142</td><td>0.334</td></tr></table>
Table 8: Comparison between mean errors by predicted emergence status (error values normalised by the number of years). DTW and Euclidean distance are obtained by comparing the predicted vs. gold $P(s|t)$ , whereas the classification status (TP vs. not TP) and normalised error mean are calculated based on the emergence year by sense.
<table><tr><td></td><td>Distance Measure</td><td>Sense level F1-score</td><td>Target level macro F1-score</td></tr><tr><td rowspan="2">NEO</td><td>DTW</td><td>-0.270</td><td>-0.448</td></tr><tr><td>Euclidean</td><td>-0.230</td><td>-0.432</td></tr><tr><td rowspan="2">SCAN</td><td>DTW</td><td>-0.313</td><td>-0.419</td></tr><tr><td>Euclidean</td><td>-0.248</td><td>-0.374</td></tr></table>
Table 9: Correlation between distance measures and classification measures at the level of senses/targets.
This apparent discrepancy between the two evaluation measures is explained by several factors, some related to the definition of the measures and some due to the data characteristics. On the one hand, the MAE for a sense is calculated as the average error across the instances labeled with this particular true sense only. On the other hand, in the classification setting, all the instances of a target are taken into account for a specific sense, which implies that the instances of the other senses are also taken into account.
For any given year $t$, the probability of the parameter $P(s|t)$ is estimated from the proportion of a sense among the instances of this year. This means that the value of the parameter $P(s|t)$ is directly related to the posterior probability used for the evaluation at the level of the instances. Therefore one would expect a fairly strong correlation between the DTW and/or Euclidean distance based on the estimated parameter $P(s|t)$ and the evaluation scores based on the instances. However, the correlation values observed at the level of senses (e.g. with the F1-score) are weak, although they are stronger at the level of targets, as shown in table 9.
The low correlation level is primarily due to the fact that the majority of the targets have two senses which are complement of each other, thus the two $P(s|t)$ series are a mirror of each other (i.e. $P(s_{1}|t) = 1 - P(s_{2}|t)$ ), in turn causing the DTW and Euclidean distance values to be the same for both senses. On the contrary, the instance-based evaluation scores tend to be very different for the
two senses, especially in the case of strong size imbalance (see 5.2). The difference in correlation between the level of senses and the level of targets is likely due to the fact that the discrepancies in the evaluation between senses are balanced out at the level of targets.
# 6 Conclusion and Discussion
We have addressed the issue of evaluating DWSI: we evaluated two models, NEO and SCAN, directly on the task itself, independently of any related extrinsic task, with a large dataset collected from biomedical resources. We defined and tested various external evaluation measures. Overall, NEO performs significantly better in the tasks of detecting senses and the emergence of new senses, according to most of our evaluation measures.
The design differences between the models and their parameters could potentially have an effect on the amount of data they require, but it turns out that the global data size has no important effect on the accuracy of either system. Both systems are unable to predict the correct size of the clusters: they tend to split the data almost equally between senses irrespective of the true semantic sense represented by the context words, and this impacts the correct detection of the emergence. This issue also explains why the original studies tend to use a high number of senses in order to capture the true senses, even though this causes the clusters to be split and the appearance of "junk senses". We also find that NEO performs better with larger senses while SCAN tends to perform better with smaller senses. This opens the perspective of combining the advantages of the two systems. We acknowledge that the data is domain-specific, however the observed biases of the systems are likely to hold across domains.
# Acknowledgements
We would like to thank Dr. Martin Emms and Dr. Lea Frermann for sharing the code of their systems. We are also grateful to the anonymous reviewers for their valuable comments.
The first author is grateful to King Abdullah Scholarship Program from the Saudi Arabian Government for supporting this work.
The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.
# References
Eneko Agirre and Aitor Soroa. 2007. Semeval-2007 task 02: Evaluating word sense induction and discrimination systems. In Proceedings of the fourth international workshop on semantic evaluations (semeval-2007), pages 7-12.
Donald J Berndt and James Clifford. 1994. Using dynamic time warping to find patterns in time series. In KDD workshop, volume 10, pages 359-370. Seattle, WA.
David M Blei and John D Lafferty. 2006. Dynamic topic models. In Proceedings of the 23rd international conference on Machine learning, pages 113-120. ACM.
Olivier Bodenreider. 2004. The unified medical language system (UMLS): integrating biomedical terminology. *Nucleic acids research*, 32(suppl_1):D267-D270.
Paul Cook, Joy Han Lau, Diana McCarthy, and Timothy Baldwin. 2014. Novel word-sense identification. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1624-1635.
Martin Emms and Arun Kumar Jayapal. 2016. Dynamic generative model for diachronic sense emergence detection. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1362-1373.
Lea Frermann. 2017. Bayesian Models of Category Acquisition and Meaning Development. PhD thesis, University of Edinburgh.
Lea Frermann and Mirella Lapata. 2016. A bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics, 4:31-45.
Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the google books ngram corpus. In Proceedings of the GEMS 2011 workshop on geometrical models of natural language semantics, pages 67-71.
Arun Jayapal. 2017. Finding Sense Changes by Unsupervised Methods. PhD thesis, Trinity College Dublin.
Antonio J Jimeno-Yepes, Bridget T McInnes, and Alan R Aronson. 2011. Exploiting mesh indexing in medline to generate a data set for word sense disambiguation. BMC bioinformatics, 12(1):223.
Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. arXiv preprint arXiv:1806.03537.
Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 591-601. Association for Computational Linguistics.
Suresh Manandhar, Ioannis Klapaftis, Dmitriy Dligach, and Sameer Pradhan. 2010. SemEval-2010 task 14: Word sense induction & disambiguation. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 63–68, Uppsala, Sweden. Association for Computational Linguistics.
Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K Gray, Joseph P Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, et al. 2011. Quantitative analysis of culture using millions of digitized books. science, 331(6014):176-182.
Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. An automatic approach to identify word sense changes in text media across timescales. Natural Language Engineering, 21(5):773-798.
Octavian Popescu and Carlo Strapparava. 2015. Semeval 2015, task 7: Diachronic text evaluation. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 870-878.
Christian Rohrdantz, Annette Hautli, Thomas Mayer, Miriam Butt, Daniel A Keim, and Frans Plank. 2011. Towards tracking semantic change by visual analytics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 305-310. Association for Computational Linguistics.
Alexis Sardá-Espinosa. 2017. Comparing time-series clustering algorithms in r using the dtwclust package. R package vignette, 12:41.
Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. Semeval-2020 task 1: Unsupervised lexical semantic change detection. arXiv preprint arXiv:2007.11464.