Add Batch 525913fd-7400-46be-a7d2-b9139fa02eb9
This view is limited to 50 files because it contains too many changes. See raw diff
- abantiquoneuralprotolanguagereconstruction/8e354737-2481-4f6c-9530-f3e8fe40c614_content_list.json +3 -0
- abantiquoneuralprotolanguagereconstruction/8e354737-2481-4f6c-9530-f3e8fe40c614_model.json +3 -0
- abantiquoneuralprotolanguagereconstruction/8e354737-2481-4f6c-9530-f3e8fe40c614_origin.pdf +3 -0
- abantiquoneuralprotolanguagereconstruction/full.md +325 -0
- abantiquoneuralprotolanguagereconstruction/images.zip +3 -0
- abantiquoneuralprotolanguagereconstruction/layout.json +3 -0
- abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/2b61daf0-5426-470b-be35-b21e068f6df6_content_list.json +3 -0
- abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/2b61daf0-5426-470b-be35-b21e068f6df6_model.json +3 -0
- abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/2b61daf0-5426-470b-be35-b21e068f6df6_origin.pdf +3 -0
- abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/full.md +372 -0
- abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/images.zip +3 -0
- abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/layout.json +3 -0
- acomparativestudyonschemaguideddialoguestatetracking/8ed55747-ed10-49bb-8869-1cebe4ecebc6_content_list.json +3 -0
- acomparativestudyonschemaguideddialoguestatetracking/8ed55747-ed10-49bb-8869-1cebe4ecebc6_model.json +3 -0
- acomparativestudyonschemaguideddialoguestatetracking/8ed55747-ed10-49bb-8869-1cebe4ecebc6_origin.pdf +3 -0
- acomparativestudyonschemaguideddialoguestatetracking/full.md +449 -0
- acomparativestudyonschemaguideddialoguestatetracking/images.zip +3 -0
- acomparativestudyonschemaguideddialoguestatetracking/layout.json +3 -0
- acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/6e63067d-fcb8-49b2-b6b7-811ec886b9d7_content_list.json +3 -0
- acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/6e63067d-fcb8-49b2-b6b7-811ec886b9d7_model.json +3 -0
- acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/6e63067d-fcb8-49b2-b6b7-811ec886b9d7_origin.pdf +3 -0
- acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/full.md +317 -0
- acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/images.zip +3 -0
- acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/layout.json +3 -0
- actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/8c2190d2-e7f4-4542-9c71-875144996065_content_list.json +3 -0
- actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/8c2190d2-e7f4-4542-9c71-875144996065_model.json +3 -0
- actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/8c2190d2-e7f4-4542-9c71-875144996065_origin.pdf +3 -0
- actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/full.md +520 -0
- actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/images.zip +3 -0
- actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/layout.json +3 -0
- active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/a3588c8d-885e-4bb1-a6ad-11f288633471_content_list.json +3 -0
- active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/a3588c8d-885e-4bb1-a6ad-11f288633471_model.json +3 -0
- active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/a3588c8d-885e-4bb1-a6ad-11f288633471_origin.pdf +3 -0
- active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/full.md +382 -0
- active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/images.zip +3 -0
- active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/layout.json +3 -0
- adaptableandinterpretableneuralmemoryoversymbolicknowledge/16b2aa6a-d5d2-411a-a0cb-1a466d152946_content_list.json +3 -0
- adaptableandinterpretableneuralmemoryoversymbolicknowledge/16b2aa6a-d5d2-411a-a0cb-1a466d152946_model.json +3 -0
- adaptableandinterpretableneuralmemoryoversymbolicknowledge/16b2aa6a-d5d2-411a-a0cb-1a466d152946_origin.pdf +3 -0
- adaptableandinterpretableneuralmemoryoversymbolicknowledge/full.md +422 -0
- adaptableandinterpretableneuralmemoryoversymbolicknowledge/images.zip +3 -0
- adaptableandinterpretableneuralmemoryoversymbolicknowledge/layout.json +3 -0
- adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/1c9e01f1-5430-4ac4-b70c-b289752ebdb0_content_list.json +3 -0
- adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/1c9e01f1-5430-4ac4-b70c-b289752ebdb0_model.json +3 -0
- adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/1c9e01f1-5430-4ac4-b70c-b289752ebdb0_origin.pdf +3 -0
- adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/full.md +329 -0
- adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/images.zip +3 -0
- adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/layout.json +3 -0
- adaptingcoreferenceresolutionforprocessingviolentdeathnarratives/f03fd56f-b55d-41e6-b8a1-fc58f89bd508_content_list.json +3 -0
- adaptingcoreferenceresolutionforprocessingviolentdeathnarratives/f03fd56f-b55d-41e6-b8a1-fc58f89bd508_model.json +3 -0
abantiquoneuralprotolanguagereconstruction/8e354737-2481-4f6c-9530-f3e8fe40c614_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13e549153f97426f7c4174fc06eda1e4e12bdeed4972755637ae29ee13a4cdee
+size 89383
abantiquoneuralprotolanguagereconstruction/8e354737-2481-4f6c-9530-f3e8fe40c614_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a922eceaf84c2900bd6288806a75d1e4ffc35f384821096e0706f55dc2f6311
+size 105419
abantiquoneuralprotolanguagereconstruction/8e354737-2481-4f6c-9530-f3e8fe40c614_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9182316cf69ffd075395ff1e80d151c92b4711f1834b8a12978ce891a4df7a4d
+size 812087
abantiquoneuralprotolanguagereconstruction/full.md
ADDED
@@ -0,0 +1,325 @@

# Ab Antiquo: Neural Proto-language Reconstruction
Carlo Meloni*$^{1}$ Shauli Ravfogel*$^{2,3}$ Yoav Goldberg*$^{2,3}$

$^{1}$ Department of Comparative Language Science, University of Zurich
$^{2}$ Computer Science Department, Bar Ilan University
$^{3}$ Allen Institute for Artificial Intelligence

carlomeloni@mail.tau.ac.il, {shauli.ravfogel, yoav.goldberg}@gmail.com
# Abstract

Historical linguists have identified regularities in the process of historic sound change. The comparative method utilizes those regularities to reconstruct proto-words based on observed forms in daughter languages. Can this process be efficiently automated? We address the task of proto-word reconstruction, in which the model is exposed to cognates in contemporary daughter languages and has to predict the proto-word in the ancestor language. We provide a novel dataset for this task, encompassing over 8,000 comparative entries, and show that neural sequence models outperform conventional methods applied to this task so far. Error analysis reveals variability in the ability of neural models to capture different phonological changes, correlating with the complexity of the changes. Analysis of learned embeddings reveals that the models learn phonologically meaningful generalizations, corresponding to well-attested phonological shifts documented by historical linguistics.
# 1 Introduction

Historical linguists seek to identify and explain the various ways in which languages change through time. Research in historical linguistics has revealed that groups of languages (language families) can often be traced to a common, ancestral language, a "proto-language". Large-scale lexical comparison of words across different languages enables linguists to identify cognates: words sharing a common proto-word. Comparing cognates makes it possible to identify rules of phonetic historic change, and by back-tracing those rules one can identify the form of the proto-word, which is often not documented. That methodology is called the comparative method (Anttila, 1989), and is the main tool used to reconstruct the lexicon and phonology of extinct languages. Inferring the form of proto-words from existing cognates in daughter languages is possible because historical sound changes within a language family are not random. Rather, phonological change is characterized by regularities that are the result of constraints imposed by the human articulatory and cognitive faculties (Millar, 2013). For example, we can find such regular change—commonly called "systematic correspondence"—by looking at the evolution of the first phoneme of Latin's word for "sky":<sup>1</sup>
Figure 1: The evolution of the Latin word for "sky" in several Romance languages.

The Spanish word's first sound is $[\theta]$, while the Italian word begins with $[tʃ]$, the French word with [s], Romansh with [ts] and Sardinian with [k]. This pattern is systematic, and will be found throughout the languages. Working this way, historical linguists reconstruct words in the proto-language from existing cognates in the daughter languages, and determine how words in the proto-language may have sounded.
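As a toy illustration of the systematic correspondence described above, the first-phoneme pattern can be written down as a correspondence set and checked against other cognate sets. This is a sketch only: the language names, helper function and data values below are illustrative and are not taken from the paper's dataset.

```python
# Illustrative correspondence set for the first phoneme of the Latin word
# for "sky" in several Romance languages (values follow the text above).
correspondence = {
    "Latin": "k",
    "Spanish": "θ",
    "Italian": "tʃ",
    "French": "s",
    "Romansh": "ts",
    "Sardinian": "k",
}

def is_systematic(observed_sets, pattern):
    """Return True when every observed cognate set matches the same
    first-phoneme pattern (hypothetical helper, for illustration only)."""
    return all(
        all(s.get(lang) == ph for lang, ph in pattern.items() if lang in s)
        for s in observed_sets
    )
```

The comparative method looks for such recurring patterns across many cognate sets; a pattern that recurs systematically points back to a single proto-phoneme.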
To what extent can a machine-learning model learn to reconstruct proto-words from examples in this way? And what generalizations of phonetic change will it learn? We focus on the task of proto-word reconstruction: the model is trained on sets of cognates and their known proto-word, and is then tasked with predicting the proto-word for an unseen set of cognates. Our study concentrates on the Romance language family<sup>2</sup> and the model is trained to reconstruct the Latin origin. We show that a recurrent neural-network model can learn to perform this task well (outperforming previous attempts).<sup>3</sup>

More interesting than the raw performance numbers are the learned generalizations and error patterns. The Romance languages are widely studied (Ernst (2003); Ledgeway and Maiden (2016); Holtus et al. (1989), among others), and their phonological evolution from Latin is well mapped. The existence of this comprehensive knowledge allows exploring to what extent neural models internalize and capture the documented rules of language change, and where they deviate from them. We provide an extensive error analysis, relating error patterns to knowledge in historical linguistics. This is often not possible in common NLP tasks, such as parsing or semantic inference, in which the rules governing linguistic phenomena—or even the suitable framework to describe them—are still in dispute among linguists.
Contributions Inspection of existing datasets of cognates in Romance languages revealed inherent problems. We have therefore collected a new comprehensive dataset for performing the reconstruction task (§4). Besides the dataset, our main contribution is an extensive analysis of what is being captured by the models, both on orthographic and phonetic versions of the dataset (§6). We find that the error patterns are not random, and that they correlate with the relative opacity of the historic change. These patterns were divided into different categories, each one motivated by a sound phonological explanation. Moreover, in order to further evaluate the learning of rules of phonetic change, we evaluated models on a synthetic dataset (§6.3), showing that the model is able to correctly capture several phonological change rules. Finally, we analyze the learned inner representations of the model, and show that it learns phonologically meaningful properties of phonemes (§6.4) and attributes different importance to different daughter languages (§6.5).
# 2 Related Work

The related task of cognate detection has been extensively studied. In this task, a set of cognates should be extracted from word lists in different languages. Most effort in machine-learning approaches to this task has focused on distance-based methods, which quantify the distance (according to some metric), or the similarity, between a given candidate set of cognates. The similarity can be either static (e.g. Levenshtein distance) or learned. Once the metric is established, classification can be performed either by a hard decision (words below a certain threshold are considered cognates) or by learning a classifier over the distance measures and other features (Kondrak, 2001; Mann and Yarowsky, 2001; Inkpen et al., 2005; Ciobanu and Dinu, 2014a; List et al., 2016); Mulloni and Pekar (2006) evaluated an alternative approach, in which explicit rules of transformation are derived based on edit operations. See Rama et al. (2018) for a recent evaluation of the performance of several cognate detection algorithms.
Several studies have gone beyond the stage of cognate extraction, and used the resulting lists of cognates to reconstruct the lexicon of proto-languages. Most studies in this direction borrowed techniques from computational phylogeny, drawing a parallel between the hypothesized branching of (latent) proto-words into their (observed) current forms and the gradual change of genes during evolution. Bouchard-Côté et al. (2007) applied such a model to the development of the Romance languages, based on a dataset composed of aligned translations. Bouchard-Côté et al. (2009, 2013) used an extensive dataset of Austronesian languages and their reconstructed proto-languages, and built a parameterized graphical model of the probability of a phonetic change between a word and its ancestral form; the probability is branch-dependent, allowing for the learning of different trends of change across lineages. While achieving impressive performance, even without necessitating a cognate list as input, their model relies on a given phylogeny tree that accurately represents the development of the languages in question.
Wu and Yarowsky (2018) automatically constructed cognate datasets for several languages, including Romance languages, and used a character-level NMT system to complete missing entries (not necessarily the proto-form). Several works studied the induction of multilingual dictionaries from partial data in related languages. Wu et al. (2020) reconstruct cognates in Austronesian languages (where the proto-language is not attested). Lewis et al. (2020) employ a mixture-of-experts approach for lexical translation induction, combining neural and probabilistic methods, and Nishimura et al. (2020) translate from a multi-source input that contains partial translations into different languages, concatenated. Finally, Ciobanu and Dinu (2018) applied a CRF model with alignment to a dataset of Romance cognates, created from automatic alignment of translations (Ciobanu and Dinu, 2014b). The researchers also applied RNNs to the same dataset, but reported negative results.
# 3 Proto-word Reconstruction

Our proto-word reconstruction task is as follows: the training set is composed of pairs $(x_{i},y_{i})$, where each $x_{i} = c_{i}^{\ell_{1}},\ldots ,c_{i}^{\ell_{n}}$ is a set of cognate words, each tagged with a language $\ell_{j}$, and $y_{i}$ is the proto-word (Latin word) of that set. We consider an orthographic task, where the cognates and proto-words are spelled out as written. As orthography is often arbitrary and more conservative than spoken language, we also consider a phonetic task, in which the cognates and proto-words are represented by their phonetic transcriptions into IPA.
An example of a training instance $(x,y)$ for the orthographic task is:

$$
x = \text{l a p t e}^{\mathrm{RM}},\ \text{l a i t}^{\mathrm{FR}},\ \text{l a t t e}^{\mathrm{IT}},\ \text{l e c h e}^{\mathrm{SP}},\ \text{l e i t e}^{\mathrm{PT}}
$$

$$
y = \text{l a c t e m}
$$

and, for the phonetic task:

$$
x = \text{l a p t e}^{\mathrm{RM}},\ \text{l ε}^{\mathrm{FR}},\ \text{l a t t e}^{\mathrm{IT}},\ \text{l e tʃ e}^{\mathrm{SP}},\ \text{l e j t i}^{\mathrm{PT}}
$$

$$
y = \text{l a k t ε m}
$$

A cognate in one of the languages may be missing, in which case we represent it by a dash. Here, we are missing the Italian and Romanian cognates:

$$
x = -^{\mathrm{RM}},\ \text{t r a v a j}^{\mathrm{FR}},\ -^{\mathrm{IT}},\ \text{t r a b a x o}^{\mathrm{SP}},\ \text{t r e β a ʎ}^{\mathrm{PT}}
$$

$$
y = \text{t r i p a l ε m}
$$

At test time, we are given a set of cognates and are asked to predict their proto-word.
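The instance format above can be sketched in code: each cognate is a language-tagged character sequence, a missing cognate is a dash, and the target is the Latin character sequence. This is a minimal illustration, not the authors' code; `make_instance` and `LANGS` are hypothetical names.

```python
# Hypothetical encoding of one training instance from Section 3.
LANGS = ["RM", "FR", "IT", "SP", "PT"]  # Romanian, French, Italian, Spanish, Portuguese

def make_instance(cognates, latin):
    """cognates: dict mapping language code -> word (missing languages omitted).
    Returns (x, y): language-tagged character lists and the Latin target."""
    x = [(lang, list(cognates.get(lang) or "-")) for lang in LANGS]
    y = list(latin)
    return x, y

# The orthographic example from the text:
x, y = make_instance(
    {"RM": "lapte", "FR": "lait", "IT": "latte", "SP": "leche", "PT": "leite"},
    "lactem",
)

# The dash convention for a missing cognate (here Romanian is absent):
x_partial, _ = make_instance({"FR": "lait", "SP": "leche", "PT": "leite"}, "lactem")
```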
# 4 Comprehensive Romance Dataset

The different experiments described in this paper were performed on a large dataset of our creation, which contains cognates and their proto-words in both orthographic and phonetic (IPA) form. The dataset's departure point is Ciobanu and Dinu (2014b), which consists of 3,218 complete cognate sets in six different languages: French, Italian, Spanish, Portuguese, Romanian and Latin${}^{4}$. We augmented the dataset's items with a freely available resource, Wiktionary, whose data were manually checked against Diez and Donkin (1864) to ensure their etymological relatedness with the Latin source. The entries were transcribed into IPA using the transcription module of the eSpeak library<sup>5</sup>, which offers transcriptions for all languages in our dataset, including Latin. The final dataset contains 8,799 cognate sets (not all of them complete), which were randomly split into train, evaluation and test sets: 7,038 cognate sets $(80\%)$ were used for training, 703 $(8\%)$ for evaluation and 1,055 $(12\%)$ for testing. Overall, the dataset contains 41,563 distinct words, for a total of 83,126 words counting both the orthographic and the phonetic datasets. Vowel lengths were found to be difficult to recover (see Table 1); hence we created the following variations of the dataset: with and without vowel length (for both the orthographic and phonetic datasets), and without the tense-lax contrast (for the phonetic dataset); see §6 for further discussion.
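The random 80/8/12 split described above can be sketched as follows. This is a hypothetical helper (the paper does not publish its split script), so the function name and seed are assumptions; rounding explains why the resulting sizes may differ by a set or two from the counts reported in the text.

```python
import random

def split_dataset(cognate_sets, seed=0):
    """Shuffle and split cognate sets into roughly 80% train, 8% dev, 12% test."""
    items = list(cognate_sets)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(0.80 * n)
    n_dev = int(0.08 * n)
    return (items[:n_train],
            items[n_train:n_train + n_dev],
            items[n_train + n_dev:])

# With 8,799 cognate sets this yields splits close to the reported 7,038/703/1,055.
train, dev, test = split_dataset(range(8799))
```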
A detailed description of the dataset collection process is available in appendix §A.1. We make our additions to the dataset of Ciobanu and Dinu (2014b) publicly available<sup>6</sup>.
# 5 Experimental Setup

# 5.1 NMT-based Neural Model

Our proto-word reconstruction setup follows an encoder-decoder-with-attention architecture, similar to contemporary neural machine translation (NMT) systems (Bahdanau et al., 2015; Cho et al., 2014).

We use a standard character-based encoder-decoder architecture with attention (Bahdanau et al., 2015). Both encoder and decoder are GRU networks with 150 cells. The encoder reads the forms of the words in the daughter languages, and outputs a contextualized representation of each character. At each decoding step, the decoder attends to the encoder's representations via dot-product attention. The output of the attention is then fed into an MLP with 200 hidden units, which outputs the next Latin character to generate.
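The dot-product attention step can be illustrated with a small pure-Python sketch (toy dimensions; the paper's model uses 150-dimensional GRU states). This is not the authors' implementation, only a minimal version of the mechanism: score each encoder state against the decoder state, normalize with a softmax, and return the weighted sum as the context vector.

```python
import math

def dot_product_attention(decoder_state, encoder_states):
    """decoder_state: list[float]; encoder_states: list[list[float]].
    Returns (context_vector, attention_weights)."""
    # Dot-product scores between the decoder state and each encoder state.
    scores = [sum(d * h_i for d, h_i in zip(decoder_state, h))
              for h in encoder_states]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: attention-weighted sum of the encoder states.
    dim = len(encoder_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return context, weights

# Toy example: the decoder state is most similar to the first encoder state.
ctx, w = dot_product_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```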
Input representation Each character (a letter in the orthographic case, and a phoneme in the phonetic case) is represented by an embedding vector of size 100. While all Romance languages are orthographically similar, the same letters represent different sounds, and thus convey different kinds of information for the task of Latin reconstruction. A possible approach would encode each language's characters using a unique embedding table. We instead share the character embedding table across all languages (including Latin), but combine each character vector with a language-embedding vector. The final representation of a character $c$ in language $\ell$ is then $WE[c] + UE[\ell]$, where $E$ is a shared embedding matrix, $c$ is a character id, $\ell$ is a language id, and $W$ and $U$ are linear projection layers.

<table><tr><td rowspan="2"></td><td colspan="7">Edit Distance</td></tr><tr><td>0</td><td>≤1</td><td>≤2</td><td>≤3</td><td>≤4</td><td>Average</td><td>Avg, norm</td></tr><tr><td>Orthographic</td><td>64.1%</td><td>84.0%</td><td>92.7%</td><td>96.8%</td><td>98.5%</td><td>0.65</td><td>0.064</td></tr><tr><td>IPA</td><td>50.0%</td><td>73.9%</td><td>85.9%</td><td>93.3%</td><td>97.0%</td><td>1.022</td><td>0.100</td></tr><tr><td>Orthographic, added vowel lengths</td><td>42.5%</td><td>72.0%</td><td>86.3%</td><td>92.5%</td><td>96.7%</td><td>1.13</td><td>0.102</td></tr><tr><td>IPA, added vowel lengths</td><td>47.9%</td><td>62.0%</td><td>80.2%</td><td>87.1%</td><td>93.8%</td><td>1.331</td><td>0.119</td></tr><tr><td>IPA, no contrast</td><td>65.1%</td><td>80.0%</td><td>87.5%</td><td>93.6%</td><td>96.7%</td><td>0.797</td><td>0.077</td></tr></table>

Table 1: Distribution of edit distances between the reconstructed and original Latin forms, on the orthographic and transcribed datasets. An edit distance of 0 corresponds to perfect reconstruction. "Average" refers to the average edit distance, and "Avg, norm" to the normalized average edit distance.
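The input representation $WE[c] + UE[\ell]$ can be sketched with toy sizes (the paper uses 100-dimensional embeddings). This is an illustrative sketch, not the authors' code; one assumption made here is that character and language ids index into the same shared table $E$ at different offsets.

```python
import random

random.seed(0)
EMB, PROJ = 4, 4  # toy dimensions for illustration

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

chars = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz-")}
langs = {l: i for i, l in enumerate(["RM", "FR", "IT", "SP", "PT", "LA"])}

E = rand_matrix(len(chars) + len(langs), EMB)  # shared embedding table
W = rand_matrix(PROJ, EMB)                     # character projection
U = rand_matrix(PROJ, EMB)                     # language projection

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def represent(char, lang):
    """W·E[char] + U·E[lang]: the per-character, language-aware input vector."""
    ce = E[chars[char]]
    le = E[len(chars) + langs[lang]]
    return [a + b for a, b in zip(matvec(W, ce), matvec(U, le))]

# The same character gets different representations in different languages.
v = represent("l", "FR")
```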
# 5.2 Evaluation Metric

Our main quantitative metric for evaluation is the edit distance between the reconstructed word and the gold Latin word. We use the standard edit distance with equal weight of 1 for deletion, insertion and substitution. We report the test-set average edit distance and average normalized edit distance (divided by word length), as well as the percentage of instances with at most $k$ edit operations between the reconstruction and the gold, for $k = 0$ to 4.
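The metric above is the standard Levenshtein distance with unit costs, which can be implemented as follows. The normalization shown divides by the gold word's length, which is one reasonable reading of "divided by word length".

```python
def edit_distance(a, b):
    """Levenshtein distance with unit cost for deletion, insertion, substitution."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution (free if equal)
        prev = cur
    return prev[-1]

def normalized_edit_distance(pred, gold):
    return edit_distance(pred, gold) / len(gold)

# e.g. a reconstruction off by one substitution (an error discussed in §6.1):
d = edit_distance("pescarium", "piscarium")
```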
# 6 Results and Analysis

Table 1 summarizes our main quantitative results. "Orthographic, added vowel lengths" and "IPA, added vowel lengths" refer to variations of the datasets that include explicit marking of vowel length in Latin words, marked by $<:>$ after long vowels. The model's performance on the orthographic dataset demonstrates a substantial improvement over previously reported results. Our method achieved an average edit distance of 0.65, an average normalized edit distance of 0.064, and a $64.1\%$ complete reconstruction rate (edit distance of 0). These numbers compare favorably with the edit distance of 1.07, normalized edit distance of 0.13 and $50\%$ complete reconstruction reported by Ciobanu and Dinu (2018). We note, however, that as our method differs both in the training corpus and in the type of model we employ, it is not clear whether this improvement should be attributed to the quality of the data, to the model, or to both.
Performance on the phonetic dataset was lower than on the orthographic one: on the phonetic dataset the average edit distance was 1.022 and the average normalized edit distance 0.100, with a $50.0\%$ complete reconstruction rate.
This disparity can be explained at least partially by a peculiarity of the phonetic dataset: it implicitly encodes vowel length, which is neutralized in the orthographic dataset. The reason for this difference is that length contrast in Latin co-occurred with quality differences: short vowels tended to be more open than their long counterparts, a contrast also called "tense-lax" (Allen and Allen, 1989). This contrast is not present in Latin orthography, but it appears in its phonetic transcription. This results in a noticeable gap between the results on the orthographic dataset with and without vowel lengths (0.102 vs. 0.064 average normalized edit distance), while the differences between the phonetic IPA datasets with and without vowel lengths are much smaller. When the tense-lax contrast is manually neutralized<sup>8</sup>, the performance achieved is similar to that on the orthographic dataset (as can be seen from the performance on "IPA, no contrast", whose Latin entries do not contain a tense-lax contrast).

<sup>7</sup>When we train a smaller version of our model (a 75-dimensional GRU) on the original dataset of Ciobanu and Dinu (2014b), we achieve an average edit distance of 0.881, an average normalized edit distance of 0.103, and a complete reconstruction rate of $59.1\%$. Training a similar model on their dataset after cleaning resulted in an average edit distance of 0.612, an average normalized edit distance of 0.062 and a complete reconstruction rate of $68.8\%$.

<sup>8</sup>We achieved that by respectively changing the characters <ʊ>, <ɔ>, <ɪ>, <ε> to <u>, <o>, <i>, <e> in the Latin words.

<table><tr><td>Error type</td><td>Orthographic</td><td>Phonetic</td></tr><tr><td>High-mid</td><td>18%</td><td>8%</td></tr><tr><td>Deletion</td><td>14%</td><td>6%</td></tr><tr><td>Consonant</td><td>13%</td><td>15%</td></tr><tr><td>Cluster</td><td>12%</td><td>3%</td></tr><tr><td>Morphology</td><td>11%</td><td>10%</td></tr><tr><td>Vowel</td><td>7%</td><td>8%</td></tr><tr><td>Length</td><td>—</td><td>26%</td></tr><tr><td>Orthography</td><td>5%</td><td>—</td></tr><tr><td>Other</td><td>20%</td><td>24%</td></tr></table>

Table 2: Error type distribution, based on 650 orthographic and 650 phonological errors.
# 6.1 Error Patterns

The following subsections focus on the model's performance on the orthographic and phonetic datasets without explicit vowel-length marking. A thorough analysis of both datasets reveals that the model's errors are not arbitrary, but rather tend to correspond to one of a few well-defined linguistic phenomena characterizing the evolution of Latin into its daughter languages. From an analysis of about 1,300 errors, equally divided between the orthographic and the phonetic datasets, we find that $80\%$ of the model's errors on the orthographic dataset, and $75\%$ on the phonetic one, can be grouped into one of the following categories: high-mid vowel alternations, segment deletion, segment changes, cluster changes, morphological changes and other vowel changes. Additionally, one error category is unique to the phonetic dataset, tense-lax errors, and one is unique to the orthographic dataset, orthography errors. Table 2 summarizes the results, and Figure 2 visualizes the vowel error patterns on the phonetic dataset.

We briefly discuss each of these groups.
High-mid alternation. The largest share of errors on the orthographic dataset, $18\%$, can be attributed to confusion between high and mid-high vowels (correspondingly $<\mathrm{i}>$, $<\mathrm{u}>$ and $<\mathrm{e}>$, $<\mathrm{o}>$), as shown by the reconstruction $<\text{pescarium}>$ instead of the Latin $<\text{piscarium}>$ (alternation between $<\mathrm{e}>$ and $<\mathrm{i}>$). That error is much rarer in the phonetic dataset, accounting for only $8\%$ of all the errors. The reason for this error can be attributed to the origin of the mid vowels in the daughter languages: while Latin long vowels [iː], [eː], [oː] and [uː] always evolved into [i], [e], [o] and [u] in the daughter languages (with minor changes related to syllable structure), Latin short vowels—[ɪ], [ε], [ɔ] and [ʊ]—are not deterministically mapped onto corresponding vowels in the daughter languages: Latin [ɪ] and [ε] both usually became Romance [e] (with alternations related to syllable structure, such as diphthongization to [je]), while Latin [ɔ] and [ʊ] have different reflexes in the daughter languages, as [u], [o], [ɔ], [ɒ] or as diphthongs. Because of this complex evolution, which merges different Latin phonemes into the same one in the daughter languages, the model is unable to unequivocally predict the Latin vowel. Nonetheless, it seems that the tense-lax contrast present in the phonetic dataset eases the task of distinguishing the different phonemes, and enables the network to reconstruct their origin more often.
Segment deletion. Examples of these errors are the reconstruction of $\langle \text{aspargum} \rangle$ instead of Latin $\langle \text{asparagus} \rangle$ , and the reconstruction of [abitatem] instead of Latin [habilitatem]. During the evolution from Latin to the Romance languages, unstressed syllables tended to be dropped. This phenomenon was not systematic, and occurred in different ways among and within the languages. Such a process could affect either whole syllables (consonant + vowel) or only the vowel, creating new consonant clusters. Because of the erratic nature of this process, the network struggles with the exact reconstruction of segments eliminated in the daughter languages. A special kind of deletion is that of the consonant [h]. This consonant did not survive in any Romance language (although it may be represented orthographically), and hence the network often fails to reconstruct it.
Segment changes. This category encompasses errors in the reconstruction of consonants, such as voicing changes (reconstructing $<\text{faculdadem}>$ vs. Latin $<\text{facultatem}>$ ), assimilation ([wessare] vs. [weksare]) and gemination ([agregationem] vs. [aggregationem]). All these errors reflect processes that took place in all of the daughter languages and that obscure the original form of the proto-word.
Cluster changes. These are changes that occur with two contiguous consonants. Consider, for example, the reconstruction of [rətɪnɪm] instead of Latin [rəktɪnɪm], and of <sennorem> instead of Latin <seniorem>. The former is an
instance of cluster simplification, while the latter is an instance of cluster palatalization. In many of the daughter languages, clusters of two different sounds underwent simplification, either by the dropping of one of the sounds or by the assimilation of one to the other. Palatalization is the process by which certain sounds come to be pronounced closer to the palate, usually because of an adjacent front vowel. This change occurred in all Romance languages, even though its orthographic representation varies among them.
Morphological changes. Latin had a highly developed morphology, with several classes of special conjugations and irregular forms. The network struggles to correctly reconstruct irregular forms, as these forms were mostly lost in the daughter languages. An instance of such an irregular verb is <praeferre>, reconstructed as <praefere> by the network. Moreover, other special morphological classes, such as Latin neuters, tend to be reconstructed as more common forms. Another interesting class of errors is change of morphological category: some nouns have suffixes reminiscent of those of verbs, and hence are wrongly reconstructed as such. A separate case is that of Greek words: Latin contained several Greek loanwords that conserved their original morphology, different from the Latin one. Since these peculiarities were, for the most part, not retained in the daughter languages, the network reconstructs them with regular Latin suffixes. For example, the Greek [syntaksis] was reconstructed as [syntaksεm], with the regular Latin suffix.
Other vowel changes. Latin contained several diphthongs, among them [ai] and [oi]. These sounds did not survive in any of the daughter languages (although in some rare cases they may be represented in the orthography), and both changed into [e] in the different Romance languages. This led to reconstruction errors such as $\langle \text{eigrum} \rangle$ instead of Latin $\langle \text{aegrum} \rangle$ . Some changes also occurred with the vowel [a], which was occasionally reconstructed as a different vowel.
Greek orthography. Some Latin words of Greek origin retained orthographic conventions alien to Latin, such as the use of $<\mathrm{y}>$ , $<\mathrm{ph}>$ , $<\mathrm{th}>$ , $<\mathrm{rh}>$ , etc. These conventions were only partially retained in the daughter languages, which creates some inconsistencies in their reconstruction by the network.
Tense-lax alternation. This is the largest category of the network's errors on the phonetic dataset, accounting for up to $26\%$ of all errors. As noted previously, the tense-lax contrast reflects vowel length in Latin, which is not entirely predictable from the daughter languages; the network tends to confuse lax and tense vowels.

Figure 2: Phonological mistakes resulting from alternations between vowels, on the phonetic dataset (panels: tense-lax errors; backness errors). The numbers signify the number of errors, excluding singleton errors.
Figure 2 shows clearly that the network's errors are internally consistent and not random: all the vowel errors fit neatly into one of the aforementioned categories, while other possible errors do not occur.
Orthographic vs. Phonetic Importantly, the phonetic and orthographic tasks differ in their error distributions: while the network's output on the orthographic task displays many syllable changes (changes that alter the structure of the syllable, mostly changes in consonant clusters and deletion of segments), on the phonetic task the model tends to retain syllable structure but makes more segment-related errors (i.e., exchanging a specific vowel or consonant for another one). The IPA output also contained more idiosyncratic errors that could not be categorized into one of the main categories. Such errors tended to occur when the network had only one or two cognates from the daughter languages. Even though the orthographic output also exhibited poorer reconstructions in these cases, the IPA output was even more affected by such sparsely attested words, leading to more erratic reconstructions.
# 6.2 Learnt generalizations
This section will focus on the phonetic dataset. A closer inspection of the errors made by the model, and of those that do not occur in the data, can shed light on the processes of phonological change learnt by the model. We will first focus on the vowels. The Latin vowel [a] is quite resilient to
changes, and most of the daughter languages retain it unchanged (only in French and Romanian do some phonological changes occur, in certain phonological environments). Indeed, the network makes almost no mistakes in recovering it, apart from some isolated cases that derive from insufficient cognates in the daughter languages. The network also makes virtually no errors regarding the reconstruction of vowel backness; here, too, the few errors that do occur are caused by the paucity of cognates and by assimilation processes in the daughter languages that render the Latin source opaque (metaphony processes). All in all, the network correctly learns the phonological changes that Latin vowels underwent, and the main errors result from changes that cannot be fully reverted from the daughter languages.
The model learnt the mapping of consonants between Latin and its daughter languages well, and vowel reconstruction errors are considerably more prevalent. Focusing on one type of error, palatalization, shows that the network failed to reconstruct the original consonant in opaque contexts, that is, when phonological cues crucial for the correct reconstruction were lacking. Specifically, the network confused the consonants [t] and [k] in the Latin reconstruction, since they palatalize to the same segments in Spanish and French. Without the other daughter languages, it is impossible to correctly reconstruct the original Latin sound.
Finally, the network correctly generalized the behavior of nasals in Latin clusters. Latin nasals tended to assimilate to the place of articulation of the adjacent consonant, yielding clusters such as [ŋk], [mp] and [nt]. When the network deleted a consonant in a cluster containing a nasal, or changed a consonant adjacent to a nasal, the nasal consonant always changed to match the place of articulation of the following consonant. Hence, when deleting [k] in the cluster [ŋkt], the network reconstructs [nt]. Similarly, when changing [p] to [t] in the cluster [mp], the nasal consonant changes accordingly, yielding [nt].
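The place-assimilation pattern the network appears to have internalized can be stated as a simple rewrite rule: a nasal takes the place of articulation of the following consonant. A minimal sketch, with deliberately simplified place classes:

```python
# Homorganic nasal rule: a nasal assimilates to the place of articulation
# of the following consonant (simplified consonant inventory).
PLACE_TO_NASAL = {
    "p": "m", "b": "m", "m": "m",            # labial
    "t": "n", "d": "n", "s": "n", "n": "n",  # coronal
    "k": "ŋ", "g": "ŋ",                      # velar
}

def fix_nasals(segments):
    """Rewrite every nasal so that it matches the place of the next consonant."""
    out = list(segments)
    for i in range(len(out) - 1):
        if out[i] in ("m", "n", "ŋ"):
            out[i] = PLACE_TO_NASAL.get(out[i + 1], out[i])
    return out

# Deleting [k] from [ŋkt] leaves [ŋt], which the rule repairs to [nt]:
print(fix_nasals(["ŋ", "t"]))  # ['n', 't']
# Changing [p] to [t] after [m] likewise yields [nt]:
print(fix_nasals(["m", "t"]))  # ['n', 't']
```

The observation in the text is that the network's outputs always satisfy this constraint, even when the triggering consonant itself was deleted or changed erroneously.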
# 6.3 Evaluating Rules of Phonetic Change
To what extent did the model learn known rules of phonetic change?
The evolution of the Romance languages is well studied, and linguists have documented the set of phonological transformations that took place between Latin and its daughter languages. We
collected 33 of these phonological change rules and used them to create a "synthetic" test set of syllable examples, each focusing on a different phonological change. An example of a row in this dataset, corresponding to the rule of change of Latin [j] in word-initial position, is:
$$
\begin{array}{l} x = \text{ʒa}^{\mathrm{RM}},\ \text{ʒa}^{\mathrm{FR}},\ \text{dʒa}^{\mathrm{IT}},\ \text{xa}^{\mathrm{SP}},\ \text{ʒa}^{\mathrm{PT}} \\ y = \text{ja} \end{array}
$$
Since the model was trained on complete words, isolated syllables tended to be unnatural for the network, and the output often contained additional consonants (usually morphological endings). When evaluating the model output we focus on the specific phonemes involved in the phonological change, and we ignore additional phonological material.
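Concretely, this evaluation amounts to checking only the reflex of the phoneme under test, ignoring any trailing material the model appends; a sketch of such a focused check (the helper name is ours, not the paper's):

```python
def rule_correct(prediction: str, target_segments: str) -> bool:
    """Count a prediction as correct if the expected segment string for the
    rule under test appears in it, ignoring extra appended material
    (e.g. spurious morphological endings on isolated syllables)."""
    return target_segments in prediction

# The model outputs [jam] for the word-initial [j] rule; the target is [ja],
# so the extra ending [m] is ignored and the rule counts as learned.
print(rule_correct("jam", "ja"))   # True
print(rule_correct("dʒam", "ja"))  # False
```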
Results The complete list of synthetic examples and predictions is available in Table 3. The network correctly predicted 22 out of the 33 phonological rules (66.67% of the changes). The results are consistent with those of the main reconstruction experiment: in both experiments, the network correctly reconstructed phonemes retained with little or no change in all languages (e.g. [a] in different phonological environments). Another class of phonemes correctly reconstructed in both cases are those which changed in a predictable way in each of the daughter languages. Thus, [w] was correctly reconstructed since it predictably changed to [v] in all the daughter languages (apart from Spanish, which merged it with [b]). Phonemes that changed differently, but consistently, were also faithfully recovered: even though Latin [k] changed differently depending on the daughter language ([s] in French and Portuguese, [θ] in Spanish and [tʃ] in Italian and Romanian), it was reconstructed correctly because of the consistency of the change in each daughter language. The phonemes that were wrongly reconstructed tended to be those whose phonological change was "opaque". The "opaqueness" of their change can be ascribed to the fact that they were neutralized in the daughter languages, making it impossible to recover them without additional information. Relevant to this case are mostly vowels and diphthongs, such as Latin [e] and [ɪ], which both became [e] in all the daughter languages (with variants influenced by the phonological environment).
<table><tr><td>Latin phoneme</td><td>Romanian</td><td>French</td><td>Italian</td><td>Spanish</td><td>Portuguese</td><td>Latin</td><td>Latin - reconstruction</td><td>Correct</td></tr><tr><td>/e/ blocked syllable</td><td>pep</td><td>pep</td><td>pep</td><td>pep</td><td>pep</td><td>pep</td><td>pip</td><td>no</td></tr><tr><td>/o/ blocked syllable</td><td>pop</td><td>pup</td><td>pop</td><td>pop</td><td>pop</td><td>pop</td><td>pup</td><td>no</td></tr><tr><td>/ε/ blocked syllable</td><td>pjep</td><td>pεp</td><td>pεp</td><td>pjep</td><td>pεp</td><td>pεp</td><td>pep</td><td>no</td></tr><tr><td>/kt/ medially, before nasals</td><td>-</td><td>anta</td><td>anta</td><td>anta</td><td>anta</td><td>ankta</td><td>antam</td><td>no</td></tr><tr><td>/aɪ/</td><td>pe</td><td>pe</td><td>pe</td><td>pe</td><td>pe</td><td>paɪ</td><td>pɛm</td><td>no</td></tr><tr><td>/ɔɪ/</td><td>pe</td><td>pe</td><td>pe</td><td>pe</td><td>pe</td><td>pɔɪ</td><td>pɛm</td><td>no</td></tr><tr><td>/b/ intervocalic</td><td>aa</td><td>ava</td><td>ava</td><td>aβa</td><td>ava</td><td>aba</td><td>awam</td><td>no</td></tr><tr><td>/e/ free syllable</td><td>pe</td><td>pwa</td><td>pe</td><td>pe</td><td>pe</td><td>pe</td><td>pɛm</td><td>no</td></tr><tr><td>/o/ free syllable</td><td>po</td><td>pø</td><td>po</td><td>po</td><td>po</td><td>po</td><td>pʊm</td><td>no</td></tr><tr><td>/ɪ/ free syllable</td><td>pe</td><td>pwa</td><td>pe</td><td>pe</td><td>pe</td><td>pi</td><td>pɛm</td><td>no</td></tr><tr><td>/n/ before front vowels</td><td>ji</td><td>ni</td><td>ni</td><td>ni</td><td>ni</td><td>ni</td><td>ŋidɛm</td><td>no</td></tr><tr><td>/a/ before nasal</td><td>pin</td><td>pan</td><td>pan</td><td>pan</td><td>pan</td><td>pan</td><td>pan</td><td>yes</td></tr><tr><td>/a/ blocked syllable</td><td>pap</td><td>pap</td><td>pap</td><td>pap</td><td>pap</td><td>pap</td><td>pap</td><td>yes</td></tr><tr><td>/i/</td><td>pi</td><td>pi</td><td>pi</td><td>pi</td><td>pi</td><td>pi</td><td>pi</td><td>yes</td></tr><tr><td>/u/</td><td>pu</td><td>py</td><td>pu</td><td>pu</td><td>pu</td><td>pu</td><td>pu</td><td>yes</td></tr><tr><td>/ɪ/ blocked syllable</td><td>pep</td><td>pep</td><td>pep</td><td>pep</td><td>pep</td><td>piŋ</td><td>piŋ</td><td>yes</td></tr><tr><td>/ʊ/ blocked syllable</td><td>pup</td><td>pup</td><td>pop</td><td>pop</td><td>pop</td><td>pup</td><td>pup</td><td>yes</td></tr><tr><td>/ɔ/ blocked syllable</td><td>pop</td><td>pɔp</td><td>pɔp</td><td>pwep</td><td>pɔp</td><td>pɔp</td><td>pɔp</td><td>yes</td></tr><tr><td>/k/ before front vowels</td><td>tʃi</td><td>si</td><td>tʃi</td><td>θi</td><td>si</td><td>ki</td><td>ki</td><td>yes</td></tr><tr><td>/sk/ before front vowels</td><td>ʃti</td><td>si</td><td>ʃi</td><td>θi</td><td>ʃi</td><td>ski</td><td>ski</td><td>yes</td></tr><tr><td>/kt/ medially, elsewhere</td><td>apta</td><td>ata</td><td>atta</td><td>atʃa</td><td>ata</td><td>akta</td><td>aktam</td><td>yes</td></tr><tr><td>/au/</td><td>pau</td><td>pɔ</td><td>pɔ</td><td>po</td><td>po</td><td>paʊ</td><td>paʊm</td><td>yes</td></tr><tr><td>/pl/ word initial</td><td>pla</td><td>pla</td><td>pja</td><td>ʎa</td><td>ʃa</td><td>pla</td><td>plam</td><td>yes</td></tr><tr><td>/a/ free syllable</td><td>pa</td><td>pa</td><td>pa</td><td>pa</td><td>pa</td><td>pa</td><td>pam</td><td>yes</td></tr><tr><td>/ε/ free syllable</td><td>pje</td><td>pje</td><td>pje</td><td>pje</td><td>pε</td><td>pε</td><td>pɛm</td><td>yes</td></tr><tr><td>/w/</td><td>va</td><td>va</td><td>va</td><td>ba</td><td>va</td><td>wa</td><td>wam</td><td>yes</td></tr><tr><td>/b/ word initial</td><td>ba</td><td>ba</td><td>ba</td><td>ba</td><td>ba</td><td>ba</td><td>bam</td><td>yes</td></tr><tr><td>/j/ word initial</td><td>ʒa</td><td>ʒa</td><td>dʒa</td><td>xa</td><td>ʒa</td><td>ja</td><td>jam</td><td>yes</td></tr><tr><td>/f/ word initial</td><td>fa</td><td>fa</td><td>fa</td><td>a</td><td>fa</td><td>fa</td><td>fam</td><td>yes</td></tr><tr><td>/f/ elsewhere</td><td>afa</td><td>afa</td><td>afa</td><td>afa</td><td>afa</td><td>afa</td><td>affam</td><td>yes</td></tr><tr><td>/ʊ/ free syllable</td><td>pu</td><td>pɒ</td><td>pɒ</td><td>pɒ</td><td>pɒ</td><td>pu</td><td>pʊpɒm</td><td>yes</td></tr><tr><td>/ɔ/ free syllable</td><td>po</td><td>pɒ</td><td>pwɔ</td><td>pwe</td><td>pɔ</td><td>pɔ</td><td>pɒdɛm</td><td>yes</td></tr><tr><td>/l/ before front vowels</td><td>ji</td><td>ji</td><td>ʎi</td><td>xi</td><td>ʎi</td><td>li</td><td>gɪlʊm</td><td>yes</td></tr></table>
Table 3: The set of test phonemes used to evaluate the model's generalizations. Each row represents a distinct rule of phonetic change, which focuses on a single phoneme. Other consonants/vowels are added to simulate the phonological environment of the rule; they were chosen because they did not affect the evolution of the examined phonemes from Latin to the Romance languages. "Correct" signifies whether the network's predictions were correct.

Figure 3: Hierarchical clustering of French phoneme embeddings
# 6.4 Learnt phoneme representations
Does training on proto-word reconstruction implicitly encourage the model to acquire phonologically meaningful representations? We visualize the representations learned by the network on the phonetic task by performing hierarchical clustering on the character embedding vectors, using the sklearn (Pedregosa et al., 2011) implementation of the Ward variance minimization
algorithm (Ward Jr, 1963).
Here we briefly discuss the learned French phoneme representations (Figure 3). For all other languages, see appendix §5. As can be seen, the primary division the network performs is between vowels and consonants, displayed on two different branches of the tree. At lower levels, other phonologically motivated groupings are found: the network tends to place under the same node pairs of voiced and unvoiced consonants (such as [ʃ] and [ʒ], [d] and [t]), allophones ([æ] and [ø]) or phonemes of the same category (such as the glides [j] and [w]). To conclude, the results demonstrate the learning of a phonologically meaningful taxonomy of phonemes, without explicit supervision.
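The clustering behind Figure 3 can be reproduced with standard tooling: given the learned embedding matrix, Ward linkage builds the tree being visualized. The paper uses the sklearn implementation; the sketch below uses SciPy's equivalent Ward linkage, with random vectors standing in for the learned phoneme embeddings:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in for the learned phoneme embeddings: one row per phoneme.
rng = np.random.default_rng(0)
phonemes = ["p", "b", "t", "d", "a", "e", "i", "o"]
embeddings = rng.normal(size=(len(phonemes), 16))

# Ward variance-minimization linkage over the embedding vectors.
Z = linkage(embeddings, method="ward")

# Cutting the tree at the top level yields two clusters; with real
# embeddings, this is where the vowel/consonant split appears.
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(phonemes, labels)))
```

Passing `Z` to `scipy.cluster.hierarchy.dendrogram` would draw a tree of the kind shown in Figure 3.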

Figure 4: Position in output vs. most attended language (left) and output letter vs. most attended language (right), for the orthographic (upper) and phonetic (lower) tasks.
# 6.5 Attention analysis
Since different languages can diverge to a varying extent from their proto-language, we hypothesize that the 5 daughter languages we use in this work would be of different importance for the model. To test this hypothesis, we inspect the learned attention weights. We focus on the most attended input character at each time step (the character having the largest attention weight) and count the number of times each of the 5 input languages is the most attended language, as a function of the location in the output and of the identity of the Latin character produced in that time step. We normalize the count with respect to time step, letter frequency and language frequency in the corpus.
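The per-step statistic described here is an argmax over the attention weights followed by counting; a minimal sketch (array shapes and the position-to-language map are illustrative):

```python
import numpy as np

LANGS = ["RO", "FR", "IT", "ES", "PT"]

def most_attended_counts(attn, char_to_lang):
    """attn: (output_steps, input_positions) attention weights for one word.
    char_to_lang: maps each input position to the language it belongs to.
    Returns raw counts of how often each language holds the argmax."""
    counts = {lang: 0 for lang in LANGS}
    for step_weights in attn:
        counts[char_to_lang[int(np.argmax(step_weights))]] += 1
    return counts

# Toy example: 3 output steps over 4 input positions (2 Italian, 2 French).
attn = np.array([[0.1, 0.6, 0.2, 0.1],
                 [0.2, 0.1, 0.5, 0.2],
                 [0.4, 0.3, 0.2, 0.1]])
char_to_lang = {0: "IT", 1: "IT", 2: "FR", 3: "FR"}
print(most_attended_counts(attn, char_to_lang))
# {'RO': 0, 'FR': 1, 'IT': 2, 'ES': 0, 'PT': 0}
```

Aggregating such counts over the corpus, then normalizing by time step, letter frequency and language frequency as described above, yields the heat maps of Figure 4.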
Results The results for the phonetic and orthographic tasks are presented in Figure 4. In both cases, Italian is the most attended language. There are some differences between the settings, however. On the orthographic task, the network focuses noticeably more on French than on the phonetic task. This tendency can be attributed to the very conservative orthography of French, which masks the phonological innovations that occurred in the language. Indeed, the network focuses exclusively on French for the reconstruction of the characters $\langle \mathrm{h} \rangle$ and $\langle \mathrm{y} \rangle$ , which are consistently represented only in French orthography, having disappeared from the written form of the other Romance languages. The comparison with the attention on the phonetic dataset shows that there the network tends to actually ignore French, favoring other sources instead. Similarly, in the orthographic dataset, French is favored in the initial positions, a tendency that disappears in the phonetic dataset. Finally, an interesting trend in the phonetic dataset is a tendency to attend to Romanian at the initial positions and to Portuguese at later ones.
# 7 Conclusions
In this work, we introduced a new dataset for the task of proto-word reconstruction in the Romance language family, and used it to evaluate the ability of neural networks to capture the regularities of historical language change. We have shown that neural methods outperform previously suggested models for this task. Analysis of the linguistic generalizations the model acquires during training demonstrated that its mistakes are related to the complexity of the phonetic changes involved. A controlled experiment on a set of rules for phonetic alternations between Latin and its daughter languages demonstrated that the model internalizes some of the systematic processes Latin underwent during the evolution of the Romance languages. Visualizing the learned phoneme-embedding vectors revealed a hierarchical division of phonemes that reflects phonological realities, and inspection of attention patterns demonstrated that the model attributes different importance to different languages, in a position-dependent manner.
While the task examined in this paper is commonly called "proto-word reconstruction", in practice the task the model faces is considerably less challenging than the work of historical linguists, as the model is trained in a supervised setting. A future line of work we suggest is applying neural models to the end task of proto-word reconstruction without relying on cognate lists, in a way that would more naturally model the historical linguistic methodology.
# Acknowledgements
We thank Arya McCarthy for pointing out relevant references. This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).
# References
Ti Alkire and Carol Rosen. 2010. *Romance languages: A historical introduction*. Cambridge University Press.

W Sidney Allen and William Sidney Allen. 1989. *Vox Latina*. Cambridge University Press.

Raimo Anttila. 1989. *Historical and comparative linguistics*, volume 6. John Benjamins Publishing.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Alexandre Bouchard-Côté, Thomas L. Griffiths, and Dan Klein. 2009. Improved reconstruction of protolanguage word forms. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 65-73. Association for Computational Linguistics.

Alexandre Bouchard-Côté, David Hall, Thomas L. Griffiths, and Dan Klein. 2013. Automated reconstruction of ancient languages using probabilistic models of sound change. Proceedings of the National Academy of Sciences, 110(11):4224-4229.

Alexandre Bouchard-Côté, Percy Liang, Thomas L. Griffiths, and Dan Klein. 2007. A probabilistic approach to diachronic phonology. In EMNLP-CoNLL 2007, Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, June 28-30, 2007, Prague, Czech Republic, pages 887-896.

Peter Boyd-Bowman. 1980. *From Latin to Romance in sound charts*. Georgetown University Press.

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014, pages 103-111.

Alina Maria Ciobanu and Liviu P. Dinu. 2014a. Automatic detection of cognates using orthographic alignment. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 99-105.

Alina Maria Ciobanu and Liviu P. Dinu. 2014b. Building a dataset of multilingual cognates for the Romanian lexicon. In Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC, pages 1038-1043.

Alina Maria Ciobanu and Liviu P. Dinu. 2018. Ab initio: Automatic Latin proto-word reconstruction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1604-1614.

J. Halvor Clegg and Willis C. Fails. 2017. *Manual de fonética y fonología españolas*. Routledge.

Josette Rey Deabove and Alain Rey. 2000. *Le Nouveau Petit Robert: Dictionnaire alphabétique et analogique de la langue française*. Dictionnaires Le Robert.

Friedrich Christian Diez and T. C. Donkin. 1864. *An etymological dictionary of the Romance languages; chiefly from the German of F. Diez*. Williams and Norgate.

Gerhard Ernst. 2003. *Romanische Sprachgeschichte: Ein internationales Handbuch zur Geschichte der romanischen Sprachen und ihrer Erforschung / Manuel international sur l'histoire et l'étude linguistique des langues romanes*. Mouton de Gruyter.

Félix Gaffiot and Pierre Flobert. 1934. *Dictionnaire latin-français*. Hachette Paris.

Robert A. Hall. 1944. Italian phonemes and orthography. Italica, 21(2):72-82.

Martin Harris and Nigel Vincent. 2003. *The Romance languages*. Routledge.

Günter Holtus, Michael Metzeltin, and Christian Schmitt. 1989. *LRL*, volume 3. M. Niemeyer.

Diana Inkpen, Oana Frunza, and Grzegorz Kondrak. 2005. Automatic identification of cognates and false friends in French and English. In Proceedings of the International Conference Recent Advances in Natural Language Processing, volume 9, pages 251-257.

Grzegorz Kondrak. 2001. Identifying cognates by phonetic and semantic similarity. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, pages 1-8. Association for Computational Linguistics.

Adam Ledgeway and Martin Maiden. 2016. *The Oxford guide to the Romance languages*, volume 1. Oxford University Press.

Charlton T. Lewis and Charles Short. 1879. *A Latin dictionary*. Perseus Digital Library.

Dylan Lewis, Winston Wu, Arya D. McCarthy, and David Yarowsky. 2020. Neural transduction for multilingual lexical translation. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4373-4384. International Committee on Computational Linguistics.

Johann-Mattis List, Philippe Lopez, and Eric Baptiste. 2016. Using sequence similarity networks to identify partial cognates in multilingual wordlists. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 599-605.

Gideon S. Mann and David Yarowsky. 2001. Multipath translation lexicon induction via bridge languages. In Language Technologies 2001: The Second Meeting of the North American Chapter of the Association for Computational Linguistics, NAACL 2001, Pittsburgh, PA, USA, June 2-7, 2001.

Maria Helena Mateus and Ernesto d'Andrade. 2000. *The phonology of Portuguese*. OUP Oxford.

Robert McColl Millar. 2013. *Trask's historical linguistics*. Routledge.

Andrea Mulloni and Viktor Pekar. 2006. Automatic detection of orthographic cues for cognate recognition. In LREC, pages 2387-2390.

Yuta Nishimura, Katsuhito Sudoh, Graham Neubig, and Satoshi Nakamura. 2020. Multi-source neural machine translation with missing data. IEEE ACM Trans. Audio Speech Lang. Process., 28:569-580.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830.

Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard Jäger. 2018. Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 393-400.

Mika Sarlin. 2014. *Romanian grammar*. BoD-Books on Demand.

Joe H. Ward Jr. 1963. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58(301):236-244.

Winston Wu, Garrett Nicolai, and David Yarowsky. 2020. Multilingual dictionary based construction of core vocabulary. In Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 4211-4217. European Language Resources Association.

Winston Wu and David Yarowsky. 2018. Creating large-scale multilingual cognate tables. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018. European Language Resources Association (ELRA).
# A Appendix
# A.1 Dataset Creation
In order to perform the reconstruction task, we required a large dataset of cognates and their proto-words, in both orthographic and phonetic (IPA) form.
Despite growing interest in recent years, high-quality digital resources for the tasks of proto-word reconstruction and cognate detection are scarce. Our departure point is the dataset provided by Ciobanu and Dinu (2014b), which, to the best of our knowledge, is the most extensive dataset for proto-word reconstruction of a well-attested proto-language. The dataset contains 3,218 complete cognate sets in five Romance languages (Spanish, Italian, Portuguese, French, Romanian), together with their Latin etymological ancestor. Although a valuable resource, this dataset was constructed via an automatic method of cognate extraction, and a comparison with references on the development of the Romance languages (Boyd-Bowman, 1980; Alkire and Rosen, 2010) reveals some problems, such as false cognates, truncated forms, non-existent words and mismatches between the part of speech of the cognates and that of the ancestor. Another salient problem of the dataset concerned the grammatical case of Latin nouns: the Romance languages derived their words from the Latin accusative case (Harris and Vincent, 2003), while in the dataset Latin words were given in the nominative case, an inconsistency making the reconstruction inherently more challenging.
Lastly, as neural models often require large amounts of training data, we aimed to expand the dataset. We thus created a cleaned and extended dataset by Wiktionary scraping, followed by manual validation and cleaning.
Wiktionary scraping We augment the existing dataset with a freely available resource: Wiktionary. Wiktionary entries for Latin words usually contain inflection tables, and often list the descendants in Romance languages; these descendants are, by definition, cognates. We scraped all Latin entries from Wiktionary, and extracted the forms of the daughter languages (available in the "Descendants" section). This resulted in 5,598 additional comparative entries, for a total of 22,361 new individual words. In contrast to the previous dataset, the Wiktionary-derived cognates are not based on automatic alignment between translations, but rather on direct human annotation. On
the other hand, the Wiktionary-based entries are often incomplete, and include cognates in only a subset of the daughter languages.
Form normalization Using the Wiktionary-provided inflection tables, we decline the Latin nouns to the accusative case, and conjugate verbs to the infinitive form. We do this for both the Wiktionary-based entries and the ones in the original dataset. We selected a sample of around 100 Latin words to check the accuracy of the automatic inflection against Gaffiot and Flobert (1934), finding them all correct. Finally, Latin words in the Ciobanu and Dinu (2014b) dataset for which we did not find a Wiktionary entry were inflected manually by consulting Lewis and Short (1879) and Gaffiot and Flobert (1934).
Manual verification and cleaning After collecting the Wiktionary dataset, we manually went through all the Latin words contained in Ciobanu and Dinu (2014b), checking them against Lewis and Short (1879) and Gaffiot and Flobert (1934). Additionally, we went over some suspicious-looking words from the daughter languages and verified them against Diez and Donkin (1864) to ensure their etymological relatedness to the Latin source, fixing them if necessary. This sort of fix was not performed systematically, but we did fix or remove around 170 words.
Finally, we sampled 300 entries from the original Ciobanu and Dinu (2014b) dataset prior to cleaning and 300 words from our cleaned and unified version of the dataset, and manually verified them. We found 43 mistakes in the original dataset and only 4 in our version, indicating that, while still not perfect, our version is of substantially higher quality.
IPA transcription To obtain the phonetic transcriptions into IPA, we utilized the transcription module of the eSpeak library, which offers transcriptions for all languages in our dataset, including Latin. While a human transcription would be preferable, a manual evaluation of 200 of the resulting transcriptions against several sources (Allen and Allen, 1989; Hall, 1944; Deabove and Rey, 2000; Clegg and Fails, 2017; Mateus and d'Andrade, 2000; Sarlin, 2014) showed high accuracy: all 200 words were correct, except for minor systematic deviations which we fixed globally to better suit the transcriptions to phonological conventions. Specifically, we deleted the vowel symbols ⟨ʊ⟩ and ⟨ɪ⟩ in Italian and Romanian, which are alien to those languages, changed the sequence ⟨rr⟩ to ⟨r⟩ in Spanish, and regularized the Portuguese transcriptions, which showed some phonological traits of Brazilian Portuguese.
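As an illustration, global symbol fixes of this kind can be expressed as a handful of regular-expression rules. The rule table, function name, and language keys below are our own sketch, not the actual post-processing script used for the dataset.

```python
import re

# Hypothetical per-language post-processing rules mirroring the fixes
# described in the text (delete alien vowel symbols, collapse <rr>).
RULES = {
    "italian":  [(r"[ʊɪ]", "")],   # vowel symbols alien to Italian
    "romanian": [(r"[ʊɪ]", "")],   # same for Romanian
    "spanish":  [(r"rr", "r")],    # regularize the <rr> sequence
}

def postprocess_ipa(transcription: str, language: str) -> str:
    """Apply the language-specific regex fixes to a raw IPA string."""
    for pattern, repl in RULES.get(language, []):
        transcription = re.sub(pattern, repl, transcription)
    return transcription
```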
Final dataset The resulting dataset, used for all experiments in this work, contains 8,799 entries. The dataset was randomly split into train, evaluation and test sets, with 7,038 examples $(80\%)$ used for training, 703 $(8\%)$ for evaluation and 1,055 $(12\%)$ for testing.
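A minimal sketch of such a random split; the function and seed are our own, and only the 80%/8%/12% ratio comes from the text (the paper's exact counts are 7,038 / 703 / 1,055).

```python
import random

def split_dataset(entries, seed=0):
    """Randomly split entries into train/dev/test with an 80%/8%/12% ratio."""
    entries = list(entries)              # copy so the input is untouched
    random.Random(seed).shuffle(entries)
    n = len(entries)
    n_train = round(n * 0.80)
    n_dev = round(n * 0.08)
    return (entries[:n_train],
            entries[n_train:n_train + n_dev],
            entries[n_train + n_dev:])   # remainder (~12%) is the test set
```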
Overall, the dataset contains 41,563 distinct words across the different languages (for a total of 83,126 words counting both the orthographic and the phonetic datasets), with 7,384 Italian words, 7,183 Spanish words, 6,806 Portuguese words, 6,505 French words and 4,886 Romanian words. As vowel lengths were found to be difficult to recover, we created the following variations of the dataset: with and without vowel length (for both the orthographic and phonetic datasets), and without a contrast (for the phonetic dataset).
# A.2 Phoneme representations
Figure 5
In this appendix, we show the hierarchical clustering created for all the languages in our dataset. As can be noted from Figure 5, the results for the different languages exhibit representations similar to those found in the French clustering: the primary division in each language is between vowels and consonants. In Portuguese, Latin, Spanish and Romanian some consonants are grouped together with vowels. These consonants are restricted to nasals, liquids or glides. The inclusion of these consonants can be explained by their peculiar nature: all of them have a special phonological status, displaying similarities in their behavior to vowels. In all languages, phonologically related phonemes tend to be grouped under the same nodes. Among other patterns, glides are found either together with each other (as in French, Italian and Romanian) or with their vocalic counterparts (Latin, Spanish and Portuguese), consonants differentiated only by voicing are usually paired ([ʃ] and [ʒ]), front and back vowels form clusters, and allophones usually share the same node (Italian, French, Romanian, Spanish, Portuguese).
abantiquoneuralprotolanguagereconstruction/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95ccd7deb3f88721f10fa7463b245d207c895f58d5d041dbefe404cd23e34237
size 459316
abantiquoneuralprotolanguagereconstruction/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5d929b260093742d7be089b85a32fa21ecf7837ac4b88a7ffcdf00d1c70ffa6c
size 372559
abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/2b61daf0-5426-470b-be35-b21e068f6df6_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef24247acc52409c6e7a7902006b37fc05b813e53353541eec1d0329034a13a8
size 82702
abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/2b61daf0-5426-470b-be35-b21e068f6df6_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:728a382c006966c026e4ad639d2c8e75232a6f0e8861823e71d957fc3f569cde
size 99386
abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/2b61daf0-5426-470b-be35-b21e068f6df6_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:94440e6c5e6fc3831e4a5a088bf3e7eb3d0b7e751aa8e4ae3a0a73d8c1feab36
size 1149450
abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/full.md
ADDED
@@ -0,0 +1,372 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Abstract Meaning Representation Guided Graph Encoding and Decoding for Joint Information Extraction

Zixuan Zhang and Heng Ji

Computer Science Department

University of Illinois at Urbana-Champaign

{zixuan11, hengji}@illinois.edu

# Abstract

The tasks of Rich Semantic Parsing, such as Abstract Meaning Representation (AMR), share similar goals with Information Extraction (IE): converting natural language texts into structured semantic representations. To take advantage of this similarity, we propose a novel AMR-guided framework for joint information extraction to discover entities, relations, and events with the help of a pre-trained AMR parser. Our framework consists of two novel components: 1) an AMR-based semantic graph aggregator that lets the candidate entity and event trigger nodes collect neighborhood information from the AMR graph for passing messages among related knowledge elements; 2) an AMR-guided graph decoder that extracts knowledge elements in an order decided by the hierarchical structures in AMR. Experiments on multiple datasets have shown that the AMR graph encoder and decoder provide significant gains and our approach achieves new state-of-the-art performance on all IE subtasks<sup>1</sup>.

# 1 Introduction

Information extraction (IE) aims to extract structured knowledge as an information network (Li et al., 2014) from unstructured natural language texts, while semantic parsing attempts to construct a semantic graph that summarizes the meaning of the input text. Since both focus on extracting the main information from a sentence, the output information networks and semantic graphs have a lot in common in terms of node and edge semantics. In the example shown in Figure 1, many knowledge elements in the information network can be perfectly matched to nodes in the semantic graph with similar semantic meanings. Moreover, these two types of graphs may also be similar with regard to network topology. Specifically, the nodes that are neighbors or connected via a few hops in the semantic graph are also likely to be close to each other in the corresponding information network. In Figure 1 we can see that "Scott Peterson", which acts as a shared argument for the two event triggers "murdering" and "faces", is also directly linked to the two main predicates murder-01 and face-01 in the semantic graph. From a global perspective, an information network can be approximately considered a subgraph of the semantic parse, where the IE nodes are roughly a subset of the nodes in the semantic graph while maintaining similar inter-connections.

Figure 1: Comparison of the AMR graph generated by a pre-trained AMR parser and the information network from IE for the same sentence from ACE05: Scott Peterson now faces death penalty because of murdering his wife Laci and their unborn son at their house.

To further exploit such similarities for information extraction, we propose an intuitive and effective framework that utilizes information from semantic parsing to jointly extract an information network composed of entities, relations, event triggers and their arguments. We adopt Abstract Meaning Representation (AMR) (Banarescu et al., 2013), which contains rich semantic structures with fine-grained node and edge types, as our input semantic graphs. Compared with previous IE models, our proposed model mainly consists of the following two novel components.

AMR-Guided Graph Encoding. The AMR graph topology can directly inform the IE model of global inter-dependencies among knowledge elements, even if they are located far apart in the original sentence. Such a property makes it easier for the IE model to capture non-local long-distance connections for relation and event argument role labeling. We design a semantic graph aggregator based on Graph Attention Networks (GAT) (Velickovic et al., 2018) to let the candidate entity and event trigger nodes aggregate neighborhood information from the semantic graph for passing messages among related knowledge elements. The GAT architecture used in our model is specifically designed to allow interactions between node and edge features, making it possible to effectively leverage the rich edge types in AMR.

AMR-Conditioned Graph Decoding. A large number of nodes in these two types of graphs share similar meanings, which makes it possible to obtain a meaningful node alignment between information networks and semantic graphs. Such an alignment provides potential opportunities to design a more organized decoding procedure for a joint IE model. Instead of using sequential decoding as in previous models like OneIE (Lin et al., 2020), where the types of knowledge elements are determined in a left-to-right order according to their positions in the original sentence, we propose a new hierarchical decoding method. We use AMR parsing as a condition to decide the order of decoding knowledge elements, where the nodes and edges are determined in a tree-like order based on the semantic graph hierarchy.

Experiment results on multiple datasets show that our proposed model significantly outperforms the state of the art on all IE subtasks.

# 2 Problem Formulation

We focus on extracting entities, relations, event triggers and their arguments jointly from an input sentence to form an information network. Note that the AMR graphs in our model are not required to be ground-truth but are generated by pre-trained AMR parsers. Therefore, we do not incorporate additional information, and our problem settings are identical to typical joint information extraction approaches such as DyGIE++ (Wadden et al., 2019) and OneIE (Lin et al., 2020). Given an input sentence $S = \{w_{1}, w_{2}, \dots, w_{N}\}$, we formulate our problem of joint information extraction as follows.

Entity Extraction Entity extraction aims to identify word spans as entity mentions and classify them into pre-defined entity types. Given the set of entity types $E$, the entity extraction task is to output a collection $\mathcal{E}$ of entity mentions:

$$
\mathcal{E} = \left\{\varepsilon_i = (a_i, b_i, e_i) \mid a_i \leqslant b_i,\ e_i \in E\right\}
$$


where $a_{i},b_{i}\in \{1,2,\dots ,N\}$ denote the starting and ending indices of the extracted entity mentions, and $e_i$ represents the entity type in the type set $E$. For example, in Figure 1, the entity mention "Scott Peterson" is represented as $(0,1,\text{PER})$.

Relation Extraction The task of relation extraction is to assign a relation type to every possible ordered pair in the extracted entity mentions. Given the identified entity mentions $\mathcal{E}$ and pre-defined relation types $R$, the set of relations is extracted as

$$
\mathcal{R} = \left\{r_i = (\varepsilon_i, \varepsilon_j, l_{ij}^{r}) \mid l_{ij}^{r} \in R,\ \varepsilon_i, \varepsilon_j \in \mathcal{E}\right\}
$$

where $\varepsilon_{i}$ and $\varepsilon_{j}$ are entity mentions from $\mathcal{E}$ and $i,j\in \{1,2,\dots ,|\mathcal{E}|\}$. An example relation mention is ("their", "son", PER-SOC) in Figure 1.

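The tuple-based formulation above can be mirrored directly in code; the container names below are illustrative, not part of the paper's implementation.

```python
from typing import NamedTuple

# Toy containers mirroring the formulation: entities are (a, b, e) span-type
# triples, and relations label ordered pairs of entities by index.
class Entity(NamedTuple):
    a: int      # start token index
    b: int      # end token index
    e: str      # entity type

class Relation(NamedTuple):
    head: int   # index into the entity list
    tail: int
    label: str

# "Scott Peterson" from Figure 1 as an entity mention:
scott = Entity(0, 1, "PER")
```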

Event Extraction The task of event extraction includes extracting event triggers and their arguments. Event trigger extraction is to identify the words or phrases that most clearly indicate the occurrence of a certain type of event from an event type set $T$, which can be formulated as:

$$
\mathcal{T} = \left\{\tau_i = (p_i, q_i, t_i) \mid p_i \leqslant q_i,\ t_i \in T\right\}
$$

where $p_i, q_i \in \{1, 2, \dots, N\}$ denote the starting and ending indices of the extracted event mentions, and $t_i$ represents an event type in $T$. Given the pre-defined set of event argument roles $A$, the task of event argument extraction is to assign each trigger and entity pair an argument role label indicating whether the entity mention acts as a certain role of the event, which is formulated as extracting an argument set $\mathcal{A}$:

$$
\mathcal{A} = \left\{\alpha_i = (\tau_i, \varepsilon_j, l_{ij}^{a}) \mid l_{ij}^{a} \in A,\ \tau_i \in \mathcal{T},\ \varepsilon_j \in \mathcal{E}\right\}
$$

where $\tau_{i}$ and $\varepsilon_{j}$ are previously extracted event and entity mentions respectively, and $l_{ij}^{a}$ denotes the event argument role label.


Information Network Construction All of these extracted knowledge elements form an information network $G = (V, E)$ (an example is shown in Figure 1). Each node $v_{i} \in V$ is an entity mention or event trigger, and each edge $e_{i} \in E$ indicates a relation or event argument role. Thus our problem can be formulated as generating an information network $G$ given an input sentence $S$.

# 3 Our Approach

Given an input sentence $S$, we first use a pre-trained transformer-based AMR parser (Fernandez Astudillo et al., 2020) to obtain the AMR graph for $S$. We then use RoBERTa (Liu et al., 2019) to encode each sentence and identify entity mentions and event triggers as candidate nodes. After that, we map each candidate node to AMR nodes and perform message passing with a GAT-based semantic graph aggregator to capture global inter-dependencies between candidate nodes. All the candidate nodes and their pairwise edges are then passed through task-specific feed-forward neural networks to calculate score vectors. During decoding, we use the hierarchical structure of each AMR graph as a condition to decide the order of beam search and find the candidate graph with the highest global score.

# 3.1 AMR Parsing

We employ a transformer-based AMR parser (Fernandez Astudillo et al., 2020) pre-trained on AMR 3.0 annotations$^2$ to generate an AMR graph $G^{a} = (V^{a},E^{a})$ with an alignment between AMR nodes and word spans in an input sentence $S$. Each node $v_{i}^{a} = (m_{i}^{a},n_{i}^{a}) \in V^{a}$ represents an AMR concept or predicate, and we use $m_{i}^{a}$ and $n_{i}^{a}$ to denote the starting and ending indices of such a node in the original sentence. For AMR edges, we use $e_{i,j}^{a}$ to denote the specific relation type between nodes $v_{i}^{a}$ and $v_{j}^{a}$ in AMR annotations.

Embeddings for AMR Relation Clusters To reduce the risk of over-fitting on hundreds of fine-grained AMR edge types, we only consider the edge types that are most relevant to IE tasks, and manually define $M = 12$ clusters of AMR edge types as shown in Table 1. Note that each $ARGx$ relation is considered an individual cluster since each $ARGx$ indicates a distinct argument role. For each edge type cluster, we randomly initialize a $d_{E}$-dimensional embedding and obtain an embedding matrix $\pmb{E} \in \mathbb{R}^{M \times d_{E}}$, which is optimized during the training process.


<table><tr><td>Categories</td><td>AMR relation types</td></tr><tr><td>Spatial</td><td>location, destination, path</td></tr><tr><td>Temporal</td><td>year, time, duration, decade, weekday</td></tr><tr><td>Means</td><td>instrument, manner, topic, medium</td></tr><tr><td>Modifiers</td><td>mod, poss</td></tr><tr><td>Operators</td><td>op-X</td></tr><tr><td>Prepositions</td><td>prep-X</td></tr><tr><td>Core Roles</td><td>ARG0, ARG1, ARG2, ARG3, ARG4</td></tr><tr><td>Others</td><td>Other AMR relation types.</td></tr></table>

Table 1: Manually defined AMR relation clusters for IE, where each $ARGx$ is treated as an individual cluster.
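The cluster lookup implied by Table 1 can be sketched as follows; the dictionary layout and function name are our own, and any type not listed falls into the Others cluster.

```python
# Edge-type -> cluster lookup mirroring Table 1 (our own sketch).
CLUSTERS = {
    "Spatial":   {"location", "destination", "path"},
    "Temporal":  {"year", "time", "duration", "decade", "weekday"},
    "Means":     {"instrument", "manner", "topic", "medium"},
    "Modifiers": {"mod", "poss"},
}

def edge_cluster(edge_type: str) -> str:
    """Map a fine-grained AMR relation type to one of the M = 12 clusters."""
    if edge_type.startswith("op"):
        return "Operators"
    if edge_type.startswith("prep-"):
        return "Prepositions"
    if edge_type.startswith("ARG"):
        return edge_type          # each ARGx is its own cluster
    for name, members in CLUSTERS.items():
        if edge_type in members:
            return name
    return "Others"
```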
# 3.2 Entity and Event Trigger Identification

We first identify the entity mentions and event triggers as candidate nodes from an input sentence. Similar to (Lin et al., 2020), we adopt feed-forward neural networks constrained by conditional random fields (CRFs) to identify the word spans for entity mentions and event triggers.

Contextual Encoder Given an input sentence $S = \{w_{1}, w_{2}, \dots, w_{N}\}$ of length $N$, we first calculate the contextual word representation $x_{i}$ for each word $w_{i}$ using a pre-trained RoBERTa encoder (Liu et al., 2019). If a word is split into multiple pieces by the RoBERTa tokenizer, we take the average of the representation vectors of all word pieces as the final word representation.

CRF-based Sequence Tagging After obtaining the contextual word representations, we use a feed-forward neural network $FFN$ to compute a score vector $\hat{y}_i = FFN(\boldsymbol{x}_i)$ for each word, where each element in $\hat{y}_i$ represents the score for a certain tag in the tag set<sup>3</sup>. The overall score for a tag path $\hat{z} = \{\hat{z}_1, \hat{z}_2, \dots, \hat{z}_N\}$ is calculated by

$$
s(\hat{\pmb{z}}) = \sum_{i=1}^{N} \hat{y}_{i,\hat{z}_i} + \sum_{i=1}^{N+1} P_{\hat{z}_{i-1},\hat{z}_i},
$$


where $\hat{y}_{i,\hat{z}_i}$ is the $\hat{z}_i$-th element of the score vector $\hat{\pmb{y}}_i$, and $P_{\hat{z}_{i-1},\hat{z}_i}$ denotes the transition score from tag $\hat{z}_{i-1}$ to tag $\hat{z}_i$, taken from a learnable transition matrix $\pmb{P}$. Similar to (Chiu and Nichols, 2016), the training objective for node identification is to maximize the log-likelihood $\mathcal{L}^I$ of the gold tag path $z$:

$$
\mathcal{L}^{I} = s(\boldsymbol{z}) - \log \sum_{\hat{\boldsymbol{z}} \in Z} e^{s(\hat{\boldsymbol{z}})} \tag{1}
$$

We use separate CRF-based taggers for entity and event trigger extraction. Note that we do not use the specific node types predicted by the CRF taggers as the final output classification results for entities and triggers, but only keep the identified entity and trigger spans. The final types of entities and triggers are jointly decided with relation and argument extraction in the subsequent decoding step. Specifically, we will obtain the collections of entity spans $\{(a_i,b_i)\}_{i = 1}^{|\mathcal{E}|}$ and trigger spans $\{(p_i,q_i)\}_{i = 1}^{|T|}$ during this step, where $a_{i},b_{i},p_{i},q_{i}$ denote the starting and ending indices of the word spans.
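The path score $s(\hat{z})$ from the CRF tagger can be illustrated with a toy implementation; note that this sketch omits the boundary transitions (the $i = 0$ and $i = N+1$ terms with the implicit start/end tags) for brevity, and the argument names are our own.

```python
def path_score(emissions, transitions, path):
    """Toy CRF path score: sum of per-position emission scores plus
    transition scores between consecutive tags.

    emissions[i][t]   -- score of tag t at position i (the vector ŷ_i)
    transitions[a][b] -- transition score P[a, b]
    path              -- list of tag indices ẑ_1 ... ẑ_N
    """
    score = sum(emissions[i][t] for i, t in enumerate(path))
    score += sum(transitions[path[i - 1]][path[i]] for i in range(1, len(path)))
    return score
```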
# 3.3 Semantic Graph Aggregator

To make the best use of the shared semantic and topological features from the AMR parse of the input sentence, we design a semantic graph aggregator, which enables the candidate entity nodes and event nodes to aggregate information from their neighbors based on the AMR topology.

Initial Node Representation Each entity node, trigger node or AMR node is initialized with a vector representation $h_i^0$ by averaging the word embeddings of all the words in its span. For example, given an entity node $(a_i, b_i)$, its representation vector is calculated by

$$
\pmb{h}_i^0 = \frac{1}{b_i - a_i + 1} \sum_{k=a_i}^{b_i} \pmb{x}_k
$$

where $\pmb{x}_k$ is the word representation from the RoBERTa encoder.

Node Alignment We first try to align each identified entity node and trigger node to one of the AMR nodes before conducting message passing. Take an entity node with its span $(a_{i},b_{i})$ as an example. Given the set of AMR nodes $\{(m_i^a,n_i^a)\}_{i = 1}^{|V^a |}$ , we consider $b_{i}$ as the index of the head word of the entity node, and aim to find $(m_{i*}^a,n_{i*}^a)$ that covers $b_{i}$ as the matched AMR node for $(a_{i},b_{i})$ , that is, such a node satisfies $m_{i*}^a\leqslant b_i\leqslant n_{i*}^a$ . If no nodes can be matched to $(a_{i},b_{i})$ in this way, we turn to search for the nearest AMR node:

$$
i^{*} = \arg\min_{k} \left(|b_i - m_k^a| + |b_i - n_k^a|\right),
$$

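The two-stage alignment rule above (exact span cover first, nearest-node fallback second) can be sketched as follows; the function and argument names are ours.

```python
def align_to_amr(b_i, amr_spans):
    """Map a head-word index b_i to the index of its AMR node span.

    First look for a span (m, n) with m <= b_i <= n; otherwise fall back
    to the nearest node by |b_i - m| + |b_i - n|.
    """
    for k, (m, n) in enumerate(amr_spans):
        if m <= b_i <= n:
            return k
    # No covering span: pick the nearest AMR node.
    return min(range(len(amr_spans)),
               key=lambda k: abs(b_i - amr_spans[k][0]) + abs(b_i - amr_spans[k][1]))
```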

Figure 2: An illustration of node alignment and message passing. For unmatched entity or trigger nodes, such as $h_5^l$, we add a new node with feature $h_5^l$ and link this new node to the nearest node of $h_5^l$.

where $(m_{i^*}^a, n_{i^*}^a)$ is the AMR node with the shortest distance to the entity node $(a_i, b_i)$. We also conduct alignment for event trigger nodes in the same way.

Heterogeneous Graph Construction After obtaining the matched or nearest AMR node for each identified entity mention and event trigger, we construct a heterogeneous graph with initialized node and edge features as follows. Given an AMR graph $G^{a} = (V^{a},E^{a})$, we consider the following three cases to initialize feature vectors for each node $v_{i}^{a}$:

- Node $v_{i}^{a}$ has been matched to an entity mention or event trigger. We take the representation vector of the matched node (instead of $v_{i}^{a}$) as the initialized feature vector.
- Node $v_{i}^{a}$ is not matched to any identified nodes but labeled as the nearest node for an entity mention or event trigger, e.g., $(a_{i}, b_{i})$. We add a new node in the AMR topology with the representation vector of $(a_{i}, b_{i})$, and link this new node to $v_{i}^{a}$ with the edge type Others defined in Table 1.
- Node $v_{i}^{a}$ is neither matched to nor the nearest node for any entity (trigger). We use its own node representation as the initialized feature vector.

For each edge $e_{i,j}^{a}$, we first map it to an AMR relation cluster according to Table 1 and then look up its representation $e_{i,j}$ in the embedding matrix $\pmb{E}$. We use $h_{i}^{0}$ to represent the initial feature of each node. An illustration of this step is shown in Figure 2.

Attention Based Message Passing Inspired by Graph Attention Networks (GATs) (Velickovic et al., 2018), we design an $L$-layer attention based message passing mechanism on the AMR graph topology to enable the entity and trigger nodes to aggregate neighbor information. For node $i$ in layer $l$, we first calculate the attention score for each neighbor $j \in \mathcal{N}_i$ based on the node features $h_i^l, h_j^l$ and edge features $e_{i,j}$:

$$
\alpha_{i,j}^{l} = \frac{\exp\left(\sigma\left(f^{l}[\mathbf{W}\boldsymbol{h}_{i}^{l} : \mathbf{W}_{e}\boldsymbol{e}_{i,j} : \mathbf{W}\boldsymbol{h}_{j}^{l}]\right)\right)}{\sum_{k \in \mathcal{N}_{i}} \exp\left(\sigma\left(f^{l}[\mathbf{W}\boldsymbol{h}_{i}^{l} : \mathbf{W}_{e}\boldsymbol{e}_{i,k} : \mathbf{W}\boldsymbol{h}_{k}^{l}]\right)\right)}
$$


where $\mathbf{W}$, $\mathbf{W}_e$ are trainable parameters, and $f^l$ and $\sigma(\cdot)$ are a single-layer feed-forward neural network and the LeakyReLU activation function respectively. Then the neighborhood information $h^*$ is calculated as the weighted sum of neighbor features:

$$
\boldsymbol{h}^{*} = \sum_{j \in \mathcal{N}_{i}} \alpha_{i,j}^{l} \boldsymbol{h}_{j}^{l}
$$


The updated node feature is calculated as a combination of the original node feature and its neighborhood information, where $\gamma$ controls the level of message passing between neighbors, and $\mathbf{W}^*$ denotes a trainable linear transformation:

$$
\boldsymbol{h}^{l+1} = \boldsymbol{h}^{l} + \gamma \cdot \mathbf{W}^{*} \boldsymbol{h}^{*} \tag{2}
$$

We select the entity and trigger nodes from the graph and take their feature vectors $\boldsymbol{h}_i^L$ from the final layer as the representation vectors that have aggregated information from the AMR graph (as Fig. 2 illustrates). We use $\boldsymbol{h}_i^e$ and $\boldsymbol{h}_i^t$ to denote the features of each entity and trigger respectively.
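A toy scalar version of one message-passing layer (Eq. 2) may clarify the mechanics; all weight matrices are treated as the identity and $f^l$ as a plain sum here, which is purely illustrative — the actual model learns $\mathbf{W}$, $\mathbf{W}_e$, $\mathbf{W}^*$ and operates on vector features.

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

def gat_update(h, edges, edge_feat, gamma=0.5):
    """One toy message-passing step: attention from
    LeakyReLU(h_i + e_ij + h_j) softmaxed over i's neighbors, then the
    residual update h_i <- h_i + gamma * h*_i.  Features are scalars.

    h         -- list of scalar node features
    edges     -- list of directed (i, j) pairs, j a neighbor of i
    edge_feat -- dict mapping (i, j) to a scalar edge feature
    """
    new_h = list(h)
    for i in range(len(h)):
        nbrs = [j for (a, j) in edges if a == i]
        if not nbrs:
            continue  # no neighbors: feature unchanged
        scores = [math.exp(leaky_relu(h[i] + edge_feat[(i, j)] + h[j]))
                  for j in nbrs]
        z = sum(scores)
        h_star = sum((s / z) * h[j] for s, j in zip(scores, nbrs))
        new_h[i] = h[i] + gamma * h_star
    return new_h
```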
# 3.4 Model Training and Decoding

In this subsection, we introduce how we jointly decode the output information network given the identified entity and trigger nodes with their aggregated features $h_i^e$ and $h_i^t$. We design a hierarchical decoding method that incorporates the AMR hierarchy as a condition to decide a more organized order for decoding knowledge elements.

Maximizing Scores with Global Features Similar to OneIE (Lin et al., 2020), we use task-specific feed-forward neural networks to map each node or node pair into a score vector. Specifically, we calculate four types of score vectors $s_i^e$, $s_i^t$, $s_{i,j}^r$ and $s_{i,j}^{a}$ for the entity, trigger, relation, and argument role extraction tasks respectively, where the dimension of each score vector is identical to the number of classes in the corresponding task.

$$
\begin{array}{l}
\boldsymbol{s}_{i}^{e} = FFN^{e}(\boldsymbol{h}_{i}^{e}), \quad \boldsymbol{s}_{i}^{t} = FFN^{t}(\boldsymbol{h}_{i}^{t}), \\
\boldsymbol{s}_{i,j}^{r} = FFN^{r}\left(\left[\boldsymbol{h}_{i}^{e} : \boldsymbol{h}_{j}^{e}\right]\right), \\
\boldsymbol{s}_{i,j}^{a} = FFN^{a}\left(\left[\boldsymbol{h}_{i}^{t} : \boldsymbol{h}_{j}^{e}\right]\right).
\end{array}
$$


Therefore, the total score $c(G)$ is formulated as

$$
c(G) = \sum_{i=1}^{|\mathcal{E}|} \pmb{s}_{i}^{e} + \sum_{i=1}^{|\mathcal{T}|} \pmb{s}_{i}^{t} + \sum_{i=1}^{|\mathcal{E}|} \sum_{j=1}^{|\mathcal{E}|} \pmb{s}_{i,j}^{r} + \sum_{i=1}^{|\mathcal{T}|} \sum_{j=1}^{|\mathcal{E}|} \pmb{s}_{i,j}^{a}.
$$

|
| 182 |
+
|
| 183 |
+
We inherit the approach of using global features in OneIE (Lin et al., 2020) to enforce the model to capture more information on global interactions. The global score $g(G)$ for an information network $G$ is defined as the sum of local score $c(G)$ and the contribution of global features $f_{G}$ .
|
| 184 |
+
|
| 185 |
+
$$
|
| 186 |
+
g (G) = c (G) + \boldsymbol {u} \cdot \boldsymbol {f} _ {G} \tag {3}
$$
where $\boldsymbol{u}$ is a trainable parameter. The global feature vector $\boldsymbol{f}_{G}$ consists of binary values indicating whether the output graph contains certain interdependencies among knowledge elements (e.g., an attacker is likely to be a person who is arrested). We use the same global feature categories as (Lin et al., 2020) during training, and the overall training objective is to maximize the identification log-likelihood $\mathcal{L}^I$ and the local score $c(\hat{G})$ while minimizing the gap in global score between the ground-truth $\hat{G}$ and the predicted information network $G$ .
$$
\max \; \mathcal{L}^I + c(\hat{G}) - \left(g(\hat{G}) - g(G)\right).
$$
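A minimal sketch of the global scoring in Eq. (3), assuming a hypothetical three-feature $f_G$; the actual feature categories follow OneIE and are not reproduced here.

```python
# Sketch of g(G) = c(G) + u . f_G. The indicator values and weights below are
# invented for illustration; the real categories follow (Lin et al., 2020).

def dot(u, f):
    return sum(ui * fi for ui, fi in zip(u, f))

c_G = 7.0                # local score c(G)
f_G = [1, 0, 1]          # binary global-feature indicators for graph G
u = [0.8, -0.3, 0.5]     # trainable weight per global feature

g_G = c_G + dot(u, f_G)  # global score g(G)
print(g_G)  # -> 8.3
```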
Hierarchical Ordered Decoding Given the output score vectors for all nodes and their pairwise edges, the most straightforward approach is to output the information network $G$ with the highest global score $g(G)$ . However, due to the use of global features, searching over all possible information networks incurs exponential complexity, so we adopt a beam-search approach similar to (Lin et al., 2020). Compared with OneIE (Lin et al., 2020), we incorporate the AMR hierarchy to determine a more organized decoding order instead of a simple left-to-right order based on word positions in the original sentence. Specifically, given the nodes and their alignments with the AMR graph, we sort the nodes top-down according to the positions of their aligned AMR nodes: the node whose aligned AMR node is nearest to the AMR root is decoded first. We illustrate the decoding order with an example in Fig. 3. We use $\mathcal{U} = \{v_1, v_2, \dots, v_K\}$ to denote the sorted identified trigger and entity nodes, and similar to (Lin et al., 2020), we add these nodes step by step from $v_1$ to $v_K$ ; in each step, we obtain all possible subgraphs by enumerating the types of the new

Figure 3: An illustration of ordered decoding, where $\tau_{1}$ and $\tau_{2}$ are identified triggers and each $\varepsilon_{i,j}$ is an identified entity. In this example, the beam search decoding order is: $\tau_{1},\tau_{2},\varepsilon_{1,1},\varepsilon_{2,1},\varepsilon_{1,2},\varepsilon_{2,2},\varepsilon_{2,3}$ .
node and its pairwise edges with the existing nodes. We keep only the top $\theta$ subgraphs in each step as candidate graphs to avoid exponential complexity, and finally select the graph with the highest global score $g(G)$ at step $K$ as the output.
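The ordering step can be sketched as follows. The toy AMR structure and node names are hypothetical, and the beam search itself is omitted; this only shows how the decoding order is derived from AMR depth.

```python
# Sort identified trigger/entity nodes by the depth of their aligned AMR node,
# so nodes nearest the AMR root are decoded first (toy AMR, made-up names).

def depth(parents, node):
    d = 0
    while parents[node] is not None:
        node = parents[node]
        d += 1
    return d

# Toy AMR: t1 is the root; e1 and t2 hang off t1; e2 hangs off t2.
amr_parents = {"t1": None, "e1": "t1", "t2": "t1", "e2": "t2"}
# Alignment from identified IE nodes to AMR nodes.
align = {"trigger1": "t1", "entity1": "e1", "trigger2": "t2", "entity2": "e2"}

order = sorted(align, key=lambda v: depth(amr_parents, align[v]))
print(order)  # -> ['trigger1', 'entity1', 'trigger2', 'entity2']
```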
# 4 Experiments
# 4.1 Data
ACE-2005 Automatic Content Extraction (ACE) 2005 dataset<sup>4</sup> provides fine-grained annotations for entity, relation, and event extraction. We use the same preprocessing and data split as in OneIE (Lin et al., 2020) and DyGIE++ (Wadden et al., 2019) to obtain the ACE05-E corpus with 18,927 sentences. Following (Lin et al., 2020), we keep 7 entity types, 6 relation types, 33 event types, and 22 event argument roles.
ERE-EN We also adopt the ERE-EN dataset from the Deep Exploration and Filtering of Text (DEFT) program, which includes more recent news articles and political reviews. We extract 17,108 sentences from datasets LDC2015E29, LDC2015E68, and LDC2015E78. Following (Lin et al., 2020), we keep 7 entity types, 5 relation types, 38 event types, and 20 argument roles.
GENIA To further demonstrate that our proposed model generalizes to other specific domains, we also evaluate it on the biomedical event extraction datasets BioNLP Genia 2011 and 2013 (Kim et al., 2011, 2013). We ignore all trigger-trigger links (nested event structures) and merge repeated event triggers into unified information networks to make them comparable with previous models. Since the test sets are blind and unavailable for merging the annotations, we evaluate model performance on the official development sets instead. Dataset statistics are shown in Table 2.
<table><tr><td>Dataset</td><td>Split</td><td>#Sents</td><td>#Ents</td><td>#Events</td><td>#Rels</td></tr><tr><td rowspan="3">ACE05-E</td><td>Train</td><td>17,172</td><td>29,006</td><td>4,202</td><td>4,664</td></tr><tr><td>Dev</td><td>923</td><td>2,451</td><td>450</td><td>560</td></tr><tr><td>Test</td><td>832</td><td>3,017</td><td>403</td><td>636</td></tr><tr><td rowspan="3">ERE-EN</td><td>Train</td><td>14,736</td><td>39,501</td><td>6,208</td><td>5,054</td></tr><tr><td>Dev</td><td>1,209</td><td>3,369</td><td>525</td><td>408</td></tr><tr><td>Test</td><td>1,163</td><td>3,295</td><td>551</td><td>466</td></tr><tr><td rowspan="2">Genia'11</td><td>Train</td><td>9,583</td><td>12,058</td><td>5,854</td><td>513</td></tr><tr><td>Dev</td><td>3,499</td><td>4,842</td><td>1,933</td><td>117</td></tr><tr><td rowspan="2">Genia'13</td><td>Train</td><td>2,992</td><td>3,794</td><td>1,776</td><td>46</td></tr><tr><td>Dev</td><td>3,341</td><td>4,542</td><td>1,821</td><td>34</td></tr></table>
Table 2: Dataset statistics.
# 4.2 Experimental Setup
We adopt the most recent joint IE models DyGIE++ (Wadden et al., 2019) and OneIE (Lin et al., 2020) as baselines in our experiments, and use the same evaluation metrics as (Zhang et al., 2019b; Wadden et al., 2019; Lin et al., 2020) to report the F1-Score for each IE subtask.
Entity: An extracted entity mention is correct only if both the predicted word span $(a_{i},b_{i})$ and entity type $e_i$ match a reference entity mention.
Event Trigger: An event trigger is correctly identified (Trg-I) if the predicted span $(p_i, q_i)$ matches a reference trigger. It is correctly classified (Trg-C) if the predicted event type $t_i$ also matches the reference trigger.
Event Argument: A predicted event argument $(\tau_{i},\varepsilon_{j},l_{i,j}^{a})$ is correctly identified (Arg-I) if $(\tau_{i},\varepsilon_{j})$ matches a reference event argument. It is correctly classified (Arg-C) if the type $l_{i,j}^{a}$ also matches the reference argument role.
Relation: A predicted relation is correct only if its arguments $\varepsilon_{i}$ and $\varepsilon_{j}$ both match a reference relation mention.
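The trigger criteria above can be sketched as a small matching function; the (start, end, type) tuple layout is our assumption for illustration, not the paper's data format.

```python
# Sketch of the Trg-I / Trg-C criteria: a trigger is identified if its span
# matches a reference trigger, and classified if the event type matches too.
# The (start, end, type) tuple layout is assumed for illustration.

def trigger_match(pred, gold, require_type):
    span_ok = pred[0] == gold[0] and pred[1] == gold[1]          # Trg-I
    return span_ok and (not require_type or pred[2] == gold[2])  # Trg-C

gold = (3, 4, "Movement:Transport")
assert trigger_match((3, 4, "Conflict:Attack"), gold, require_type=False)
assert not trigger_match((3, 4, "Conflict:Attack"), gold, require_type=True)
assert trigger_match((3, 4, "Movement:Transport"), gold, require_type=True)
```

The argument criteria (Arg-I / Arg-C) follow the same pattern, matching the (trigger, entity) pair first and then additionally the role label.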
We train our model with Adam (Kingma and Ba, 2015) on NVIDIA Tesla V100 GPUs for 80 epochs (approximately 10 minutes per training epoch), with a learning rate of 1e-5 for RoBERTa parameters and 5e-3 for other parameters. We set the level of message passing $\gamma$ to 0.001, a relatively low value, because we found that too much message passing causes the nodes to lose their own features. We use a two-layer semantic graph aggregator, with feature dimensions of 2048 for nodes and 256 for edges. All other hyper-parameters are kept strictly identical to (Lin et al., 2020) to ensure a fair comparison. Specifically, the FFNs consist of two layers with a dropout rate of 0.4, the numbers of hidden units are 150 for entity and relation extraction and 600 for event extraction, and the beam size is set to 10.

<table><tr><td>Dataset</td><td colspan="6">ACE05-E</td><td colspan="6">ERE-EN</td></tr><tr><td>Tasks</td><td>Ent</td><td>Trg-I</td><td>Trg-C</td><td>Arg-I</td><td>Arg-C</td><td>Rel</td><td>Ent</td><td>Trg-I</td><td>Trg-C</td><td>Arg-I</td><td>Arg-C</td><td>Rel</td></tr><tr><td>DyGIE++</td><td>89.7</td><td>-</td><td>69.7</td><td>53.0</td><td>48.8</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>OneIE</td><td>90.2</td><td>77.9</td><td>74.7</td><td>57.9</td><td>55.6</td><td>61.8</td><td>86.3</td><td>66.0</td><td>57.1</td><td>43.7</td><td>42.1</td><td>52.8</td></tr><tr><td>AMR-IE w/o Enc</td><td>90.3</td><td>77.9</td><td>74.8</td><td>58.8</td><td>56.6</td><td>61.8</td><td>86.5</td><td>66.2</td><td>57.1</td><td>44.8</td><td>43.0</td><td>53.0</td></tr><tr><td>AMR-IE w/o Dec</td><td>91.9</td><td>78.1</td><td>74.9</td><td>59.0</td><td>57.8</td><td>62.2</td><td>87.8</td><td>67.6</td><td>60.9</td><td>45.6</td><td>44.1</td><td>54.4</td></tr><tr><td>AMR-IE (Ours)</td><td>92.1</td><td>78.1</td><td>75.0</td><td>60.9</td><td>58.6</td><td>62.3</td><td>87.9</td><td>68.0</td><td>61.4</td><td>46.4</td><td>45.0</td><td>55.2</td></tr></table>
# 4.3 Overall Performance
We report the performance of our AMR-IE model and compare it with previous methods in Table 3 and Table 4. In general, our AMR-guided method outperforms the baselines on all IE subtasks, including entity, event, and relation extraction. The improvement is particularly significant on edge classification tasks such as relation extraction and event argument role labeling, because the model can better understand the relations between knowledge elements with the help of external AMR graph structures. To further examine the contribution of each component of our model, we introduce two variants for an ablation study and show the results in Table 3. In AMR-IE w/o Enc, we remove the semantic graph aggregator and keep only the ordered decoding, while in AMR-IE w/o Dec, we keep the semantic graph aggregator but use a flat left-to-right decoding order. From the results, we can see that incorporating the graph encoder alone already substantially improves performance on all IE subtasks, because the identified nodes can capture global interactions through message passing on the AMR topology. Moreover, using an AMR-guided decoding order further boosts performance, especially on event argument extraction.
# 4.4 Influence of Message Passing
Table 3: Overall test F-scores (%) of joint information extraction. AMR-IE w/o Enc and AMR-IE w/o Dec are ablation variants that keep only the ordered decoding and the graph encoding respectively.

<table><tr><td>Dataset</td><td>Model</td><td>Ent</td><td>Trg-C</td><td>Arg-C</td><td>Rel</td></tr><tr><td rowspan="2">Genia'11</td><td>OneIE</td><td>81.8</td><td>56.9</td><td>57.0</td><td>63.1</td></tr><tr><td>AMR-IE</td><td>82.2</td><td>61.5</td><td>59.8</td><td>65.2</td></tr><tr><td rowspan="2">Genia'13</td><td>OneIE</td><td>71.5</td><td>57.3</td><td>51.4</td><td>39.3</td></tr><tr><td>AMR-IE</td><td>78.4</td><td>63.8</td><td>58.0</td><td>42.4</td></tr></table>

Table 4: Dev set F-scores (%) for joint information extraction on the BioNLP Genia 2011 and 2013 datasets.



Figure 4: Performance on the ACE05-E dataset as the level of message passing changes.

We also conduct a parameter sensitivity analysis to study the influence of $\gamma$ defined in Eq. (2), which controls how much information each node aggregates from its neighbors in the AMR graph. We vary this parameter from $10^{-5}$ to $10^{1}$ and show the performance trends of the IE subtasks on the ACE05-E dataset in Fig. 4. For each subtask, the model performance first increases as the level of message passing grows stronger. However, once $\gamma$ rises above $10^{-2}$, the performance of all subtasks clearly decreases. This phenomenon matches our intuition: the identified nodes can collect useful information from their AMR neighbors through message passing, but if they focus too heavily on neighborhood information, they lose some of their own inherent semantic features, which degrades performance. In addition, compared with entity and trigger extraction, the performance of relation and argument extraction varies more drastically with $\gamma$ . This is because edge type prediction requires high-quality embeddings for both involved nodes, making the edge type prediction tasks more sensitive to message passing.
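Equation (2) is not reproduced in this section, but the role of $\gamma$ can be sketched under the assumption of a residual-style aggregation; this is our simplification for illustration, not the paper's exact formula.

```python
# Toy gamma-scaled neighbor aggregation: with a small gamma the node keeps
# most of its own features, while a large gamma drowns them in neighbor
# information. This residual form is an assumed simplification of Eq. (2).

def aggregate(h, neighbors, gamma):
    if not neighbors:
        return list(h)
    mean = [sum(n[k] for n in neighbors) / len(neighbors) for k in range(len(h))]
    return [hi + gamma * mi for hi, mi in zip(h, mean)]

h = [1.0, 2.0]
low = aggregate(h, [[0.0, 4.0], [2.0, 0.0]], gamma=0.001)  # stays close to h
high = aggregate(h, [[0.0, 4.0], [2.0, 0.0]], gamma=10.0)  # neighbor-dominated
```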
<table><tr><td>Sentence</td><td>AMR Parsing</td><td>OneIE outputs</td><td>AMR-IE outputs</td></tr><tr><td>If the resolution is not passed, Washington would likely want to use the airspace for strikes against Iraq and for airlifting troops to northern Iraq.</td><td>airlift-01<br>"Washington" north<br>troop<br>"Iraq"</td><td>Movement:Transport<br>"airlifting"<br>"Washington" "Iraq"<br>"troop" Place<br>Artifact</td><td>Movement:Transport<br>"airlifting"<br>"Washington" "Iraq"<br>"troop" Place<br>Artifact</td></tr><tr><td>A Pakistani court in central Punjab province has sentenced a Christian man to life imprisonment for a blasphemy conviction, police said Sunday.</td><td>cause-01<br>sentence-01<br>convict-01<br>province<br>man<br>blasphemy</td><td>Justice:Sentence<br>"sentenced" "convict"<br>Adjudicator Defendant<br>"province" "court" "man"</td><td>Justice:Sentence<br>"sentenced" "conviction"<br>Adjudicator Defendant<br>"province" "court" "man"</td></tr><tr><td>Russian President Vladimir Putin's summit with the leaders of Germany and France may have been a failure that proves there can be no long-term "peace camp" alliance following the end of war in Iraq.</td><td>fail-01<br>summit<br>"Vladimir Putin" "Germany" "France"</td><td>Contact:Meet<br>"summit"<br>Entity<br>"Vladimir Putin" "leaders"</td><td>Contact:Meet<br>"summit"<br>Entity<br>"Vladimir Putin" "leaders"</td></tr><tr><td>Major US insurance group AIG is in the final stage of talks to take over General Electric's Japanese life insurance arm in a deal to create Japan's sixth largest life insurer, reports said Wednesday.</td><td>create-01<br>"AIG" person<br>ARG1-of<br>"Japan" insure-01</td><td>Business:Start-Org<br>"create"<br>Agent<br>"AIG" "insurer"<br>"Japan"</td><td>Business:Start-Org<br>"create"<br>Agent<br>"Japan" "insurer"</td></tr></table>
Table 5: Examples from the ACE05-E test set illustrating how AMR parsing can improve the performance of joint IE. Note that due to space limitations, we only show the subset of each AMR graph that is most relevant for generating the correct IE outputs.
# 4.5 Qualitative Analysis
To further understand how our AMR-guided encoding and AMR-conditioned decoding improve performance, we select typical examples from the output of our AMR-IE model for illustration in Table 5.
# 5 Related Work
Some recent efforts have incorporated dependency parse trees into neural networks for event extraction (Li et al., 2019) and relation extraction (Miwa and Bansal, 2016; Pouran Ben Veyseh et al., 2020). For semantic role labeling (SRL), Stanovsky and Dagan (2016) exploit the similarity between SRL and open-domain IE by creating a mapping between the two tasks. Huang et al. (2016, 2018) employ AMR as a more concise input format for their IE models, but they decompose each AMR into triples to capture the local contextual information between nodes and edges, so node information is not propagated over a global graph topology. Rao et al. (2017) propose a subgraph-matching based method to extract biomedical events from AMR graphs, while Li et al. (2020) use an additional GCN-based encoder to obtain better word representations.
Besides, graph neural networks are also widely
used for event extraction (Liu et al., 2018; Veyseh et al., 2020; Balali et al., 2020; Zhang et al., 2021) and relation and entity extraction (Zhang et al., 2018; Fu et al., 2019; Guo et al., 2019; Sun et al., 2020). Graph neural networks also prove effective for encoding other types of intrinsic structures of a sentence, such as knowledge graphs (Zhang et al., 2019a; Huang et al., 2020), document-level relations (Sahu et al., 2019; Lockard et al., 2020; Zeng et al., 2020), and self-constructed graphs (Kim and Lee, 2012; Zhu et al., 2019; Qian et al., 2019; Sahu et al., 2020). However, all of these approaches focus on single IE tasks and cannot scale to extracting a joint information network with entities, relations, and events.
There are some recent efforts that focus on building joint neural models for performing multiple IE tasks simultaneously, such as joint entity and relation extraction (Li and Ji, 2014; Katiyar and Cardie, 2017; Zheng et al., 2017; Bekoulis et al., 2018; Sun et al., 2019; Luan et al., 2019) and joint event and entity extraction (Yang and Mitchell, 2016). DyGIE++ (Wadden et al., 2019) designs a joint model to extract entities, events, and relations based on span graph propagation, while OneIE (Lin et al., 2020) further exploits global features to help the model capture more global interactions. Compared with the flat
encoder in OneIE, our proposed framework leverages a semantic graph aggregator to incorporate information from fine-grained AMR semantics and enforce global interactions in the encoding phase. In addition, instead of a simple left-to-right sequential decoder, we use the AMR hierarchy to decide the decoding order of knowledge elements. Both the AMR-guided graph encoder and decoder prove highly effective compared to their flat counterparts.
# 6 Conclusions and Future Work
AMR parsing and IE share the goal of constructing semantic graphs from unstructured text. IE focuses more on a target ontology, and thus its output can be considered a subset of the AMR graph. In this paper, we present two intuitive and effective ways to leverage guidance from AMR parsing to improve IE, during both the encoding and decoding phases. In the future, we plan to integrate the AMR graph with an entity coreference graph so that our IE framework can be extended to the document level.
# Acknowledgement
This research is based upon work supported in part by U.S. NSF No. 1741634, U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, Air Force No. FA8650-17-C-7715, LORELEI Program No. HR0011-15-C-0115, the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract No. FA8650-17-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
# References
Ali Balali, Masoud Asadpour, Ricardo Campos, and Adam Jatowt. 2020. Joint event extraction along shortest dependency paths using graph convolutional networks. Knowl. Based Syst., 210:106492.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffith, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation
for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria. Association for Computational Linguistics.
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2830-2836, Brussels, Belgium. Association for Computational Linguistics.
Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.
Ramón Fernandez Astudillo, Miguel Ballesteros, Tahira Naseem, Austin Blodgett, and Radu Florian. 2020. Transition-based parsing with stack-transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1001-1007, Online. Association for Computational Linguistics.
Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409-1418, Florence, Italy. Association for Computational Linguistics.
Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 241-251, Florence, Italy. Association for Computational Linguistics.
Kung-Hsiang Huang, Mu Yang, and Nanyun Peng. 2020. Biomedical event extraction with hierarchical knowledge graphs. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 1277-1285, Online. Association for Computational Linguistics.
Lifu Huang, Taylor Cassidy, Xiaocheng Feng, Heng Ji, Clare R. Voss, Jiawei Han, and Avirup Sil. 2016. Liberal event extraction and event schema induction. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 258-268, Berlin, Germany. Association for Computational Linguistics.
Lifu Huang, Heng Ji, Kyunghyun Cho, Ido Dagan, Sebastian Riedel, and Clare Voss. 2018. Zero-shot transfer learning for event extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2160-2170, Melbourne, Australia. Association for Computational Linguistics.
Arzoo Katiyar and Claire Cardie. 2017. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 917-928, Vancouver, Canada. Association for Computational Linguistics.
Jin-Dong Kim, Yue Wang, Toshihisa Takagi, and Akinori Yonezawa. 2011. Overview of Genia event task in BioNLP shared task 2011. In Proceedings of BioNLP Shared Task 2011 Workshop, pages 7-15, Portland, Oregon, USA. Association for Computational Linguistics.
Jin-Dong Kim, Yue Wang, and Yamamoto Yasunori. 2013. The Genia event extraction shared task, 2013 edition - overview. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 8-15, Sofia, Bulgaria. Association for Computational Linguistics.
Seokhwan Kim and Gary Geunbae Lee. 2012. A graph-based cross-lingual projection approach for weakly supervised relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 48-53, Jeju Island, Korea. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Diya Li, Lifu Huang, Heng Ji, and Jiawei Han. 2019. Biomedical event extraction based on knowledge-driven tree-LSTM. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1421-1430, Minneapolis, Minnesota. Association for Computational Linguistics.
Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji, and Shih-Fu Chang. 2020. Cross-media structured common space for multimedia event extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2557-2568. Association for Computational Linguistics.
Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 402-412, Baltimore, Maryland. Association for Computational Linguistics.
Qi Li, Heng Ji, Yu Hong, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1846-1851, Doha, Qatar. Association for Computational Linguistics.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, Online. Association for Computational Linguistics.
Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018. Jointly multiple events extraction via attention-based graph information aggregation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1247-1256, Brussels, Belgium. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. In ArXiv: abs/1907.11692.
Colin Lockard, Prashant Shiralkar, Xin Luna Dong, and Hannaneh Hajishirzi. 2020. ZeroShotCeres: Zero-shot relation extraction from semi-structured webpages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8105–8117, Online. Association for Computational Linguistics.
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics.
Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin, Germany. Association for Computational Linguistics.
Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou, and Thien Huu Nguyen. 2020. Exploiting the syntax-model consistency for neural relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8021-8032, Online. Association for Computational Linguistics.
Yujie Qian, Enrico Santus, Zhijing Jin, Jiang Guo, and Regina Barzilay. 2019. GraphIE: A graph-based framework for information extraction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 751-761, Minneapolis, Minnesota. Association for Computational Linguistics.
Sudha Rao, Daniel Marcu, Kevin Knight, and Hal Daumé III. 2017. Biomedical event extraction using Abstract Meaning Representation. In *BioNLP* 2017, pages 126-135, Vancouver, Canada., Association for Computational Linguistics.
Sunil Kumar Sahu, Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Inter-sentence relation extraction with document-level graph convolutional neural network. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4309-4316, Florence, Italy. Association for Computational Linguistics.
Sunil Kumar Sahu, Derek Thomas, Billy Chiu, Neha Sengupta, and Mohammady Mahdy. 2020. Relation extraction with self-determined graph convolutional network. In CIKM '20: The 29th ACM International Conference on Information and Knowledge Management, Virtual Event, Ireland, October 19-23, 2020, pages 2205-2208.
Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2300-2305, Austin, Texas. Association for Computational Linguistics.
Changzhi Sun, Yeyun Gong, Yuanbin Wu, Ming Gong, Daxin Jiang, Man Lan, Shiliang Sun, and Nan Duan. 2019. Joint type inference on entities and relations via graph convolutional networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1361-1370, Florence, Italy. Association for Computational Linguistics.
Kai Sun, Richong Zhang, Yongyi Mao, Samuel Mensah, and Xudong Liu. 2020. Relation extraction with convolutional network over learnable syntax-transport graph. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI2020, New York, NY, USA, February 7-12, 2020, pages 8928-8935.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.
Amir Pouran Ben Veyseh, Tuan Ngo Nguyen, and Thien Huu Nguyen. 2020. Graph transformer networks with syntactic and semantic structures for event argument extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 3651-3661. Association for Computational Linguistics.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the
9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics.
Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289-299, San Diego, California. Association for Computational Linguistics.
Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li. 2020. Double graph based reasoning for document-level relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1630-1640, Online. Association for Computational Linguistics.
Junchi Zhang, Qi He, and Yue Zhang. 2021. Syntax grounded graph convolutional network for joint entity and event extraction. Neurocomputing, 422:118-128.
Ningyu Zhang, Shumin Deng, Zhanlin Sun, Guanying Wang, Xi Chen, Wei Zhang, and Huajun Chen. 2019a. Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3016-3025, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 369 |
+
Tongtao Zhang, Heng Ji, and Avirup Sil. 2019b. Joint entity and event extraction with generative adversarial imitation learning. Data Intelligence, 1(2):99-120.
|
| 370 |
+
Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205-2215, Brussels, Belgium. Association for Computational Linguistics.
|
| 371 |
+
Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1227-1236, Vancouver, Canada. Association for Computational Linguistics.
|
| 372 |
+
Hao Zhu, Yankai Lin, Zhiyuan Liu, Jie Fu, Tat-Seng Chua, and Maosong Sun. 2019. Graph neural networks with generated parameters for relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1331-1339, Florence, Italy. Association for Computational Linguistics.
|
abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:828bf7311ccbb37005bb10f13af56dcdc480276a0a5bd39c09312cd4257353d5
+size 514077
abstractmeaningrepresentationguidedgraphencodinganddecodingforjointinformationextraction/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65fe47c1779e66df527ed01c582f89c9550ef6f0b356351fb0b21a26c74de8e8
+size 435246
acomparativestudyonschemaguideddialoguestatetracking/8ed55747-ed10-49bb-8869-1cebe4ecebc6_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09fa008fdbfa253c1ff4f6f4fb263297452f9993da08eb965f1e04a2a97b239b
+size 120283
acomparativestudyonschemaguideddialoguestatetracking/8ed55747-ed10-49bb-8869-1cebe4ecebc6_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8cf8d79403f55b6035aeb266c380a8d51c779755b797b1f9618e45ee0d82fb03
+size 139984
acomparativestudyonschemaguideddialoguestatetracking/8ed55747-ed10-49bb-8869-1cebe4ecebc6_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51395a51f6444cac7848ac09544b453ef919afb0bac74e81fd89d9baa245981c
+size 1103758
acomparativestudyonschemaguideddialoguestatetracking/full.md
ADDED
@@ -0,0 +1,449 @@
# A Comparative Study on Schema-Guided Dialogue State Tracking

Jie Cao*
School of Computing, University of Utah
jcao@cs.utah.edu

Yi Zhang
AWS AI, Amazon
yizhngn@amazon.com

# Abstract

Frame-based state representation is widely used in modern task-oriented dialog systems to model user intentions and slot values. However, a fixed design of domain ontology makes it difficult to extend to new services and APIs. Recent work proposed using natural language descriptions to define the domain ontology instead of tag names for each intent or slot, thus offering a dynamic set of schemata. In this paper, we conduct in-depth comparative studies to understand the use of natural language descriptions for schemata in dialog state tracking. Our discussion mainly covers three aspects: encoder architectures, the impact of supplementary training, and effective schema description styles. We introduce a set of newly designed benchmarking descriptions and reveal model robustness under both homogeneous and heterogeneous description styles in training and evaluation.

# 1 Introduction

From the early frame-driven dialog system GUS (Bobrow et al., 1977) to today's virtual assistants (Alexa, Siri, Google Assistant, etc.), frame-based dialog state tracking has long been studied to meet various challenges. In particular, how to support an ever-increasing number of services and APIs spanning multiple domains has been a focal point in recent years, evidenced by multi-domain dialog modeling (Budzianowski et al., 2018; Byrne et al., 2019; Shah et al., 2018a) and transferable dialog state tracking for unseen intents/slots (Mrkšić et al., 2017; Wu et al., 2019; Hosseini-Asl et al., 2020).

Recently, Rastogi et al. (2019) proposed a new paradigm called schema-guided dialog for transferable dialog state tracking, which uses natural language descriptions to define a dynamic set of service schemata. As shown in Figure 1, the primary motivation is that these descriptions can offer effective knowledge sharing across different services, e.g., connecting semantically similar concepts across heterogeneous APIs, thus allowing a unified model to handle unseen services and APIs. With the publicly available schema-guided dialog dataset (SG-DST henceforward) as a testbed, they organized a state tracking shared task composed of four subtasks: intent classification (Intent), requested slot identification (Req), categorical slot labeling (Cat), and non-categorical slot labeling (NonCat) (Rastogi et al., 2020). Many participants achieved promising performance by exploiting the schema descriptions for dialog modeling, especially on unseen services.




Figure 1: An example dialog from the Restaurants_1 service ("A leading provider for restaurant search and reservations"), along with its service/intent/slot descriptions and dialog state representation. The schema defines intents (FindRestaurants: "Find a restaurant of a particular cuisine in a city", with required slots cuisine and city and optional slots has_live_music and serves_alcohol; ReserveRestaurant: "Reserve a table at a restaurant", with required slots restaurant_name, city, and time) and slots (e.g., cuisine: "Cuisine of food served in the restaurant", with possible values "Mexican", "Chinese", "Indian", "American", "Italian"). The dialogue panel shows three exchanges (User: "I am looking for Chinese food in SFO." / System: "I found 10 restaurants in SFO."; User: "Any popular one? By the way, I also want to buy a drink." / System: "There is a nice restaurant called Butterfly Restaurant."; User: "Do they have live music? Where are they located?" / System: "They do not have live music. They are at 33 The Embarcadero."), and the dialog state tracking panel gives the state after each user turn:

<table><tr><td colspan="2">User Turn 1</td><td colspan="2">User Turn 2</td><td colspan="2">User Turn 3</td></tr><tr><td>intent</td><td>FindRestaurants</td><td>intent</td><td>FindRestaurants</td><td>intent</td><td>FindRestaurants</td></tr><tr><td>reqSlots</td><td>N/A</td><td>reqSlots</td><td>N/A</td><td>reqSlots</td><td>has_live_music, street_address</td></tr><tr><td>city</td><td>SFO</td><td>city</td><td>SFO</td><td>city</td><td>SFO</td></tr><tr><td>cuisine</td><td>Chinese</td><td>cuisine</td><td>Chinese</td><td>cuisine</td><td>Chinese</td></tr><tr><td></td><td></td><td>serves_alcohol</td><td>TRUE</td><td>serves_alcohol</td><td>TRUE</td></tr></table>

Despite the novel approach and promising results, the current schema-guided dialog state tracking task only evaluates on a single dataset with limited variation in schema definition. It is unknown how this paradigm generalizes to other datasets and other styles of descriptions. In this paper, we focus our investigation on three aspects of schema-guided dialog state tracking: (1) schema encoding model architectures, (2) supplementary training on intermediate tasks, and (3) various styles of schema description. To make the discussion of schema-guided dialog state tracking more general, we perform extensive empirical studies on both the SG-DST and MULTIWOZ 2.2 datasets. In summary, our contributions include:
- A comparative study of schema encoding architectures, suggesting a partial-attention encoder as a good balance between inference speed and accuracy.
- An experimental study of supplementary training for schema-guided dialog state tracking, via intermediate tasks including natural language inference and question answering.
- An in-depth analysis of different schema description styles on a new suite of benchmarking datasets with variations in schema description for both SG-DST and MULTIWOZ 2.2.

# 2 Schema-Guided Dialog State Tracking

A classic dialog state tracker predicts a dialog state frame at each user turn, given the dialog history and a predefined domain ontology. As shown in Figure 1, the key difference between schema-guided dialog state tracking and the classic paradigm is the newly added natural language descriptions. In this section, we first introduce the four subtasks and the schema components of schema-guided dialog state tracking, then outline the research questions addressed in this paper.

Subtasks. As shown in Figure 1, the dialog state for each service consists of 3 parts: active intent, requested slots, and user goals (slot values). Without loss of generality, for both the SG-DST and MULTIWOZ 2.2 datasets, we divide slots into categorical and non-categorical slots, following a previous study on dual strategies (Zhang et al., 2019). Thus, to fill the dialog state frame at each user turn, we solve four subtasks: intent classification (Intent), requested slot identification (Req), categorical slot labeling (Cat), and non-categorical slot labeling (NonCat). All subtasks require matching the current dialog history against candidate schema descriptions multiple times.

Schema Components. Figure 1 shows the three main schema components: service, intent, and slot. For each intent, the schema also lists its optional and required slots. For each slot, a flag indicates whether it is categorical. Categorical means there is a set of predefined candidate values (Boolean, numeric, or text); for instance, has_live_music in Figure 1 is a categorical slot with Boolean values. Non-categorical, on the other hand, means the slot values are filled from string spans in the dialog history.

New Questions. The added schema descriptions pose the following three new questions. We discuss each of them in the following sections.

Q1. How should dialogue and schema be encoded? §5
Q2. How do different supplementary trainings impact each subtask? §6
Q3. How do different description styles impact the state tracking performance? §7

# 3 Related Work

Our work is related to three lines of research: multi-sentence encoding, multi-domain dialog state tracking, and transferable dialog state tracking. However, our focus is a comparative study of encoder architectures, supplementary training, and schema description style variation, so we adopt existing strategies from multi-domain dialog state tracking.

Multi-Sentence Encoder Strategies. Similar to a recent study on encoders for response selection and article search tasks (Humeau et al., 2019), we conduct our comparative study on the two typical architectures, the Cross-Encoder (Bordes et al., 2014; Lowe et al., 2015) and the Dual-Encoder (Wu et al., 2017; Yang et al., 2018). However, those works only consider sentence-level matching tasks. All subtasks in our case require sentence-level matching between the dialog context and each schema, while the non-categorical slot filling task also needs a sequence of token-level representations for span detection. Hence, we study multi-sentence encoding for both sentence-level and token-level tasks. Moreover, to share schema encoding across subtasks and turns, we also introduce a simple Fusion-Encoder that caches schema token embeddings (§5.1), improving efficiency without sacrificing much accuracy.

Multi-domain Dialog State Tracking. Recent research on multi-domain dialog systems has been largely driven by the release of large-scale multi-domain dialog datasets, such as MultiWOZ (Budzianowski et al., 2018) and M2M (Shah et al., 2018a), accompanied by studies on key issues such as in-/cross-domain carry-over (Kim et al., 2019). In this paper, our goal is to understand the design choices for schema descriptions in dialog state tracking, so we simply follow the in-domain carry-over strategies used in TRADE (Wu et al., 2019). Explicit cross-domain carryover (Naik et al., 2018) is difficult to generalize to new services and unknown carryover links; instead, we use a longer dialog history to inform the model about the dialog in the previous service. This simplified strategy does impact our model's performance on seen domains negatively in comparison to a well-designed dialog state tracking model, but it reduces the complexity of matching extra slot descriptions for cross-service carryover. We leave further discussion for future work.

Transferable Dialog State Tracking. Another line of research focuses on building transferable dialog systems that scale easily to newly added intents and slots. This covers diverse topics, including resolving lexical/morphological variability with symbolic delexicalization-based methods (Henderson et al., 2014; Williams et al., 2016), neural belief tracking (Mrkšić et al., 2017), generative dialog state tracking (Peng et al., 2020; Hosseini-Asl et al., 2020), and modeling DST as a question answering task (Zhang et al., 2019; Lee et al., 2019; Gao et al., 2020, 2019). Our work is closest to the last class; however, we further investigate whether DST can benefit from NLP tasks other than question answering. Furthermore, without rich descriptions for the service/intent/slot in the schema, previous work mainly relies on simple formats for question answering scenarios, such as domain-slot-type compound names (e.g., "restaurant-food") or simple question templates like "What is the value for slot i?". We incorporate different description styles into a comparative discussion in §7.1.

# 4 Datasets

To the best of our knowledge, at the time of our study, SG-DST and MULTIWOZ 2.2 were the only two publicly available corpora for schema-guided dialog study, so we choose both for our study. In this section, we first introduce these two representative datasets, then discuss their generalizability in terms of domain diversity, function overlap, and data collection method.

Schema-Guided Dialog Dataset. The SG-DST dataset is specifically designed as a test-bed for schema-guided dialog, containing well-designed heterogeneous APIs with overlapping functionalities between services (Rastogi et al., 2019). In DSTC8 (Rastogi et al., 2020), SG-DST was introduced as the standard benchmark dataset for schema-guided dialog research. SG-DST covers 20 domains, 88 intents, and 365 slots. However, previous research is mainly conducted on this single dataset with the single provided description style. In this paper, we extend the dataset with other benchmarking description styles (§7) and perform both homogeneous and heterogeneous evaluation on it.

Remixed MultiWOZ 2.2 Dataset. To eliminate potential bias from the single SG-DST dataset, we also add MULTIWOZ 2.2 (Zang et al., 2020) to our study. Among the various extended versions of the MultiWOZ dataset (2.0-2.3; Budzianowski et al., 2018; Eric et al., 2020; Zang et al., 2020; Han et al., 2020), MULTIWOZ 2.2, besides rectifying annotation errors, also introduces schema-guided annotations, covering 8 domains, 19 intents, and 36 slots. To evaluate performance on seen/unseen services with MultiWOZ, we remix the MULTIWOZ 2.2 dataset: dialogs related to restaurant, attraction, and train are treated as seen services during training, and slots from other domains/services are eliminated from the training split. For dev, we add two new domains, hotel and taxi, as unseen services. For test, we add all remaining domains as unseen, including those with minimal overlap with the seen services, such as hospital, police, and bus. The statistics of the data splits are shown in Appendix A.2. Note that this data split differs from previous work on zero-shot MultiWOZ DST, which takes a leave-one-out approach (Wu et al., 2019). By remixing the data as described above, we can evaluate zero-shot performance on MultiWOZ in a way largely compatible with SG-DST.

Discussion. First, the two datasets cover diverse domains. MULTIWOZ 2.2 covers various dialogue scenarios, ranging from requesting basic information about attractions to booking a hotel room or travelling between cities, while SG-DST covers more domains, such as 'Payments', 'Calendar', and 'DoctorServices'.

<table><tr><td>Datasets</td><td>Splits</td><td>Dialog</td><td>Domains</td><td>Services</td><td>Zero-shot Domains</td><td>Zero-shot Services</td><td>Function Overlap</td><td>Collecting Method</td></tr><tr><td rowspan="3">SG-DST</td><td>Train</td><td>16142</td><td>16</td><td>26</td><td>-</td><td>-</td><td rowspan="3">Across-domain Within-domain</td><td rowspan="3">M2M</td></tr><tr><td>Dev</td><td>2482</td><td>16</td><td>17</td><td>1</td><td>8</td></tr><tr><td>Test</td><td>4201</td><td>18</td><td>21</td><td>3</td><td>11</td></tr><tr><td rowspan="3">MULTIWOZ 2.2</td><td>Train</td><td>9617</td><td>3</td><td>3</td><td>-</td><td>-</td><td rowspan="3">Across-domain</td><td rowspan="3">H2H</td></tr><tr><td>Dev</td><td>2455</td><td>5</td><td>5</td><td>2</td><td>2</td></tr><tr><td>Test</td><td>2969</td><td>8</td><td>8</td><td>5</td><td>5</td></tr></table>

Table 1: Summary of characteristics of the SG-DST and MULTIWOZ 2.2 datasets: domain diversity, function overlap, and data collection method.

Second, they include different levels of overlapping functionality. SG-DST allows frequent function overlap between multiple services, within the same domain (e.g., BookOneWayTicket vs. BookRoundTripTicket) or across different domains (BusTicket vs. TrainTicket). The overlap in MULTIWOZ 2.2, however, only exists across different domains, e.g., the 'destination' and 'leaveat' slots for the Taxi and Bus services, or 'pricerange' and 'bookday' for the Restaurant and Hotel services.

Third, they are collected with the two approaches commonly used for dialog collection. SG-DST was first collected by machine-to-machine self-play (M2M; Shah et al., 2018b) with dialog flows as seeds, then paraphrased by crowd-workers, whereas MULTIWOZ 2.2 consists of human-to-human dialogs (H2H; Kelley, 1984) collected with the Wizard-of-Oz approach.

We summarize the above discussion in Table 1. We believe that results derived from these two representative datasets can guide future research in schema-guided dialog.

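The remix described above can be sketched as a simple domain-to-split assignment. This is our own illustration of the described split, not the released preprocessing script; the domain names are taken from the text and Table 1.

```python
# Seen domains stay in training; hotel/taxi first appear at dev time,
# and all remaining MULTIWOZ 2.2 domains appear (unseen) at test time.
SEEN = {"restaurant", "attraction", "train"}
DEV_UNSEEN = {"hotel", "taxi"}
TEST_UNSEEN = DEV_UNSEEN | {"hospital", "police", "bus"}

def domains_for_split(split):
    """Return the sorted list of domains visible in a given split."""
    if split == "train":
        return sorted(SEEN)
    if split == "dev":
        return sorted(SEEN | DEV_UNSEEN)
    return sorted(SEEN | TEST_UNSEEN)  # test: all 8 domains

print(domains_for_split("train"))  # -> ['attraction', 'restaurant', 'train']
```

Under this split, every dev/test prediction on a hotel, taxi, hospital, police, or bus dialog is a zero-shot prediction.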
# 5 Dialog & Schema Representation and Inference (Q1)

In this section, we focus on the model architectures for matching the dialog history with schema descriptions using pretrained BERT (Devlin et al., 2019)$^3$. To support the four subtasks, we first extend the Dual-Encoder and Cross-Encoder to support both sentence-level matching and token-level prediction. We then propose an additional Fusion-Encoder strategy for faster inference without sacrificing much accuracy. We summarize the different architectures in Figure 2, then present the classification head and results for each subtask.



Figure 2: Dual-Encoder, Cross-Encoder, and Fusion-Encoder; shaded blocks are cached during training.

# 5.1 Encoder Architectures

Dual-Encoder. It consists of two separate BERTs that encode the dialog history and the schema description respectively, as in Figure 2 (a). We follow the setting of the official baseline provided by DSTC8 Track 4 (Rastogi et al., 2020). We first use a fixed BERT to encode the schema description once and cache the encoded schema $\mathbf{CLS}_S$. For the sentence-level representation, we concatenate the dialog history representation $\mathbf{CLS}_D$ and the candidate schema representation $\mathbf{CLS}_S$ into a single pair representation, denoted $\mathbf{CLS}^{DE}$. For the token-level representation, we concatenate the candidate schema $\mathbf{CLS}_S$ with each token embedding in the dialog history, denoted $\mathbf{TOK}^{DE}$. Because the candidate schema embeddings are encoded independently of the dialog context, they can be pre-computed once and cached for fast inference.

Cross-Encoder. Another popular architecture, shown in Figure 2 (b), concatenates the dialog and schema into a single input and encodes them jointly with a single self-attentive encoder spanning the two segments. When BERT encodes the concatenated sentence pair, it performs full (cross) self-attention in every transformer layer, thus offering rich interaction between the dialog and schema. BERT naturally produces a summarized representation via the [CLS] embedding $\mathbf{CLS}^{CE}$, along with schema-attended dialog token embeddings $\mathbf{TOK}^{CE}$. Since the dialog and schema encodings always depend on each other, the model must recompute both for every dialog-schema pair, making inference much slower.

Fusion-Encoder. As in Figure 2 (c), similar to the Dual-Encoder, the Fusion-Encoder encodes the schema independently with a fixed BERT while finetuning another BERT for dialog encoding. However, instead of caching a single [CLS] vector per schema, it caches all token representations of the schema, including the [CLS] token. Moreover, to integrate the dialog token representations with the schema token representations, an extra stack of transformer layers is added on top to allow token-level fusion via self-attention, similar to the Cross-Encoder. These top transformer layers produce embeddings $\mathbf{TOK}^{FE}$ for each token, including a schema-attended $\mathbf{CLS}^{FE}$ for the dialog's input [CLS]. With cached schema token-level representations, it can efficiently produce schema-aware sentence- and token-level representations for each dialog-schema pair.

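To make the efficiency trade-off concrete, here is a toy sketch (our illustration, not the paper's implementation) that counts expensive encoder calls: the Dual-/Fusion-Encoder style runs the heavy encoder once per schema (cached) and once per dialog, while the Cross-Encoder style re-runs it for every (dialog, schema) pair.

```python
heavy_calls = 0

def heavy_encode(text):
    """Stand-in for a full BERT forward pass; we only count invocations."""
    global heavy_calls
    heavy_calls += 1
    return [float(len(tok)) for tok in text.split()]  # fake token embeddings

def cheap_fuse(dialog_emb, schema_emb):
    """Stand-in for the light fusion layers: a scalar matching score."""
    return sum(dialog_emb) * sum(schema_emb)

dialogs = ["i want chinese food in sfo",
           "book a table for two",
           "do they have live music"]
schemas = ["find restaurants", "reserve restaurant",
           "cuisine of food served", "city of the restaurant"]

# Fusion-Encoder style: encode each schema once and cache it.
schema_cache = {s: heavy_encode(s) for s in schemas}
for d in dialogs:
    d_emb = heavy_encode(d)                      # one heavy pass per dialog
    scores = [cheap_fuse(d_emb, schema_cache[s]) for s in schemas]
fusion_cost = heavy_calls                        # |schemas| + |dialogs| = 7

heavy_calls = 0
# Cross-Encoder style: one joint heavy pass per (dialog, schema) pair.
for d in dialogs:
    scores = [sum(heavy_encode(d + " [SEP] " + s)) for s in schemas]
cross_cost = heavy_calls                         # |dialogs| * |schemas| = 12

print(fusion_cost, cross_cost)  # -> 7 12
```

The gap widens with the number of schema elements per service, which is why the Cross-Encoder is the slowest of the three at inference time despite being the most accurate.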
# 5.2 Model Overview

All three encoders above produce both sentence- and token-level representations for a given sentence pair. In this section, we abstract them as two representations, CLS and TOK, and present the universal classification heads that make decisions for each subtask.

Active Intent. To decide the intent for the current dialog turn, we match the current dialog history $D$ with each intent description $I_0\dots I_k$. For each dialog-intent pair $(D, I_k)$, we project the final sentence-level CLS representation to a single number $P_{I_k}^{\text{active}}$ with a linear layer followed by a sigmoid. We predict "NONE" if $P_{I_k}^{\text{active}}$ is below a threshold of 0.5 for all intents, meaning no intent is active; otherwise, we predict the intent with the largest $P_{I_k}^{\text{active}}$. We predict the intent for each turn independently, without considering predictions from previous turns.

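The decision rule of this head, together with the multi-label variant used for requested slots below, can be sketched as follows (our own minimal illustration; the sigmoid scores are assumed to be given):

```python
def predict_active_intent(scores):
    """scores: dict mapping intent name -> sigmoid probability.
    Return 'NONE' if every intent falls below the 0.5 threshold,
    otherwise the argmax intent."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= 0.5 else "NONE"

def predict_requested_slots(scores):
    """Multi-label variant for requested slots: keep all slots above 0.5."""
    return sorted(slot for slot, p in scores.items() if p > 0.5)

print(predict_active_intent({"FindRestaurants": 0.91,
                             "ReserveRestaurant": 0.08}))
# -> FindRestaurants
print(predict_requested_slots({"has_live_music": 0.7,
                               "street_address": 0.9,
                               "price_range": 0.1}))
# -> ['has_live_music', 'street_address']
```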
Requested Slot. As in Figure 1, multiple requested slots can occur in a single turn. We use the same strategy as in active intent prediction to compute a score $P_{req}^{active}$ for each slot, but to support multiple requested slots, we predict all slots with $P_{req}^{active} > 0.5$.

Categorical Slot. Categorical slots have a set of candidate values. Since unseen values cannot be predicted via n-way classification, we instead perform binary classification on each candidate value. Moreover, rather than matching values directly, we first need to check whether the corresponding slot has been activated. We use a typical two-stage procedure to incrementally build the state. Step 1: use CLS to predict the slot status as none, dontcare, or active, where dontcare means the user has no preference for this slot and none means no value update for the slot in the current turn. Step 2: if the status in Step 1 is active, match the dialog history against each candidate value and select the most related value by ranking, training with a cross-entropy loss. This two-stage strategy is efficient for the Dual-Encoder and Fusion-Encoder, where cached schema encodings can be reused and ranked globally in a single batch. However, it does not scale for the Cross-Encoder, especially with the large number of candidate values in the MultiWOZ dataset. Hence, for the Cross-Encoder, we train with a binary cross-entropy loss on each single value and postpone the ranking to inference time.

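The two-stage update for one categorical slot can be sketched as follows (an illustrative decision rule under our own simplifying assumptions, with status probabilities and value-matching scores assumed given):

```python
def track_categorical_slot(status_probs, value_scores, prev_value=None):
    """status_probs: dict over {'none', 'dontcare', 'active'};
    value_scores: dict mapping candidate value -> matching score."""
    status = max(status_probs, key=status_probs.get)   # Step 1: slot status
    if status == "none":        # no update this turn: keep the previous value
        return prev_value
    if status == "dontcare":    # user has no preference for this slot
        return "dontcare"
    # Step 2: slot is active, rank the candidate values
    return max(value_scores, key=value_scores.get)

state = track_categorical_slot(
    {"none": 0.1, "dontcare": 0.1, "active": 0.8},
    {"Mexican": 0.2, "Chinese": 0.7, "Italian": 0.1},
)
print(state)  # -> Chinese
```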
Noncategorical Slot. Slot status prediction for noncategorical slots uses the same two-stage strategy. In addition, we use the token representations TOK of the dialog history to compute two softmax scores, $f_{start}^{i}$ and $f_{end}^{i}$, for each token $i$, representing the scores of predicting token $i$ as the start and end position, respectively. Finally, we select the valid span with the maximum sum of start and end scores.

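The final span-selection step can be sketched as below (our own minimal version: "valid" is taken to mean the end does not precede the start, with an assumed cap on span length; the scores are toy values):

```python
def best_span(start_scores, end_scores, max_len=5):
    """Return (i, j) maximizing start_scores[i] + end_scores[j]
    over valid spans, i.e. i <= j < i + max_len."""
    best, best_score = None, float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            if s + end_scores[j] > best_score:
                best, best_score = (i, j), s + end_scores[j]
    return best

tokens = ["i", "want", "chinese", "food", "in", "sfo"]
start = [0.0, 0.1, 0.9, 0.2, 0.0, 0.3]
end   = [0.0, 0.0, 0.3, 0.8, 0.0, 0.4]
i, j = best_span(start, end)
print(tokens[i:j + 1])  # -> ['chinese', 'food']
```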
# 5.3 Experiments on Encoder Comparison
|
| 178 |
+
|
| 179 |
+
To fairly compare all three models, we follow the same schema input setting as in Table 2. We trained separate models for SG-DST and the remixed MultiWOZ datasets for all the experiments in our pa
|
| 180 |
+
|
| 181 |
+
<table><tr><td>Intent
|
| 182 |
+
Req
|
| 183 |
+
Cat
|
| 184 |
+
NonCat</td><td>service description, intent description
|
| 185 |
+
service description, slot description
|
| 186 |
+
slot description, cat value
|
| 187 |
+
service description, slot description</td></tr></table>
|
| 188 |
+
|
| 189 |
+
Table 2: Schema description input used for different tasks to compare Dual-Encoder, Cross-Encoder, and Fusion-Encoder. In the appendix A.3, we also studies other compositions of description input. We found that service description will not help for Intent, Req and Cat tasks, while the impact on NonCat task also varies from SG-DST and MULTIWOZ 2.2 dataset.
|
| 190 |
+
|
| 191 |
+
<table><tr><td></td><td colspan="5">SG-DST</td><td colspan="3">MULTIWOZ 2.2</td></tr><tr><td rowspan="2">Method/Task</td><td rowspan="2">Acc Intent</td><td rowspan="2">F1 Req</td><td colspan="3">Joint Acc</td><td colspan="3">Joint Acc</td></tr><tr><td>Cat</td><td>NonCat</td><td>All</td><td>Cat</td><td>NonCat</td><td>All</td></tr><tr><td colspan="9">Seen Services</td></tr><tr><td>Dual-Encoder</td><td>94.51</td><td>99.62</td><td>87.92</td><td>47.77</td><td>43.20</td><td>79.20</td><td>79.34</td><td>65.64</td></tr><tr><td>Fusion-Encoder</td><td>94.90</td><td>99.69</td><td>88.94</td><td>48.78</td><td>58.52</td><td>81.37</td><td>80.58</td><td>67.43</td></tr><tr><td>Cross-Encoder</td><td>95.55</td><td>99.59</td><td>93.68</td><td>91.85</td><td>87.58</td><td>85.99</td><td>81.02</td><td>71.93</td></tr><tr><td colspan="9">Unseen Services</td></tr><tr><td>Dual-Encoder</td><td>89.73</td><td>95.20</td><td>42.44</td><td>31.62</td><td>19.51</td><td>56.92</td><td>50.82</td><td>31.83</td></tr><tr><td>Fusion-Encoder</td><td>90.47</td><td>95.95</td><td>48.79</td><td>35.91</td><td>22.85</td><td>57.01</td><td>52.23</td><td>33.64</td></tr><tr><td>Cross-Encoder</td><td>93.84</td><td>98.26</td><td>71.55</td><td>74.13</td><td>54.54</td><td>59.85</td><td>59.62</td><td>38.46</td></tr></table>
Table 3: Test set results on SG-DST and MULTIWOZ 2.2. The Dual-Encoder model is a re-implementation of the official DSTC8 baseline from Rastogi et al. (2019). The other models are trained with the architectures described in our paper.
Because there are very few intents and requested slots in the MULTIWOZ 2.2 dataset, we omit the intent and requested-slot tasks for MULTIWOZ 2.2 in our paper.

Results. As shown in Table 3, Cross-Encoder performs best across all subtasks. Our Fusion-Encoder with partial attention outperforms the Dual-Encoder by a large margin, especially on categorical and non-categorical slot predictions. Additionally, on seen services, we found that Dual-Encoder and Fusion-Encoder can perform as well as Cross-Encoder on the Intent and Req tasks. However, they do not generalize to unseen services as well as Cross-Encoder does.
Inference Speed. To test inference speed, we run all experiments with the maximum affordable batch size to fully exploit 2 V100 GPUs (with 16GB GPU RAM each). During training, we log the inference time of each evaluation on the dev set. Both Dual-Encoder and Fusion-Encoder can run joint inference across the 4 subtasks to obtain an integral dialog state for a dialog turn example. Dual-Encoder achieves the highest inference speed of 603.35 examples per GPU second, because the encoding of dialog and schema is fully separated: a dialog only needs to be encoded once per dialog state example, while the schema representations are precomputed. In contrast, to predict the dialog state for a single turn, Cross-Encoder needs to encode more than 300 sentence pairs in a batch, and thus processes only 4.75 examples per GPU second. Fusion-Encoder encodes the dialog history once, but it must jointly encode the same number of dialog-schema pairs as Cross-Encoder, albeit with only a two-layer transformer encoder. Overall, it achieves 10.54 examples per GPU second, $2.2\mathrm{x}$ faster than Cross-Encoder. With regard to the accuracy in Table 3, Fusion-Encoder performs much better than Dual-Encoder, especially on unseen services.
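The speed gap follows from how many encoder passes each architecture spends per dialog turn. The sketch below is a rough cost model under the setting above (roughly 300 dialog-schema pairs per turn, a 12-layer BERT versus the 2-layer fusion encoder); it is illustrative, not the authors' implementation:

```python
def encoder_passes(arch, n_pairs=300, full_layers=12, fusion_layers=2):
    """Per-turn encoding cost, in units of one full BERT pass.

    Rough cost model; the 300 pairs and the 12/2 layer counts follow
    the setting described in the text above.
    """
    if arch == "dual":
        # dialog encoded once; schema embeddings are precomputed
        return 1.0
    if arch == "cross":
        # one full dialog-schema pass per schema element
        return float(n_pairs)
    if arch == "fusion":
        # one full dialog pass, plus a light 2-layer fusion pass per pair
        return 1.0 + n_pairs * fusion_layers / full_layers
    raise ValueError(f"unknown architecture: {arch}")

for arch in ("dual", "fusion", "cross"):
    print(arch, encoder_passes(arch))
```

This layer-count model overstates the measured 2.2x gap, since real throughput also depends on sequence lengths and batching, but it explains why Fusion-Encoder sits between the two extremes.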
# 6 Supplementary Training (Q2)
Besides the pretrain-finetune framework used in §5, Phang et al. (2018) propose adding a supplementary training phase on an intermediate task after pretraining but before finetuning on the target task, which shows significant improvement on the target tasks. Moreover, a large number of pretrained and finetuned transformer-based models are publicly accessible and well organized in model hubs for sharing, training, and testing<sup>6</sup>. Given the new task of schema-guided dialog state tracking, in this section we study our four subtasks with different intermediate tasks for supplementary training.
# 6.1 Intermediate Tasks
As described in §5.2, all 4 of our subtasks take a pair of dialog and schema description as input and predict from the summarized sentence-pair CLS representation; NonCat additionally requires span-based detection, as in question answering. Hence, they share a similar problem structure with the following sentence-pair encoding tasks.
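The shared prediction pattern for the Intent, Req, and Cat decisions is a single linear head over the sentence-pair CLS vector; a minimal sketch (shapes are illustrative for BERT-base, not the authors' code):

```python
import numpy as np

def cls_classify(cls_vec, W, b):
    """Linear classifier over the sentence-pair [CLS] vector.
    A sketch of the shared decision head; shapes are illustrative."""
    return cls_vec @ W + b

rng = np.random.default_rng(0)
hidden, n_classes = 768, 2             # BERT-base hidden size, binary decision
cls_vec = rng.standard_normal(hidden)  # stands in for BERT's [CLS] output
W = rng.standard_normal((hidden, n_classes))
b = np.zeros(n_classes)
logits = cls_classify(cls_vec, W, b)
print(logits.shape)  # (2,)
```

NonCat adds start/end token classifiers over the token representations on top of this, in the style of span-based question answering.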
Natural Language Inference. Given a hypothesis/premise sentence pair, natural language inference is the task of determining whether the hypothesis is entailed, contradicted, or neutral given the premise. Question Answering. Given a passage/question pair, the task is to extract the span-based answer from the passage.
Hence, when finetuning BERT on our subtaks, instead of directly using the originally pretrained BERT, we use the BERT finetuned on the above
<table><tr><td rowspan="3"></td><td colspan="9">SG-DST</td><td colspan="4">MULTIWOZ 2.2</td></tr><tr><td colspan="2">intent</td><td colspan="2">req</td><td colspan="2">cat</td><td colspan="3">noncat</td><td colspan="2">cat</td><td colspan="2">noncat</td></tr><tr><td>all seen</td><td>unseen</td><td>all seen</td><td>unseen</td><td>all seen</td><td>unseen</td><td>all seen</td><td>unseen</td><td>all seen</td><td>unseen</td><td>all seen</td><td>unseen</td><td>all seen</td></tr><tr><td>ΔSNLI</td><td>+0.51</td><td>+0.02</td><td>+0.68</td><td>-0.19</td><td>+0.38</td><td>-0.38</td><td>-1.63</td><td>-2.87</td><td>-1.23</td><td>-4.7</td><td>-0.1</td><td>-6.25</td><td>+2.05</td></tr><tr><td>ΔSQuAD</td><td>-1.81</td><td>-0.17</td><td>-1.32</td><td>-0.25</td><td>-0.01</td><td>-0.33</td><td>-2.87</td><td>-3.02</td><td>-5.17</td><td>+1.99</td><td>-1.79</td><td>+3.25</td><td>+0.04</td></tr></table>
Table 4: Relative performance improvement of different supplementary training on the SG-DST and MULTIWOZ 2.2 datasets.
two tasks for further finetuning. Given the better performance of Cross-Encoder in §5, we directly use BERT models finetuned in the Cross-Encoder fashion on the SNLI and SQuAD2.0 datasets, from the Huggingface model hub. We add extra speaker tokens [user:] and [system:] into the vocabulary for encoding the multi-turn dialog histories.
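Concretely, the encoder input can be built by flattening the dialog history with these speaker tokens and pairing it with a schema description; a minimal sketch (the speaker-token spellings follow the text above, everything else is illustrative):

```python
def build_input_pair(history, description):
    """Flatten a multi-turn dialog with the extra speaker tokens and pair
    it with a schema description for sentence-pair encoding. A sketch,
    not the authors' code; the tokenizer would add [CLS]/[SEP] around
    the two segments."""
    flat = " ".join(f"[{speaker}:] {utterance}" for speaker, utterance in history)
    return flat, description

history = [
    ("user", "I want to book a table."),
    ("system", "For how many people?"),
    ("user", "Two, please."),
]
dialog, schema = build_input_pair(history, "number of seats to reserve")
print(dialog)
# [user:] I want to book a table. [system:] For how many people? [user:] Two, please.
```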
# 6.2 Results on Supplementary Training
Table 4 shows the performance gains when finetuning the 4 subtasks from models with the above SNLI and SQuAD2.0 supplementary training.
We mainly find that SNLI helps on the Intent task and SQuAD2 mainly helps on the NonCat task, while neither helps much on the Cat task. Recently, Namazifar et al. (2020) also found that modeling dialog understanding as question answering benefits from supplementary training on the SQuAD2 dataset, especially in few-shot scenarios, which matches our finding on the NonCat task. The difference on the Req task is minor: it is a relatively easy task, so adding supplementary training does not help much. Moreover, for the Cat task, the second sequence of the input pair is the slot description with a categorical slot value, so the semantic overlap between the full dialog history and the slot/value is much smaller than in SNLI. On the other hand, the CLS token in SQuAD-finetuned BERT is trained for null predictions via the start and end token classifiers, which differs from the single CLS classifier in the Cat task.
# 7 Impact of Description Styles (Q3)
Previous work on schema-guided dialog (Rastogi et al., 2020) is based only on the descriptions provided in the SG-DST dataset. Recent work modeling dialog state tracking as reading comprehension (Gao et al., 2019) formulates the descriptions only as a simple question format built from existing intent/slot names; it is unknown how this performs compared to other description styles. Moreover, these works only conduct homogeneous evaluation, where training and test data share the same description style. In this section, we also investigate how a model trained on one description style performs on other styles, especially in a scenario where chatbot developers may design their own descriptions. We first introduce the different styles of descriptions in our study; we then train models on each description style and evaluate them on tests with the corresponding homogeneous and heterogeneous styles of descriptions. Given the best performance of Cross-Encoder shown in the previous section and its popularity in the DSTC8 challenge, we adopt it as our model architecture in this section.
# 7.1 Benchmarking Styles
For each intent/slot, we describe its functionality with the following description styles:
Identifier. This is the least informative case of name-based description: we use only meaningless intent/slot identifiers, e.g., Intent_1, Slot_2; that is, we use no description from any schema component. We want to investigate how a simple identifier-based description performs in schema-guided dialog modeling, and the lower bound on performance when transferring to unseen services.
NameOnly. The original intent/slot names in the SG-DST and MULTIWOZ 2.2 datasets are used as descriptions, to test whether the name alone is enough for schema-guided dialog modeling.
Q-Name. This corresponds to previous work by Gao et al. (2019). For each slot, it generates a question to inquire about the slot value in the dialog, simply following the template 'What is the value for slot i?'. In addition, our work extends this to intent descriptions with the template 'Is the user intending to intent j?'.
Orig. The original descriptions in SG-DST and MULTIWOZ 2.2 dataset.
Q-Orig. This differs from Q-Name in two ways: first, it is based on the original descriptions; second, rather than always using the 'what is' template to ask about the intent/slot value, we choose 'what', 'which', 'how many', or 'when' depending on the entity type required for the slot. As with Q-Name, we simply add a prefix such as 'Is the user intending to...' in front of the original intent description. In sum, this style just adds question formatting to the original description; the motivation is to test whether the question format helps schema-guided dialog modeling.
To test model robustness, we also create two paraphrased versions, Name-Para and Orig-Para, for NameOnly and Orig respectively. We first use nematus (Sennrich et al., 2017) to automatically paraphrase each description via back-translation (from English to Chinese and back to English), and then manually check the paraphrases to ensure they retain the original meaning. Appendix A.5.1 shows examples of the different styles of schema descriptions.
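The question-style descriptions above are generated mechanically from templates; a minimal sketch of the Q-Name templates, with hypothetical schema element names for illustration:

```python
def slot_question(slot_name):
    # Q-Name slot template quoted in the text above
    return f"What is the value for {slot_name}?"

def intent_question(intent_name):
    # our intent extension of the template
    return f"Is the user intending to {intent_name}?"

# hypothetical schema element names, for illustration only
print(slot_question("number_of_seats"))
print(intent_question("ReserveRestaurant"))
```

Q-Orig would instead prefix the original description and vary the question word by entity type; that mapping is not specified beyond the four question words listed above.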
# 7.2 Results on Description Styles
Unlike the composition used in Table 2, we do not use the service description, to avoid its influence. For each style, we train separate models on the 4 subtasks and evaluate them on different target styles. First, Table 5 summarizes the performance for homogeneous evaluation, and Table 6 shows how the question-style descriptions benefit from SQuAD2 finetuning. We then conduct heterogeneous evaluation on the other styles<sup>7</sup>, as shown in Table 7.
<table><tr><td rowspan="2">Style\Task</td><td colspan="4">SG-DST</td><td colspan="2">MULTIWOZ 2.2</td></tr><tr><td>Intent</td><td>Req</td><td>Cat</td><td>NonCat</td><td>Cat</td><td>NonCat</td></tr><tr><td>Identifier</td><td>61.16</td><td>91.48</td><td>62.47</td><td>30.19</td><td>34.25</td><td>52.28</td></tr><tr><td>NameOnly</td><td>94.24</td><td>98.84</td><td>74.01</td><td>75.63</td><td>53.72</td><td>56.18</td></tr><tr><td>Q-Name</td><td>93.31</td><td>98.86</td><td>74.36</td><td>74.86</td><td>54.19</td><td>56.17</td></tr><tr><td>Orig</td><td>93.01</td><td>98.55</td><td>74.51</td><td>75.76</td><td>52.19</td><td>57.20</td></tr><tr><td>Q-Orig</td><td>93.42</td><td>98.51</td><td>76.64</td><td>76.60</td><td>53.61</td><td>57.80</td></tr></table>
Table 5: Homogeneous evaluation results of different description styles on the SG-DST and MULTIWOZ 2.2 datasets. The middle horizontal line separates the two name-based descriptions from the two rich descriptions in our settings. All numbers in the table are mixed performance over both seen and unseen services.
# 7.2.1 Homogeneous Evaluation
Is name-based description enough? As shown in Table 5, Identifier is the worst case of name-based description; its extremely poor performance indicates that name-based descriptions can be very unstable. However, we found that simple but meaningful name-based descriptions can actually perform best on the Intent and Req tasks, while performing worse on the Cat and NonCat tasks compared to the two rich description styles. After careful analysis of the intents in the SG-DST dataset, we found that most services contain only two kinds of intents: an information-retrieval intent with a name prefix such as "Find-," "Get-," or "Search-," and a transaction intent such as "Add-," "Reserve-," or "Buy-." Interestingly, all the intent names in the original schema-guided dataset strictly follow an action-object template composed of unabbreviated words, such as "FindEvents" and "BuyEventTickets". This simple name template is good enough to describe the core functionality of an intent in the SG-DST dataset. Additionally, Req is a relatively simpler task: requested information relates to specific attributes, such as "has_live_music" and "has_wifi", where keywords co-occur in the slot name and the user utterance, so a richer explanation cannot help further. On the other hand, rich descriptions are more necessary for the Cat and NonCat tasks, because in many cases the slot name is too terse to convey the functionality behind it; for example, the slot name "passengers" cannot fully represent the meaning "number of passengers in the ticket booking".
Does question format help? As shown in Table 5, comparing row Q-Orig vs. Orig, the extra question format improves performance on the Cat and NonCat tasks on both the SG-DST and MULTIWOZ 2.2 datasets, but not on the Intent and Req tasks. We believe the question format helps the model focus on specific entities in the dialog history. However, when adding a simple question pattern to NameOnly (comparing rows Q-Name and NameOnly), there is no consistent improvement on either dataset. Furthermore, we are curious whether BERT finetuned on SQuAD2 (SQuAD2-BERT) can further help the question format. Because NonCat is similar to span-based question answering, we focus on NonCat here. Table 6 shows that, after applying the supplementary training on SQuAD2 (§6), almost all models improve on the unseen splits but drop slightly on seen services. Moreover, compared to Q-Name, Q-
<table><tr><td rowspan="2">Style/Dataset</td><td colspan="3">SG-DST</td><td colspan="3">MULTIWOZ 2.2</td></tr><tr><td>all</td><td>seen</td><td>unseen</td><td>all</td><td>seen</td><td>unseen</td></tr><tr><td>Orig</td><td>+1.99</td><td>-1.79</td><td>+3.25</td><td>+1.93</td><td>-2.21</td><td>+4.27</td></tr><tr><td>Q-Orig</td><td>+6.13</td><td>-2.01</td><td>+8.84</td><td>+1.06</td><td>-1.28</td><td>+3.06</td></tr><tr><td>NameOnly</td><td>-0.45</td><td>-1.49</td><td>-0.11</td><td>+1.75</td><td>+0.58</td><td>+1.77</td></tr><tr><td>Q-Name</td><td>+0.05</td><td>-2.98</td><td>+1.04</td><td>-0.04</td><td>-0.32</td><td>+1.25</td></tr></table>
Orig is more similar to the natural questions in SQuAD2; we observe that Q-Orig gains more than Q-Name from the model pretrained on SQuAD2.
# 7.2.2 Heterogeneous Evaluation
In this subsection, we first simulate a scenario in which there is no recommended description style for future unseen services; unseen services can hence follow any description style. We average the evaluation performance over the three other description styles and summarize the results in Table 7. The $\Delta$ column shows the change relative to homogeneous performance. Unsurprisingly, almost all models perform worse on heterogeneous styles than on homogeneous styles, due to the distribution mismatch between training and evaluation. The bold number marks the best average heterogeneous performance for each subtask. The trends are similar to those in the homogeneous evaluation (§7.2.1): the name-based descriptions perform better than the rich descriptions on the intent classification task, while on the other tasks the Orig description is more robust, especially on the NonCat task.
Furthermore, we consider another scenario in which a fixed description convention such as NameOnly or Orig is suggested to developers: they must obey the basic style convention but can still freely choose their own wording, such as abbreviations, synonyms, or extra modifiers. We train each model on NameOnly and Orig, then evaluate on the corresponding paraphrased version. In the last two rows of Table 7, the column 'para' shows performance on the paraphrased schema, while $\Delta$ shows the change relative to the homogeneous evaluation. Orig remains more robust than NameOnly when schema descriptions are paraphrased on unseen services.
Table 6: Performance changes when using BERT finetuned on the SQuAD2 dataset for further finetuning on our NonCat task.
<table><tr><td rowspan="3">Style\Task</td><td colspan="7">SG-DST</td><td></td></tr><tr><td colspan="2">Intent(Acc)</td><td colspan="2">Req(F1)</td><td colspan="2">Cat(Joint Acc)</td><td>NonCat(Joint Acc)</td><td></td></tr><tr><td>mean</td><td>Δ</td><td>mean</td><td>Δ</td><td>mean</td><td>Δ</td><td>mean</td><td>Δ</td></tr><tr><td>NameOnly</td><td>82.47</td><td>-11.47</td><td>96.92</td><td>-1.64</td><td>61.37</td><td>-5.54</td><td>56.53</td><td></td></tr><tr><td>Q-Name</td><td>93.27</td><td>+0.58</td><td>97.88</td><td>-0.76</td><td>68.55</td><td>+2.63</td><td>62.92</td><td></td></tr><tr><td>Orig</td><td>79.47</td><td>-12.70</td><td>97.42</td><td>-0.74</td><td>68.58</td><td>-0.3</td><td>66.72</td><td></td></tr><tr><td>Q-Orig</td><td>84.57</td><td>-8.24</td><td>96.70</td><td>-1.45</td><td>68.40</td><td>-2.89</td><td>56.17</td><td></td></tr><tr><td></td><td>para</td><td>Δ</td><td>para</td><td>Δ</td><td>para</td><td>Δ</td><td>para</td><td></td></tr><tr><td>NameOnly</td><td>92.22</td><td>-1.74</td><td>97.69</td><td>-0.87</td><td>67.39</td><td>-0.7</td><td>67.17</td><td></td></tr><tr><td>Orig</td><td>91.54</td><td>-0.63</td><td>98.42</td><td>+0.26</td><td>71.74</td><td>+2.86</td><td>67.68</td><td></td></tr></table>
Table 7: Results on unseen services with heterogeneous description styles on the SG-DST dataset. More results and qualitative analysis are in appendix A.5.
# 8 Conclusion
In this paper, we studied three questions on schema-guided dialog state tracking: encoder architectures, the impact of supplementary training, and effective schema description styles. The main findings are as follows:
By caching the token embeddings instead of a single CLS embedding, a simple partial-attention Fusion-Encoder achieves much better performance than Dual-Encoder, while still inferring about two times faster than Cross-Encoder. We quantified the gains from supplementary training on two intermediate tasks. By carefully choosing representative description styles from recent work, we are the first to conduct both homogeneous and heterogeneous evaluations of different description styles in schema-guided dialog. The results show that simple name-based descriptions perform well on the Intent and Req tasks, while the NonCat task benefits from richer styles of description. All tasks suffer from inconsistencies in description style between training and test, though to varying degrees.
Our study is mainly conducted on two datasets, SG-DST and MULTIWOZ 2.2, but the speed-accuracy trade-offs of the encoder architectures and the findings on supplementary training are expected to be dataset-agnostic, because they depend more on the nature of the subtasks than on the datasets. Based on our proposed suite of benchmarking descriptions, the homogeneous and heterogeneous evaluations shed light on the robustness of cross-style schema-guided dialog modeling, and we believe our study will provide useful insights for future research.
# Acknowledgments
The authors wish to thank the anonymous reviewers and members of the Amazon LEX team for their valuable feedback.
# References
Daniel G Bobrow, Ronald M Kaplan, Martin Kay, Donald A Norman, Henry Thompson, and Terry Winograd. 1977. GUS, a frame-driven dialog system. Artificial Intelligence, 8(2):155-173.

Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014. Open question answering with weakly supervised embedding models. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 165-180. Springer.

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026.

Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Daniel Duckworth, Semih Yavuz, Ben Goodrich, Amit Dubey, Andy Cedilnik, and Kyu-Young Kim. 2019. Taskmaster-1: Toward a realistic and diverse dialog dataset. arXiv preprint arXiv:1909.05358.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.

Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 422-428.

Shuyang Gao, Sanchit Agarwal, Tagyoung Chung, Di Jin, and Dilek Hakkani-Tur. 2020. From machine reading comprehension to dialogue state tracking: Bridging the gap. arXiv preprint arXiv:2004.05827.

Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagyoung Chung, and Dilek Hakkani-Tur. 2019. Dialog state tracking: A neural reading comprehension approach. In 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 264.

Ting Han, Ximing Liu, Ryuichi Takanobu, Yixin Lian, Chongxuan Huang, Wei Peng, and Minlie Huang. 2020. MultiWOZ 2.3: A multi-domain task-oriented dataset enhanced with annotation corrections and co-reference annotation. arXiv e-prints, pages arXiv-2010.

Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292-299.

Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. arXiv preprint arXiv:2005.00796.

Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In ICLR.

John F Kelley. 1984. An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems (TOIS), 2(1):26-41.

Sungdong Kim, Sohee Yang, Gyuwan Kim, and Sang-Woo Lee. 2019. Efficient dialogue state tracking by selectively overwriting memory. arXiv preprint arXiv:1911.03906.

Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. SUMBT: Slot-utterance matching for universal and scalable belief tracking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5478-5483.

Ryan Lowe, Nissan Pow, Iulian Vlad Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285-294.

Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777-1788.

Chetan Naik, Arpit Gupta, Hancheng Ge, Mathias Lambert, and Ruhi Sarikaya. 2018. Contextual slot carryover for disparate schemas. Proc. Interspeech 2018, pages 596-600.

Mahdi Namazifar, Alexandros Papangelis, Gokhan Tur, and Dilek Hakkani-Tur. 2020. Language model is all you need: Natural language understanding as question answering. arXiv preprint arXiv:2011.03023.

Vahid Noroozi, Yang Zhang, Evelina Bakhturina, and Tomasz Kornuta. 2020. A fast and robust BERT-based dialogue state tracker for schema-guided dialogue dataset. Workshop on Conversational Systems Towards Mainstream Adoption at KDD 2020.

Baolin Peng, Chunyuan Li, Jinchao Li, Shahin Shayandeh, Lars Liden, and Jianfeng Gao. 2020. Soloist: Few-shot task-oriented dialog with a single pretrained auto-regressive model. arXiv, pages arXiv-2005.

Jason Phang, Thibault Févry, and Samuel R Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. AAAI 2019.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Schema-guided dialogue state tracking task at DSTC8. Workshop on DSTC8, AAAI 2020.

Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nădejde. 2017. Nematus: a toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 65-68, Valencia, Spain. Association for Computational Linguistics.

Pararth Shah, Dilek Hakkani-Tür, Bing Liu, and Gokhan Tur. 2018a. Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 41-51, New Orleans, Louisiana. Association for Computational Linguistics.

Pararth Shah, Dilek Hakkani-Tür, Gokhan Tur, Abhinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018b. Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871.

Nikhita Vedula, Nedim Lipka, Pranav Maneriker, and Srinivasan Parthasarathy. 2020. Open intent extraction from natural language interactions. In Proceedings of The Web Conference 2020, pages 2009-2020.

Jason D Williams, Antoine Raux, and Matthew Henderson. 2016. The dialog state tracking challenge series: A review. Dialogue & Discourse, 7(3):4-33.

Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808-819.

Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496-505.

Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W Bruce Croft, Jun Huang, and Haiqing Chen. 2018. Response ranking with deep matching networks and external knowledge in information-seeking conversation systems. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 245-254.

Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, and Jindong Chen. 2020. MultiWOZ 2.2: A dialogue dataset with additional annotation corrections and state tracking baselines. In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 109-117, Online. Association for Computational Linguistics.

Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wan, Philip S Yu, Richard Socher, and Caiming Xiong. 2019. Find or classify? Dual strategy for slot-value predictions on multi-domain dialog state tracking. arXiv, pages arXiv-1910.
# A Appendices
# A.1 Experiment Setup
All models are based on the BERT-base-cased model, trained on 2 V100 GPUs (with 16GB GPU RAM each). We train each model for a maximum of 10 epochs, using AdamW with a learning-rate warm-up proportion of 0.1. During training, we evaluate checkpoints every 3000 steps on the dev split, and select the model with the best dev performance over all seen and unseen services. In our experiments, the model reaches its best performance within about 2-4 epochs for Intent, Req, and Cat, while NonCat needs 5-8 epochs. Since all subtasks are modeled as sentence-pair encoding during training, we use a batch size of 16 per GPU with gradient accumulation over 8 steps, for a total batch size of 256 on 2 GPUs.
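The total batch size quoted above follows directly from per-GPU batch size, gradient-accumulation steps, and GPU count:

```python
# Effective batch size for the training setup described above
per_gpu_batch = 16    # batch size per GPU
grad_accum_steps = 8  # gradient accumulation steps
num_gpus = 2          # V100 GPUs
effective_batch = per_gpu_batch * grad_accum_steps * num_gpus
print(effective_batch)  # 256
```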
# A.2 Statistics on the MultiWOZ 2.2 Remix
To evaluate performance on seen/unseen services with MultiWOZ, we remix the MULTIWOZ 2.2 dataset: during training, dialogs related to restaurant, attraction, and train are included as seen services, and slots from other domains/services are eliminated from the training split. For dev, we add two new domains, hotel and taxi, as unseen services. For test, we add all remaining domains as unseen, including those that have minimal overlap with the seen services, such as hospital, police, and bus. The statistics are shown in Table 8.
<table><tr><td rowspan="2">Domain</td><td colspan="6">#dialogs/#turns</td></tr><tr><td colspan="2">train</td><td colspan="2">dev</td><td colspan="2">test</td></tr><tr><td>restaurant</td><td>3900</td><td>37953</td><td>458</td><td>6979</td><td>451</td><td>7104</td></tr><tr><td>attraction</td><td>2716</td><td>28632</td><td>405</td><td>6198</td><td>400</td><td>6290</td></tr><tr><td>train</td><td>3001</td><td>29646</td><td>481</td><td>5897</td><td>491</td><td>6150</td></tr><tr><td>hotel</td><td>0</td><td>0</td><td>737</td><td>8509</td><td>718</td><td>7911</td></tr><tr><td>taxi</td><td>0</td><td>0</td><td>374</td><td>2692</td><td>364</td><td>2659</td></tr><tr><td>hospital</td><td>0</td><td>0</td><td>0</td><td>0</td><td>287</td><td>766</td></tr><tr><td>police</td><td>0</td><td>0</td><td>0</td><td>0</td><td>252</td><td>475</td></tr><tr><td>bus</td><td>0</td><td>0</td><td>0</td><td>0</td><td>6</td><td>132</td></tr></table>
# A.3 Composition of Descriptions
# A.3.1 Composition Settings
For each subtask, the key description element must be included, e.g., the intent description for the intent task and the value for categorical slot tasks. To show how each component helps schema-guided dialog state tracking, we incrementally add richer schema components one by one.
ID. This is the least informative case: we use only meaningless intent/slot identifiers, e.g., Intent_4, Slot_2; that is, we use no description from any schema component. We want to investigate how a simple identifier-based description performs in schema-guided dialog modeling, and the lower bound on performance when transferring to unseen services.
|
| 356 |
+
|
| 357 |
+
I/S Desc. Using only the original intent/slot descriptions from the SG-DST and MULTIWOZ 2.2 datasets for the corresponding tasks.
Service + I/S Desc. Adding the service description to the above original description. The service description summarizes the functionality of the whole service and hence may offer extra background information for intents and slots. For categorical slot value detection, we simply append the value to each of the above compositions.
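The three settings can be illustrated with a small sketch; the templates, field names, and the `compose` helper are our own assumptions about how the pieces are concatenated, not the authors' exact format:

```python
# Hedged sketch of the three description compositions.
def compose(style, schema, value=None):
    if style == "ID":
        desc = schema["identifier"]          # e.g. "Intent_4" or "Slot_2"
    elif style == "I/S Desc":
        desc = schema["description"]         # original intent/slot description
    elif style == "Service + I/S Desc":
        desc = schema["service_description"] + " " + schema["description"]
    else:
        raise ValueError(style)
    # for categorical slot value detection, the value is appended
    return desc if value is None else desc + " " + value
```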
# A.3.2 Results on Description Compositions
Table 9 shows the results of using different description compositions. There are consistent findings across datasets and subtasks: (1) using meaningless identifiers as intent/slot descriptions yields the worst performance on all tasks of both datasets and does not generalize well to unseen services; (2) using intent/slot descriptions largely boosts performance, especially on unseen services.
Table 8: The total number of dialogs and turns related to each domain in the train, dev, and test splits of MultiWOZ.
<table><tr><td rowspan="2">Model\Task</td><td colspan="4">SG-DST</td><td colspan="2">MultiWOZ</td></tr><tr><td>Intent</td><td>Req</td><td>Cat</td><td>NonCat</td><td>Cat</td><td>NonCat</td></tr><tr><td colspan="7">Seen Service</td></tr><tr><td>Identifier</td><td>92.76</td><td>99.70</td><td>87.86</td><td>88.38</td><td>58.46</td><td>77.29</td></tr><tr><td>I/S Desc</td><td>95.35</td><td>99.74</td><td>92.10</td><td>93.52</td><td>85.84</td><td>83.67</td></tr><tr><td>Service + I/S Desc</td><td>95.28</td><td>99.74</td><td>93.19</td><td>92.34</td><td>85.07</td><td>80.56</td></tr><tr><td colspan="7">Unseen Service</td></tr><tr><td>Identifier</td><td>50.63</td><td>88.74</td><td>54.34</td><td>10.77</td><td>53.05</td><td>56.18</td></tr><tr><td>I/S Desc</td><td>92.17</td><td>98.16</td><td>68.88</td><td>69.84</td><td>56.49</td><td>61.39</td></tr><tr><td>Service + I/S Desc</td><td>86.95</td><td>97.99</td><td>67.08</td><td>71.30</td><td>60.58</td><td>59.63</td></tr></table>
Table 9: Results of models using different schema compositions on the test sets of SG-DST and our remixed MULTIWOZ 2.2.
However, the impact of the service description varies by task. For example, it largely hurts performance on the intent classification task, but does not affect the requested slot and categorical slot tasks. In a manual analysis of the SG-DST and MultiWOZ 2.2 datasets, we found that the service description lists the main functions of the service, especially the meanings of the supported intents. Hence, using the service description for intent classification causes confusion between the intent description and the other supported intents. Moreover, in the categorical slot value prediction task, the most important information is the slot description and the value. Adding extra information from the service description improves results marginally on seen services while not generalizing well to unseen services, which indicates the model learns artifacts that are not generally useful for unseen services.
Finally, on non-categorical slot tasks, the impact of the service description also varies across datasets. SG-DST has 16 domains and more than 30 services, so the rich background context from the service description contains both domain- and service-specific information, which seems to help on both seen and unseen services. On MULTIWOZ 2.2, however, it hurts performance on the seen service restaurant the most, while improving performance on the unseen service hotel by 4 points; here it acts like a regularizer rather than a definitive clue. Because MULTIWOZ 2.2 has only 8 domains and one service per domain, service descriptions contain only domain-related information without much extra signal, and thus do not help the model detect the span for a slot.
# A.4 More Results of Supplementary Training
Table 10 shows the detailed performance when using different intermediate tasks for supplementary training. For SNLI, as the pretrained model is an uncased model (textattack/bert-base-uncased-snli),
<table><tr><td rowspan="2"></td><td rowspan="2"></td><td colspan="12">sgd</td><td colspan="4">multiwoz</td><td></td><td></td></tr><tr><td colspan="3">intent</td><td colspan="3">req</td><td colspan="3">cat</td><td colspan="3">noncat</td><td colspan="3">cat</td><td colspan="2">noncat</td><td></td></tr><tr><td rowspan="3">snli</td><td>uncased</td><td>93.31</td><td>95.4</td><td>92.62</td><td>98.62</td><td>99.34</td><td>98.37</td><td>75.66</td><td>93.39</td><td>69.98</td><td>80.38</td><td>90.93</td><td>76.87</td><td>51.77</td><td>84.93</td><td>59.40</td><td>56.47</td><td>82.39</td><td>61.62</td></tr><tr><td>snli</td><td>93.82</td><td>95.42</td><td>93.3</td><td>98.43</td><td>99.72</td><td>97.99</td><td>74.03</td><td>90.52</td><td>68.75</td><td>75.68</td><td>90.83</td><td>70.62</td><td>53.82</td><td>85.53</td><td>58.70</td><td>60.11</td><td>83.44</td><td>66.46</td></tr><tr><td>ΔSNLI</td><td>+0.51</td><td>+0.02</td><td>+0.68</td><td>-0.19</td><td>+0.38</td><td>-0.38</td><td>-1.63</td><td>-2.87</td><td>-1.23</td><td>-4.7</td><td>-0.1</td><td>-6.25</td><td>+2.05</td><td>+0.6</td><td>-0.7</td><td>3.64</td><td>+1.05</td><td>+4.84</td></tr><tr><td rowspan="3">squad</td><td>cased</td><td>93.01</td><td>95.51</td><td>92.2</td><td>98.59</td><td>99.59</td><td>98.26</td><td>74.51</td><td>92.1</td><td>71.23</td><td>75.76</td><td>93.52</td><td>69.84</td><td>52.19</td><td>85.74</td><td>56.49</td><td>57.2</td><td>83.67</td><td>61.39</td></tr><tr><td>squad</td><td>91.2</td><td>95.34</td><td>90.88</td><td>98.34</td><td>99.58</td><td>97.93</td><td>71.64</td><td>89.08</td><td>66.06</td><td>77.75</td><td>91.73</td><td>73.09</td><td>52.23</td><td>85.03</td><td>56.90</td><td>59.13</td><td>81.46</td><td>65.66</td></tr><tr><td>ΔSQuAD</td><td>-1.81</td><td>-0.17</td><td>-1.32</td><td>-0.25</td><td>-0.01</td><td>-0.33</td><td>-2.87</td><td>-3.02</td><td>-5.17</td><td>1.99</td><td>-1.79</td><td>3.25</td><td>+0.04</td><td>-0.71</td><td>+0.41</td><td>1.93</td><td>-2.21</td><td>+4.27</td></tr></table>
Table 10: Results of different supplementary training on the SG-DST and MULTIWOZ 2.2 datasets.
<table><tr><td>style</td><td>Intent Description</td><td>Slot Description</td></tr><tr><td>Identifier</td><td>intent_1</td><td>slot_4</td></tr><tr><td>NameOnly</td><td>CheckBalance</td><td>account_type</td></tr><tr><td>Q-Name</td><td>Is the user intending to CheckBalance?</td><td>What is the value of account_type?</td></tr><tr><td>Orig</td><td>Check the amount of money in a user's bank account</td><td>The account type of the user</td></tr><tr><td>Q-Orig</td><td>Does the user want to check the amount of money in the bank account?</td><td>What is the account type of the user?</td></tr><tr><td>Name-Para</td><td>CheckAccountBalance</td><td>user_account_type</td></tr><tr><td>Orig-Para</td><td>Check the balance of the user's bank account</td><td>Type of the user account</td></tr></table>
hence, we first train different models with BERT-base-uncased and then compare their performance with the SNLI-pretrained model. For SQuAD2, we use the deepset/bert-base-cased-SQuAD2 model, and hence compare it with the cased model. To fairly compare with our original Cross-Encoder, we add extra speaker tokens [user:] and [system:] for encoding the multi-turn dialog histories.
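The speaker-token serialization can be sketched as follows; the exact input layout expected by the Cross-Encoder is an assumption, and `serialize_history` is a hypothetical helper:

```python
# Hedged sketch: flatten a multi-turn dialog history into one string,
# prefixing each utterance with its speaker token.
def serialize_history(turns):
    """turns: list of (speaker, utterance) with speaker in {'user', 'system'}."""
    parts = []
    for speaker, utt in turns:
        parts.append(f"[{speaker}:] {utt}")
    return " ".join(parts)
```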
# A.5 Homogeneous and Heterogeneous Evaluation on Different Styles
# A.5.1 Examples for Different Description Styles
Table 11 shows examples for different styles of schema descriptions.
# A.5.2 More details on SQuAD2 Results on Different Styles
For homogeneous evaluation, Table 12 shows the detailed performance when we apply the SQuAD2-finetuned BERT to our models.
# A.5.3 More Results On Homogeneous and Heterogeneous Evaluation
We list the detailed results for our evaluation across different styles. We use italics for the homogeneous evaluation, whose results appear on the diagonal of each table, and underline the best homogeneous result on the diagonal. We use bold for the best heterogeneous performance and the best performance gap in the last two columns.

Intent. The results on the SG-DST dataset are shown in Table 13. Because there are very few intents in the MULTIWOZ 2.2 dataset, we do not conduct intent classification on it.
Table 11: Different styles of schema descriptions.
<table><tr><td rowspan="2">Style/Dataset</td><td colspan="3">SG-DST</td><td colspan="3">MULTIWOZ 2.2</td></tr><tr><td>all</td><td>seen</td><td>unseen</td><td>all</td><td>seen</td><td>unseen</td></tr><tr><td rowspan="3">Orig</td><td>75.76</td><td>93.52</td><td>69.84</td><td>57.2</td><td>83.67</td><td>61.39</td></tr><tr><td>77.75</td><td>91.73</td><td>73.09</td><td>59.13</td><td>81.46</td><td>65.66</td></tr><tr><td>+1.99</td><td>-1.79</td><td>+3.25</td><td>+1.93</td><td>-2.21</td><td>+4.27</td></tr><tr><td rowspan="3">Q-Orig</td><td>76.60</td><td>92.86</td><td>71.18</td><td>57.80</td><td>82.45</td><td>62.45</td></tr><tr><td>82.73</td><td>90.85</td><td>80.02</td><td>58.86</td><td>81.17</td><td>65.51</td></tr><tr><td>+6.13</td><td>-2.01</td><td>+8.84</td><td>+1.06</td><td>-1.28</td><td>+3.06</td></tr><tr><td rowspan="3">NameOnly</td><td>75.63</td><td>88.90</td><td>71.21</td><td>56.18</td><td>81.68</td><td>61.30</td></tr><tr><td>75.18</td><td>87.41</td><td>71.10</td><td>57.93</td><td>82.26</td><td>63.07</td></tr><tr><td>-0.45</td><td>-1.49</td><td>-0.11</td><td>+1.75</td><td>+0.58</td><td>+1.77</td></tr><tr><td rowspan="3">Q-Name</td><td>74.86</td><td>91.78</td><td>69.22</td><td>56.17</td><td>81.19</td><td>60.47</td></tr><tr><td>74.91</td><td>88.8</td><td>70.26</td><td>56.13</td><td>80.87</td><td>61.72</td></tr><tr><td>+0.05</td><td>-2.98</td><td>+1.04</td><td>-0.04</td><td>-0.32</td><td>+1.25</td></tr></table>
Table 12: Results of different description styles on the SG-DST and MULTIWOZ 2.2 datasets when performing SQuAD2 supplementary training.
Performance drops when evaluating on heterogeneous description styles. For both heterogeneous and homogeneous evaluation, adding rich descriptions for the intent classification task does not seem to bring much benefit over simply using the name-based description. As discussed in §7.2.1, we believe the name template is good enough to describe the core functionality of an intent in the SG-DST dataset.
Requested Slot. Table 14 shows the results on the SG-DST dataset for the requested slot subtask. We ignore the requested slots in the MULTIWOZ 2.2 dataset due to their sparsity. Overall, the requested slot subtask is relatively easy; performance on heterogeneous styles still drops, but not by much. For both heterogeneous and homogeneous evaluation, performance is not sensitive to rich descriptions.
Categorical Slot. The results on the SG-DST and MULTIWOZ 2.2 datasets are shown in Table 15.
<table><tr><td>Style</td><td>NameOnly</td><td>Q-Name</td><td>Orig</td><td>Q-Orig</td><td>mean</td><td>Δ</td></tr><tr><td>NameOnly</td><td>93.94</td><td>78.27</td><td>93.18</td><td>75.95</td><td>82.47</td><td>-11.47</td></tr><tr><td>Q-Name</td><td>93.18</td><td>92.69</td><td>93.26</td><td>93.36</td><td>93.27</td><td>+0.58</td></tr><tr><td>Orig</td><td>81.57</td><td>66.42</td><td>92.17</td><td>90.43</td><td>79.47</td><td>-12.70</td></tr><tr><td>Q-Orig</td><td>81.48</td><td>79.04</td><td>93.19</td><td>92.81</td><td>84.57</td><td>-8.24</td></tr></table>
Table 13: Accuracy of the intent classification subtask with different description styles on unseen services. We train the model on the SG-DST dataset with the description style in each row, then evaluate on all 4 description styles. The mean is the average performance over the remaining 3 description styles, and $\Delta$ is the gap between the mean and the homogeneous performance.
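The mean and $\Delta$ columns can be reproduced directly from the table values; the worked example below uses the NameOnly row of Table 13 (homogeneous 93.94; heterogeneous 78.27, 93.18, 75.95):

```python
# mean = average of the three heterogeneous scores; delta = mean - homogeneous.
def mean_and_delta(homogeneous, heterogeneous):
    mean = round(sum(heterogeneous) / len(heterogeneous), 2)
    return mean, round(mean - homogeneous, 2)
```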
<table><tr><td>Style</td><td>NameOnly</td><td>Q-Name</td><td>Orig</td><td>Q-Orig</td><td>mean</td><td>Δ</td></tr><tr><td>NameOnly</td><td>98.56</td><td>96.01</td><td>97.2</td><td>97.54</td><td>96.92</td><td>-1.64</td></tr><tr><td>Q-Name</td><td>98.37</td><td>98.64</td><td>97.8</td><td>97.48</td><td>97.88</td><td>-0.76</td></tr><tr><td>Orig</td><td>97.95</td><td>95.78</td><td>98.16</td><td>98.52</td><td>97.42</td><td>-0.74</td></tr><tr><td>Q-Orig</td><td>97.24</td><td>95.85</td><td>97.00</td><td>98.15</td><td>96.70</td><td>-1.45</td></tr></table>
When creating MULTIWOZ 2.2 (Zang et al., 2020), slots with fewer than 50 different values were classified as categorical slots. We noticed that this leads to results inconsistent with the SG-DST dataset, making it hard to draw a consistent conclusion across the two datasets. Given this definition, we believe SG-DST is more suitable for the categorical slot subtask; we can further verify this once more schema-guided dialog datasets become available.
Non-categorical Slot. We conduct the non-categorical slot identification subtask on both the SG-DST and MULTIWOZ 2.2 datasets. The results are shown in Table 16. Overall, rich descriptions perform better in both homogeneous and heterogeneous evaluations.
# A.5.4 Qualitative Analysis On Heterogeneous Evaluation
We conduct a qualitative analysis of the heterogeneous evaluation with name-based descriptions. Table 17 shows how paraphrasing the name-based description impacts the categorical and non-categorical slot prediction tasks.
The first three rows at the top show cases of adding modifiers to the name. When the added
Table 14: F1 score of the requested slot classification subtask with different description styles on unseen services. We train the model on the SG-DST dataset with the description style in each row, then evaluate on all 4 description styles. The mean is the average performance over the remaining 3 description styles, and $\Delta$ is the gap between the mean and the homogeneous performance.
<table><tr><td>Style</td><td>NameOnly</td><td>Q-Name</td><td>Orig</td><td>Q-Orig</td><td>mean</td><td>Δ</td></tr><tr><td colspan="7">SG-DST</td></tr><tr><td>NameOnly</td><td>68.09</td><td>58.41</td><td>63.49</td><td>62.21</td><td>61.37</td><td>-6.72</td></tr><tr><td>Q-Name</td><td>69.01</td><td>68.29</td><td>68.53</td><td>68.12</td><td>68.55</td><td>+0.26</td></tr><tr><td>Orig</td><td>70.19</td><td>65.91</td><td>68.88</td><td>69.64</td><td>68.58</td><td>-0.30</td></tr><tr><td>Q-Orig</td><td>69.98</td><td>65.97</td><td>69.26</td><td>71.29</td><td>68.40</td><td>-2.89</td></tr><tr><td colspan="7">MULTIWOZ 2.2</td></tr><tr><td>NameOnly</td><td>59.24</td><td>59.32</td><td>59.12</td><td>59.29</td><td>59.24</td><td>0.00</td></tr><tr><td>Q-Name</td><td>58.64</td><td>59.74</td><td>58.49</td><td>59.43</td><td>58.85</td><td>-0.89</td></tr><tr><td>Orig</td><td>59.26</td><td>59.91</td><td>56.49</td><td>58.97</td><td>59.38</td><td>+2.89</td></tr><tr><td>Q-Orig</td><td>60.00</td><td>60.70</td><td>51.18</td><td>58.95</td><td>57.29</td><td>-1.66</td></tr></table>
Table 15: Joint accuracy of the categorical slot subtask with different description styles on unseen services. We train the model on the SG-DST and MULTIWOZ 2.2 datasets respectively with the description style in each row, then evaluate on all 4 description styles. The mean is the average performance over the remaining 3 description styles, and $\Delta$ is the gap between the mean and the homogeneous performance.
<table><tr><td>Style</td><td>NameOnly</td><td>Q-Name</td><td>Orig</td><td>Q-Orig</td><td>mean</td><td>Δ</td></tr><tr><td colspan="7">SG-DST</td></tr><tr><td>NameOnly</td><td>71.21</td><td>49.85</td><td>59.8</td><td>59.95</td><td>56.53</td><td>-14.68</td></tr><tr><td>Q-Name</td><td>66.32</td><td>69.22</td><td>61.67</td><td>60.77</td><td>62.92</td><td>-6.30</td></tr><tr><td>Orig</td><td>78.73</td><td>51.57</td><td>69.84</td><td>69.87</td><td>66.72</td><td>-3.12</td></tr><tr><td>Q-Orig</td><td>62.6</td><td>36.44</td><td>69.49</td><td>71.18</td><td>56.18</td><td>-15.00</td></tr><tr><td colspan="7">MULTIWOZ 2.2</td></tr><tr><td>NameOnly</td><td>61.30</td><td>57.88</td><td>61.51</td><td>64.05</td><td>61.15</td><td>-0.15</td></tr><tr><td>Q-Name</td><td>60.62</td><td>60.47</td><td>60.6</td><td>62.58</td><td>61.27</td><td>+0.80</td></tr><tr><td>Orig</td><td>61.77</td><td>65.4</td><td>61.39</td><td>62.4</td><td>63.19</td><td>+1.80</td></tr><tr><td>Q-Orig</td><td>61.29</td><td>60.6</td><td>62.46</td><td>62.45</td><td>61.45</td><td>-1.00</td></tr></table>
Table 16: Joint accuracy of the non-categorical slot subtask with different description styles on unseen services. We train the model on the SG-DST and MULTIWOZ 2.2 datasets respectively with the description style in each row, then evaluate on all 4 description styles. The mean is the average performance over the remaining 3 description styles, and $\Delta$ is the gap between the mean and the homogeneous performance.
extra modifiers are keywords of other slots (e.g., "attraction" is also a keyword of "attraction_name"), "attraction_location" may be wrongly predicted as "attraction_name". It seems the model does not understand compound nouns well and instead attends to the individual keywords "attraction" and "movie".
The three rows in the middle show cases of using synonyms. Changing "to" to "target" or "movie" to "film" causes extra confusion, which shows the model may fail on synonyms.
The last four rows at the bottom show cases of using abbreviations. Changing "number" to "num" does not impact the model, while changing "subtitle" to "sub" may make the model miss the key meaning of subtitle. The performance drop in the latter case may be due to the misuse of the "sub" prefix.
<table><tr><td>Service Name</td><td>Original Name</td><td>Paraphrased Name</td><td>Extra impact by the paraphrased name</td></tr><tr><td>Travel_1</td><td>location</td><td>attraction_location</td><td>Confused with other "attraction"-prefixed slots, e.g. attraction_name</td></tr><tr><td>Movies_1</td><td>genre</td><td>movie_genre</td><td>Confused with movie_name</td></tr><tr><td>Movies_1</td><td>price</td><td>ticket_price, total_price</td><td>No impact</td></tr><tr><td>Buses_3</td><td>to_city</td><td>target_city</td><td>The synonym "target" is not understood well by the model; confused with from_city</td></tr><tr><td>Movies_1</td><td>movie_name</td><td>film_name</td><td>The synonym "film" is not understood well; confused with theater_name</td></tr><tr><td>Hotels_2</td><td>where_to</td><td>house_location</td><td>Improved by the specific keyword "house"</td></tr><tr><td>Flights_4</td><td>origin_airport</td><td>orig_city_airport</td><td>More frequently predicted as slot "destination_airport"</td></tr><tr><td>Flights_4</td><td>destination_airport</td><td>dest_city_airport</td><td>More frequently predicted as slot "origin_airport"</td></tr><tr><td>Media_3</td><td>subtitle_language</td><td>sub-lang</td><td>Missing keyword "subtitle" makes the slot inactive</td></tr><tr><td>Flights_4</td><td>number_ofTickets</td><td>num_ofTickets</td><td>No impact</td></tr></table>
Table 17: We analyze the confusion matrix of the above slots before and after using the paraphrased name, and summarize the extra impact of each paraphrased name.
In English, "sub" usually means "secondary" or "less important". We also found that the "orig" and "dest" abbreviations are not understood well by the model either. The above abbreviations seem to be reasonable paraphrases that people would use when naming slots, yet they are not understood well in the given context. Hence, when designing schema-guided dialogs with name-based descriptions, we should be careful about the abbreviations used in naming.
acomparativestudyonschemaguideddialoguestatetracking/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:44f25c8c45db9128d0c415cee00875d121ded54bd0293253f81e58839940627f
size 690765
acomparativestudyonschemaguideddialoguestatetracking/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9294eb4d61e64e83f113cf8f5e0d1e2a733e2e89886e9d3563187a47a67df50b
size 444932
acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/6e63067d-fcb8-49b2-b6b7-811ec886b9d7_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5471f75bfa668997a1e22b46a90e66aa16f87e0606fed014b6ff9bf46575cc00
size 70999
acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/6e63067d-fcb8-49b2-b6b7-811ec886b9d7_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fedbf0c20bf9cbae78fb97fd8411e685ef255e64c4d0cda0cd7ba9b6f7ecd3f7
size 84378
acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/6e63067d-fcb8-49b2-b6b7-811ec886b9d7_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7225aafd09475626941196af91114ccd6a973967fa1c3a3f8e3d18c14e67a797
size 392414
acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/full.md
ADDED
@@ -0,0 +1,317 @@
# A Context-Dependent Gated Module for Incorporating Symbolic Semantics into Event Coreference Resolution
Tuan Lai, Heng Ji
University of Illinois at Urbana-Champaign
{tuanml2, hengji} @illinois.edu
Trung Bui, Quan Hung Tran, Franck Dernoncourt, Walter Chang
Adobe Research
{bui, qtran, franck.dernoncourt, wachang} @adobe.com
# Abstract
Event coreference resolution is an important research problem with many applications. Despite the recent remarkable success of pretrained language models, we argue that it is still highly beneficial to utilize symbolic features for the task. However, as the input for coreference resolution typically comes from upstream components in the information extraction pipeline, the automatically extracted symbolic features can be noisy and contain errors. Also, depending on the specific context, some features can be more informative than others. Motivated by these observations, we propose a novel context-dependent gated module to adaptively control the information flows from the input symbolic features. Combined with a simple noisy training method, our best models achieve state-of-the-art results on two datasets: ACE 2005 and KBP 2016.
# 1 Introduction
Within-document event coreference resolution is the task of clustering event mentions in a text that refer to the same real-world events (Lu and Ng, 2018). It is an important research problem, with many applications (Vanderwende et al., 2004; Ji and Grishman, 2011; Choubey et al., 2018). Since the trigger of an event mention is typically the word or phrase that most clearly describes the event, virtually all previous approaches employ features related to event triggers in one form or another. To achieve better performance, many methods also need to use a variety of additional symbolic features such as event types, attributes, and arguments (Chen et al., 2009; Chen and Ji, 2009; Zhang et al., 2015; Sammons et al., 2015; Lu and Ng, 2016; Chen and Ng, 2016; Duncan et al., 2017). Previous neural methods (Nguyen et al., 2016; Choubey and Huang, 2017; Huang et al., 2019) also use noncontextual word embeddings such as word2vec
... we are seeing these soldiers $\{\mathrm{head~out}\}_{\mathrm{ev1}}$ ...
... these soldiers were set to $\{\mathrm{leave}\}_{\mathrm{ev2}}$ in January ...
ev1 (Movement:Transport): Modality $=$ ASSERTED
ev2 (Movement:Transport): Modality $=$ OTHER
Table 1: An example of using the modality attribute to improve event coreference resolution.
(Mikolov et al., 2013) or GloVe (Pennington et al., 2014).
With the recent remarkable success of language models such as BERT (Devlin et al., 2019) and SpanBERT (Joshi et al., 2020), one natural question is whether we can simply use these models for coreference resolution without relying on any additional features. We argue that it is still highly beneficial to utilize symbolic features, especially when they are clean and have complementary information. Table 1 shows an example in the ACE 2005 dataset, where our baseline SpanBERT model incorrectly predicts the highlighted event mentions to be coreferential. The event triggers are semantically similar, making it challenging for our model to distinguish them. However, notice that the event $\{\text{head out}\}_{\text{ev1}}$ is mentioned as if it was a real occurrence, and so its modality attribute is ASSERTED (LDC, 2005). In contrast, because of the phrase "were set to", we can infer that the event $\{\text{leave}\}_{\text{ev2}}$ did not actually happen (i.e., its modality attribute is OTHER). Therefore, our model should be able to avoid the mistake if it utilizes additional symbolic features such as the modality attribute in this case.
There are several previous methods that use contextual embeddings together with type-based or argument-based information (Lu et al., 2020; Yu et al., 2020). For example, Lu et al. (2020) propose a new mechanism to better exploit event type information for coreference resolution. Despite their impressive performance, these methods are each specific to one particular type of additional information.
In this paper, we propose general and effective methods for incorporating a wide range of symbolic features into event coreference resolution. Simply concatenating symbolic features with contextual embeddings is not optimal, since the features can be noisy and contain errors. Also, depending on the context, some features can be more informative than others. Therefore, we design a novel context-dependent gated module to extract information from the symbolic features selectively. Combined with a simple regularization method that randomly adds noise into the features during training, our best models achieve state-of-the-art results on the ACE 2005 (Walker et al., 2006) and KBP 2016 (Mitamura et al., 2016) datasets. To the best of our knowledge, our work is the first to explicitly focus on dealing with various noisy symbolic features for event coreference resolution.
# 2 Methods
# 2.1 Preliminaries
We focus on within-document event coreference resolution. The input to our model is a document $D$ consisting of $n$ tokens and $k$ (predicted) event mentions $\{m_1, m_2, \ldots, m_k\}$ . For each $m_i$ , we denote the start and end indices of its trigger by $s_i$ and $e_i$ respectively. We assume the mentions are ordered based on $s_i$ (i.e., if $i \leq j$ then $s_i \leq s_j$ ).
We also assume each $m_{i}$ has $K$ (predicted) categorical features $\{c_i^{(1)},c_i^{(2)},\ldots ,c_i^{(K)}\}$ , with each $c_{i}^{(u)}\in \{1,2,\dots,N_u\}$ taking one of $N_{u}$ different discrete values. Table 2 lists the symbolic features we consider in this work. The definitions of the features and their possible values are in ACE and Rich ERE guidelines (LDC, 2005; Mitamura et al., 2016). The accuracy scores of the symbolic feature predictors are also shown in Table 2. We use OneIE (Lin et al., 2020) to identify event mentions along with their subtypes. For other symbolic features, we train a joint classification model based on SpanBERT. The appendix contains more details.
# 2.2 Single-Mention Encoder
Given a document $D$ , our model first forms a contextualized representation for each input token using a Transformer encoder (Joshi et al., 2020). Let $\mathbf{X} = (\mathbf{x}_1, \dots, \mathbf{x}_n)$ be the output of the encoder, where $\mathbf{x}_i \in \mathbb{R}^d$ . Then, for each mention $m_i$ , its trigger's representation $\mathbf{t}_i$ is defined as the average of its token embeddings:
$$
\mathbf{t}_i = \sum_{j=s_i}^{e_i} \frac{\mathbf{x}_j}{e_i - s_i + 1} \tag{1}
$$
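Eq. (1) is simply the mean over the trigger's token embeddings $\mathbf{x}_{s_i}, \dots, \mathbf{x}_{e_i}$ (indices inclusive); the pure-Python stand-in below (plain lists instead of tensors, hypothetical helper name) illustrates it:

```python
# Trigger representation: average of the token embeddings x[s..e] (inclusive).
def trigger_repr(x, s, e):
    """x: list of d-dimensional token embeddings (lists); s, e: inclusive indices."""
    n = e - s + 1
    return [sum(tok[k] for tok in x[s:e + 1]) / n for k in range(len(x[0]))]
```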
<table><tr><td>Dataset</td><td>Features</td><td>Acc. (Train)</td><td>Acc. (Dev)</td><td>Acc. (Test)</td></tr><tr><td rowspan="5">ACE 2005</td><td>Type</td><td>0.999</td><td>0.945</td><td>0.953</td></tr><tr><td>Polarity</td><td>0.999</td><td>0.994</td><td>0.988</td></tr><tr><td>Modality</td><td>0.999</td><td>0.856</td><td>0.884</td></tr><tr><td>Genericity</td><td>0.999</td><td>0.865</td><td>0.872</td></tr><tr><td>Tense</td><td>0.984</td><td>0.802</td><td>0.763</td></tr><tr><td rowspan="2">KBP 2016</td><td>Type</td><td>0.960</td><td>0.874</td><td>0.818</td></tr><tr><td>Realis</td><td>0.979</td><td>0.845</td><td>0.840</td></tr></table>
|
| 52 |
+
|
| 53 |
+
Table 2: Symbolic features we consider in this work.
|
| 54 |
+
|
| 55 |
+
Next, by using $K$ trainable embedding matrices, we convert the symbolic features of $m_{i}$ into $K$ vectors $\{\mathbf{h}_i^{(1)},\mathbf{h}_i^{(2)},\ldots ,\mathbf{h}_i^{(K)}\}$ , where $\mathbf{h}_i^{(u)}\in \mathbb{R}^l$ .
|
| 56 |
+
|
| 57 |
+
# 2.3 Mention-Pair Encoder and Scorer
|
| 58 |
+
|
| 59 |
+
Given two event mentions $m_{i}$ and $m_{j}$ , we define their trigger-based pair representation as:
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\mathbf {t} _ {i j} = \operatorname {F F N N} _ {\mathrm {t}} \left(\left[ \mathbf {t} _ {i}, \mathbf {t} _ {j}, \mathbf {t} _ {i} \circ \mathbf {t} _ {j} \right]\right) \tag {2}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where $\mathrm{FFNN_t}$ is a feedforward network mapping from $\mathbb{R}^{3\times d}\to \mathbb{R}^p$ , and $\circ$ is element-wise multiplication. Similarly, we can compute their featurebased pair representations $\{\mathbf{h}_{ij}^{(1)},\mathbf{h}_{ij}^{(2)},\dots ,\mathbf{h}_{ij}^{(K)}\}$ as follows:
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
\mathbf {h} _ {i j} ^ {(u)} = \operatorname {F F N N} _ {\mathrm {u}} \left(\left[ \mathbf {h} _ {i} ^ {(u)}, \mathbf {h} _ {j} ^ {(u)}, \mathbf {h} _ {i} ^ {(u)} \circ \mathbf {h} _ {j} ^ {(u)} \right]\right) \tag {3}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
where $u\in \{1,2,\ldots ,K\}$ , and $\mathrm{FFNN_u}$ is a feedforward network mapping from $\mathbb{R}^{3\times l}\to \mathbb{R}^p$
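
The input to each pairwise FFNN in Eqs. 2 and 3 has the same shape: the two vectors concatenated with their element-wise product. A minimal sketch with toy vectors (the FFNN itself is omitted; `pair_features` is a hypothetical helper name):

```python
def pair_features(a, b):
    """Build the FFNN input [a, b, a ∘ b] used in Eqs. 2 and 3."""
    hadamard = [ai * bi for ai, bi in zip(a, b)]  # element-wise product a ∘ b
    return a + b + hadamard                       # concatenation: two R^d vectors -> R^(3d)

t_i = [1.0, 2.0]
t_j = [3.0, 0.5]
print(pair_features(t_i, t_j))  # [1.0, 2.0, 3.0, 0.5, 3.0, 1.0]
```

The element-wise product term lets the downstream network pick up on dimension-wise agreement between the two mention vectors, which a plain concatenation would have to learn from scratch.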

Now, the most straightforward way to build the final pair representation $\mathbf{f}_{ij}$ of $m_{i}$ and $m_{j}$ is to simply concatenate the trigger-based representation and all the feature-based representations:

$$
\mathbf{f}_{ij} = \left[\mathbf{t}_{ij}, \mathbf{h}_{ij}^{(1)}, \mathbf{h}_{ij}^{(2)}, \dots, \mathbf{h}_{ij}^{(K)}\right] \tag{4}
$$

However, this approach is not always optimal. First, since the symbolic features are predicted, they can be noisy and contain errors; the performance of most symbolic feature predictors is far from perfect (Table 2). Second, depending on the specific context, some features can be more useful than others.

Inspired by studies on gated modules (Lin et al., 2019; Lai et al., 2019), we propose the Context-Dependent Gated Module (CDGM), which uses a gating mechanism to selectively extract information from the input symbolic features (Figure 1). Given two mentions $m_{i}$ and $m_{j}$, we use their trigger feature vector $\mathbf{t}_{ij}$ as the main controlling context to compute the filtered representation $\overline{\mathbf{h}}_{ij}^{(u)}$:

$$
\overline{\mathbf{h}}_{ij}^{(u)} = \mathrm{CDGM}^{(u)}\left(\mathbf{t}_{ij}, \mathbf{h}_{ij}^{(u)}\right) \tag{5}
$$

Figure 1: Overall architecture of our mention-pair encoder, which uses CDGMs to incorporate symbolic features.

where $u\in \{1,2,\ldots ,K\}$. More specifically:

$$
\mathbf{g}_{ij}^{(u)} = \sigma\left(\mathrm{FFNN}_{\mathrm{g}}^{(u)}\left(\left[\mathbf{t}_{ij}, \mathbf{h}_{ij}^{(u)}\right]\right)\right)
$$

$$
\mathbf{o}_{ij}^{(u)}, \mathbf{p}_{ij}^{(u)} = \mathrm{DECOMPOSE}\left(\mathbf{t}_{ij}, \mathbf{h}_{ij}^{(u)}\right) \tag{6}
$$

$$
\overline{\mathbf{h}}_{ij}^{(u)} = \mathbf{g}_{ij}^{(u)} \circ \mathbf{o}_{ij}^{(u)} + \left(1 - \mathbf{g}_{ij}^{(u)}\right) \circ \mathbf{p}_{ij}^{(u)}
$$

where $\sigma$ denotes the sigmoid function, and $\mathrm{FFNN}_{\mathrm{g}}^{(u)}$ is a mapping from $\mathbb{R}^{2\times p}\rightarrow \mathbb{R}^p$. At a high level, $\mathbf{h}_{ij}^{(u)}$ is decomposed into an orthogonal component and a parallel component, and $\overline{\mathbf{h}}_{ij}^{(u)}$ is simply the fusion of these two components. To find the optimal mixture, the gate $\mathbf{g}_{ij}^{(u)}$ controls the composition. The decomposition unit is defined as:

$$
\text{Parallel:}\quad \mathbf{p}_{ij}^{(u)} = \frac{\mathbf{h}_{ij}^{(u)} \cdot \mathbf{t}_{ij}}{\mathbf{t}_{ij} \cdot \mathbf{t}_{ij}}\,\mathbf{t}_{ij} \tag{7}
$$

$$
\text{Orthogonal:}\quad \mathbf{o}_{ij}^{(u)} = \mathbf{h}_{ij}^{(u)} - \mathbf{p}_{ij}^{(u)}
$$

where $\cdot$ denotes the dot product. The parallel component $\mathbf{p}_{ij}^{(u)}$ is the projection of $\mathbf{h}_{ij}^{(u)}$ onto $\mathbf{t}_{ij}$. It can be viewed as containing information that is already part of $\mathbf{t}_{ij}$. In contrast, $\mathbf{o}_{ij}^{(u)}$ is orthogonal to $\mathbf{t}_{ij}$ and so it can be viewed as containing new information. Intuitively, when the original symbolic feature vector $\mathbf{h}_{ij}^{(u)}$ is very clean and has complementary information, we want to utilize the new information in $\mathbf{o}_{ij}^{(u)}$ (i.e., we want $\mathbf{g}_{ij}^{(u)} \approx \mathbf{1}$), and vice versa.
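
The decomposition and gated fusion of Eqs. 6 and 7 reduce to a few dot products. A minimal sketch with toy 2-dimensional vectors; for illustration the gate is supplied directly instead of being produced by $\mathrm{FFNN}_{\mathrm{g}}^{(u)}$:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def decompose(t, h):
    """Split h into a component parallel to t and one orthogonal to it (Eq. 7)."""
    scale = dot(h, t) / dot(t, t)           # projection coefficient
    p = [scale * tk for tk in t]            # parallel: projection of h onto t
    o = [hk - pk for hk, pk in zip(h, p)]   # orthogonal: the "new" information
    return o, p

def cdgm(t, h, g):
    """Gated fusion (Eq. 6): g ≈ 1 keeps the orthogonal (novel) component."""
    o, p = decompose(t, h)
    return [gk * ok + (1 - gk) * pk for gk, ok, pk in zip(g, o, p)]

t = [1.0, 0.0]             # trigger-based pair vector (toy)
h = [2.0, 3.0]             # symbolic-feature pair vector (toy)
o, p = decompose(t, h)
print(o, p)                # [0.0, 3.0] [2.0, 0.0]
assert abs(dot(o, t)) < 1e-9       # orthogonal component shares nothing with t
print(cdgm(t, h, [1.0, 1.0]))      # [0.0, 3.0]  (fully trust the new information)
```

With the gate at $\mathbf{0}$ the output collapses to the parallel component, i.e., information the trigger context already carries.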

Finally, after using CDGMs to distill the symbolic features, the final pair representation $\mathbf{f}_{ij}$ of $m_{i}$ and $m_j$ is computed as follows:

$$
\mathbf{f}_{ij} = \left[\mathbf{t}_{ij}, \overline{\mathbf{h}}_{ij}^{(1)}, \overline{\mathbf{h}}_{ij}^{(2)}, \dots, \overline{\mathbf{h}}_{ij}^{(K)}\right] \tag{8}
$$

The coreference score $s(i,j)$ of $m_{i}$ and $m_{j}$ is then:

$$
s(i, j) = \mathrm{FFNN}_{\mathrm{a}}\left(\mathbf{f}_{ij}\right) \tag{9}
$$

where $\mathrm{FFNN_a}$ is a mapping from $\mathbb{R}^{(K + 1)\times p}\to \mathbb{R}$.

# 2.4 Training and Inference

Algorithm 1: Noise Addition for Symbolic Features

Input: Document $D$

Hyperparameters: $\{\epsilon_1, \epsilon_2, \dots, \epsilon_K\}$

for $i = 1\ldots k$ do
&emsp;for $u = 1\ldots K$ do
&emsp;&emsp;With prob. $\epsilon_{u}$, replace $c_{i}^{(u)}$ by $\hat{c}_i^{(u)}\sim \mathrm{Uniform}(N_u)$
&emsp;end
end

Training We use the same loss function as in (Lee et al., 2017). Also, notice that the training accuracy of a feature predictor is typically much higher than its accuracy on the dev/test set (Table 2). If we simply train our model without any regularization, our CDGMs will rarely come across noisy symbolic features during training. Therefore, to encourage our CDGMs to actually learn to distill reliable signals, we also propose a simple but effective noisy training method. Before passing a training batch to the model, we randomly add noise to the predicted features. More specifically, for each document $D$ in the batch, we go through every symbolic feature of every event mention in $D$ and consider sampling a new value for the feature. The operation is described in Algorithm 1 (we use the same notation as in Section 2.1). $\{\epsilon_1,\epsilon_2,\dots ,\epsilon_K\}$ are hyperparameters determined by validation. In general, the larger the discrepancy between the train and test accuracies of a feature's predictor, the larger the corresponding $\epsilon_u$ should be.
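
Algorithm 1 can be sketched in a few lines of plain Python. The encoding below is an assumption for illustration: `features[i][u]` holds $c_i^{(u)} \in \{1,\dots,N_u\}$, and the function name is hypothetical:

```python
import random

def add_feature_noise(features, n_values, eps, rng=random):
    """With prob. eps[u], resample mention i's u-th feature uniformly (Algorithm 1)."""
    for i in range(len(features)):       # loop over event mentions m_1 .. m_k
        for u in range(len(eps)):        # loop over the K symbolic features
            if rng.random() < eps[u]:
                features[i][u] = rng.randint(1, n_values[u])  # ~ Uniform(N_u)
    return features

rng = random.Random(0)
feats = [[1, 2], [3, 1]]                 # two mentions, K = 2 features each
noisy = add_feature_noise(feats, n_values=[4, 3], eps=[0.5, 0.1], rng=rng)
print(noisy)                             # still K values per mention, each in range
```

Setting every $\epsilon_u$ to 0 leaves the features untouched, recovering ordinary training.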

<table><tr><td>ACE (Cross-Validation)</td><td>CoNLL</td><td>AVG</td></tr><tr><td>SSED + SupervisedExtended (2016)</td><td>55.23</td><td>52.53</td></tr><tr><td>SSED + MSEP (2016)</td><td>53.80</td><td>51.38</td></tr><tr><td>ACE (Test Data)</td><td>CoNLL</td><td>AVG</td></tr><tr><td>Baseline</td><td>58.93</td><td>55.78</td></tr><tr><td>Simple (All Features)</td><td>57.55</td><td>54.79</td></tr><tr><td>CDGM (All Features)</td><td>58.99</td><td>56.32</td></tr><tr><td>Noise (All Features)</td><td>60.43</td><td>57.85</td></tr><tr><td>CDGM + Noise (All Features)</td><td>62.07</td><td>59.76</td></tr></table>

Table 3: End-to-end results on ACE 2005 (using predicted triggers and predicted symbolic features).

<table><tr><td>System</td><td>CoNLL</td><td>AVG</td></tr><tr><td>UTD's system (2015)</td><td>32.69</td><td>30.08</td></tr><tr><td>Joint Learning (2017b)</td><td>35.77</td><td>33.08</td></tr><tr><td>E³C (2020)</td><td>41.97</td><td>38.66</td></tr><tr><td>Baseline</td><td>40.57</td><td>37.59</td></tr><tr><td>Simple (All Features)</td><td>41.40</td><td>38.58</td></tr><tr><td>CDGM + Noise (All Features)</td><td>43.55</td><td>40.61</td></tr></table>

Inference For each (predicted) mention $m_{i}$, our model assigns an antecedent $a_{i}$ from all preceding mentions or a dummy antecedent $\epsilon$: $a_{i} \in Y(i) = \{\epsilon, m_{1}, m_{2}, \ldots, m_{i-1}\}$, where $a_{i} = \arg \max_{j < i} s(i, j)$. The dummy antecedent $\epsilon$ represents two possible cases: (1) $m_{i}$ is not actually an event mention, or (2) $m_{i}$ is indeed an event mention but is not coreferent with any previously extracted mention. In addition, we fix $s(i, \epsilon)$ to be 0.
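
The greedy antecedent selection of Section 2.4 (pick $\arg\max_{j<i} s(i,j)$, with the dummy score fixed at 0) can be sketched as follows; the function name and score values are illustrative:

```python
def assign_antecedent(scores):
    """scores[j] = s(i, m_{j+1}) for the i-th mention's preceding mentions.
    Returns the index of the best antecedent, or None for the dummy (score 0)."""
    best_j, best_s = None, 0.0   # dummy antecedent epsilon has fixed score 0
    for j, s in enumerate(scores):
        if s > best_s:
            best_j, best_s = j, s
    return best_j

print(assign_antecedent([-1.2, 0.4, 0.1]))  # 1  (the second mention wins)
print(assign_antecedent([-0.5, -2.0]))      # None (start a new cluster)
```

Because the dummy score is 0, a mention joins an existing cluster only when some pairwise score is strictly positive; otherwise it starts its own cluster.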
# 3 Experiments and Results

Data and Experiment Setup We evaluate our methods on two English datasets: ACE 2005 (Walker et al., 2006) and KBP 2016 (Ji et al., 2016; Mitamura et al., 2016). We report results in terms of F1 scores obtained using the CoNLL and AVG metrics. By definition, these metrics are averages of other standard coreference metrics, including $\mathbf{B}^3$, MUC, $\mathrm{CEAF}_e$, and BLANC (Lu and Ng, 2018). We use SpanBERT (spanbert-base-cased) as the Transformer encoder (Wolf et al., 2020; Joshi et al., 2020). More details about the datasets and hyperparameters are in the appendix. We refer to models that use only trigger features as [Baseline]; in a baseline model, $\mathbf{f}_{ij}$ is simply $\mathbf{t}_{ij}$ (Eq. 2). We refer to models that use only the simple concatenation strategy as [Simple] (Eq. 4), and to models that combine the simple concatenation strategy with the noisy training method as [Noise].

Overall Results (on Predicted Mentions) Table 3 and Table 4 show the overall end-to-end results on ACE 2005 and KBP 2016, respectively. We use OneIE (Lin et al., 2020) to extract event mentions and their types; other features are predicted by a simple Transformer-based model. Overall, our full model outperforms the baseline model by a large margin and significantly outperforms the state of the art on KBP 2016. Our ACE 2005 scores are not directly comparable with previous work, as Peng et al. (2016) conducted 10-fold cross-validation and essentially used more training data. Nevertheless, the magnitude of the differences between our best model and the state-of-the-art methods indicates the effectiveness of our methods.

Table 4: End-to-end results on KBP 2016 (using predicted triggers and predicted symbolic features).

<table><tr><td>ACE (Test Data)</td><td>CoNLL</td><td>AVG</td></tr><tr><td>PAIREDRL (2020)</td><td>84.65</td><td>-</td></tr><tr><td>Baseline</td><td>81.62</td><td>81.49</td></tr><tr><td>Simple (All Features)</td><td>75.32</td><td>74.94</td></tr><tr><td>CDGM + Noise (All Features)</td><td>84.76</td><td>83.95</td></tr></table>

Table 5: Results on ACE 2005 using gold triggers and predicted symbolic features.

<table><tr><td>ACE (Test Data)</td><td>CoNLL</td><td>AVG</td></tr><tr><td>Baseline</td><td>81.62</td><td>81.49</td></tr><tr><td>Simple (All Features)</td><td>85.75</td><td>85.40</td></tr><tr><td>CDGM (All Features)</td><td>87.90</td><td>88.30</td></tr><tr><td>CDGM + Noise (All Features)</td><td>85.40</td><td>85.38</td></tr></table>

Table 6: Results on ACE 2005 using gold triggers and ground-truth symbolic features.

Overall Results (on Ground-truth Triggers) The overall results on ACE 2005 using ground-truth triggers and predicted symbolic features are shown in Table 5. The performance of our full model is comparable with the previous state-of-the-art result of Yu et al. (2020). To better analyze the usefulness of symbolic features as well as the effectiveness of our methods, we also conduct experiments using ground-truth triggers and ground-truth symbolic features (Table 6). First, when the symbolic features are clean, incorporating them with the simple concatenation strategy already boosts performance significantly; the symbolic features contain information complementary to that in the SpanBERT contextual embeddings. Second, the noisy training method is not helpful when the symbolic features are clean. Unlike regularization methods such as dropout (Srivastava et al., 2014) and weight decay (Krogh and Hertz, 1992), the main role of our noisy training method is not to reduce overfitting in the traditional sense; its main function is to help CDGMs learn to distill reliable signals from noisy features.

<table><tr><td>Features</td><td>AVG (Simple)</td><td>AVG (CDGM + Noise)</td><td>ΔAVG</td></tr><tr><td>Subtype</td><td>56.41</td><td>57.02</td><td>+0.61</td></tr><tr><td>Polarity</td><td>56.06</td><td>57.03</td><td>+0.97</td></tr><tr><td>Modality</td><td>54.81</td><td>58.54</td><td>+3.73</td></tr><tr><td>Genericity</td><td>54.70</td><td>57.82</td><td>+3.12</td></tr><tr><td>Tense</td><td>54.28</td><td>56.62</td><td>+2.34</td></tr></table>

Impact of Different Symbolic Features Table 7 shows the results of incorporating different types of symbolic features on the ACE 2005 dataset. Overall, our methods consistently perform better than the simple concatenation strategy across all feature types. The gains are also larger for noisier features than for cleaner ones (feature prediction accuracies are shown in Table 2). This suggests that our methods are particularly useful when the symbolic features are noisy.

Comparison with Multi-Task Learning We also investigate whether we can incorporate symbolic semantics into coreference resolution by simply doing multi-task training. We train our baseline model to jointly perform coreference resolution and symbolic feature prediction. The test AVG score on ACE 2005 is only 56.5. In contrast, our best model achieves an AVG score of 59.76 (Table 3).

Qualitative Examples Table 8 shows a few examples from the ACE 2005 dataset that illustrate how incorporating symbolic features using our proposed methods can improve the performance of event coreference resolution. In each example, our baseline model incorrectly predicts the highlighted event mentions to be coreferential.

Remaining Challenges Previous studies suggest that there exist different types and degrees of event coreference (Recasens et al., 2011; Hovy et al., 2013). Many methods (including ours) focus on the full (strict) coreference task, but other types of coreference, such as partial coreference, remain underexplored. Hovy et al. (2013) define two core types of partial event coreference relations: subevent relations and membership relations. Subevent relations form a stereotypical sequence of events, whereas membership relations represent instances of an event collection. We leave the partial coreference task to future work.
# 4 Related Work

Several previous approaches to within-document event coreference resolution operate by first applying a mention-pair model to compute pairwise distances between event mentions and then applying a clustering algorithm such as agglomerative clustering or spectral graph clustering (Chen et al., 2009; Chen and Ji, 2009; Chen and Ng, 2014; Nguyen et al., 2016; Huang et al., 2019). In addition to trigger features, these methods use a variety of additional symbolic features such as event types, attributes, arguments, and distance. These approaches do not use contextual embeddings such as BERT and SpanBERT (Devlin et al., 2019; Joshi et al., 2020). More recently, several studies have used contextual embeddings together with type-based or argument-based information (Lu et al., 2020; Yu et al., 2020). These methods design networks or mechanisms that are specific to only one type of symbolic feature. In contrast, our work is more general and can be effectively applied to a wide range of symbolic features.

Table 7: Impact of different symbolic features (ACE 2005).

<table><tr><td>...{Negotiations}ev1 between Washington and ...</td></tr><tr><td>...think that this will affect the{elections}ev2 unless ...</td></tr><tr><td>ev1(Contact:Meet)</td></tr><tr><td>ev2(Personnel:Elect)</td></tr><tr><td>...since you are not directly{elected}ev1, it would be ...</td></tr><tr><td>...Az-Zaman daily that{elections}ev2 should be held ...</td></tr><tr><td>ev1(Personnel:Elect): Polarity = NEGATIVE</td></tr><tr><td>ev2(Personnel:Elect): Polarity = POSITIVE</td></tr><tr><td>...told reporters after his{appeal}ev1 was rejected ...</td></tr><tr><td>...most junior of the court of{appeal}ev2, and its ...</td></tr><tr><td>ev1(Justice:Appeal): Genericity = SPECIFIC</td></tr><tr><td>ev2(Justice:Appeal): Genericity = GENERIC</td></tr></table>

Table 8: Examples of using symbolic features to improve event coreference resolution.

# 5 Conclusions and Future Work

In this work, we propose a novel gated module to incorporate symbolic semantics into event coreference resolution. Combined with a simple noisy training technique, our best models achieve competitive results on ACE 2005 and KBP 2016. In the future, we aim to extend our work to address more general problems such as cross-lingual cross-document coreference resolution.
# Acknowledgement

This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, and Air Force No. FA8650-17-C-7715. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

# References
Chen Chen and Vincent Ng. 2014. SinoCoreferencer: An end-to-end Chinese event coreference resolver. In LREC.

Chen Chen and Vincent Ng. 2016. Joint inference over a lightly supervised information extraction pipeline: Towards event coreference resolution for resource-scarce languages. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 2913-2920. AAAI Press.

Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167-176, Beijing, China. Association for Computational Linguistics.

Zheng Chen and Heng Ji. 2009. Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing (TextGraphs-4), pages 54-57, Suntec, Singapore. Association for Computational Linguistics.

Zheng Chen, Heng Ji, and Robert Haralick. 2009. A pairwise event coreference model, feature impact and evaluation for event coreference resolution. In Proceedings of the Workshop on Events in Emerging Text Types, pages 17-22, Borovets, Bulgaria. Association for Computational Linguistics.

Prafulla Kumar Choubey and Ruihong Huang. 2017. Event coreference resolution by iteratively unfolding inter-dependencies among events. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2124-2133, Copenhagen, Denmark. Association for Computational Linguistics.

Prafulla Kumar Choubey, Kaushik Raju, and Ruihong Huang. 2018. Identifying the most dominant event in a news article by mining event coreference relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 340-345, New Orleans, Louisiana. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Chase Duncan, Liang-Wei Chan, Haoruo Peng, Hao Wu, Shyam Upadhyay, Nitish Gupta, Chen-Tse Tsai, Mark Sammons, and Dan Roth. 2017. UI CCG TAC-KBP2017 submissions: Entity discovery and linking, and event nugget detection and co-reference. In Proceedings of the 2017 Text Analysis Conference, TAC 2017, Gaithersburg, Maryland, USA, November 13-14, 2017. NIST.

Charles R. Harris, K. Jarrod Millman, Stéfan van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. 2020. Array programming with NumPy. Nature, 585(7825):357-362.

Eduard Hovy, Teruko Mitamura, Felisa Verdejo, Jun Araki, and Andrew Philpot. 2013. Events are not simple: Identity, non-identity, and quasi-identity. In Workshop on Events: Definition, Detection, Coreference, and Representation, pages 21-28, Atlanta, Georgia. Association for Computational Linguistics.

Yin Jou Huang, Jing Lu, Sadao Kurohashi, and Vincent Ng. 2019. Improving event coreference resolution by learning argument compatibility from unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 785-795, Minneapolis, Minnesota. Association for Computational Linguistics.

Heng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1148-1158, Portland, Oregon, USA. Association for Computational Linguistics.

Heng Ji, Joel Nothman, and Hoa Trang Dang. 2016. Overview of TAC-KBP2016 trilingual EDL and its impact on end-to-end cold-start KBP. In Proceedings of TAC.

Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.

Anders Krogh and John Hertz. 1992. A simple weight decay can improve generalization. In Advances in Neural Information Processing Systems, volume 4. Morgan-Kaufmann.

Tuan Lai, Quan Hung Tran, Trung Bui, and Daisuke Kihara. 2019. A gated self-attention memory network for answer selection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5953-5959, Hong Kong, China. Association for Computational Linguistics.

LDC. 2005. English annotation guidelines for events version 5.4.3. Linguistic Data Consortium, 1.

Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.

Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, Online. Association for Computational Linguistics.

Ying Lin, Liyuan Liu, Heng Ji, Dong Yu, and Jiawei Han. 2019. Reliability-aware dynamic feature composition for name tagging. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 165-174, Florence, Italy. Association for Computational Linguistics.

Jing Lu and Vincent Ng. 2017a. Learning antecedent structures for event coreference resolution. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 113-118.

Jing Lu and Vincent Ng. 2015. UTD's event nugget detection and coreference system at KBP 2015. In Proceedings of the 2015 Text Analysis Conference, TAC 2015, Gaithersburg, Maryland, USA, November 16-17, 2015. NIST.

Jing Lu and Vincent Ng. 2016. Event coreference resolution with multi-pass sieves. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3996-4003, Portorož, Slovenia. European Language Resources Association (ELRA).

Jing Lu and Vincent Ng. 2017b. Joint learning for event coreference resolution. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 90-101, Vancouver, Canada. Association for Computational Linguistics.

Jing Lu and Vincent Ng. 2018. Event coreference resolution: A survey of two decades of research. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 5479-5486. International Joint Conferences on Artificial Intelligence Organization.

Yaojie Lu, Hongyu Lin, Jialong Tang, Xianpei Han, and Le Sun. 2020. End-to-end neural event coreference resolution. arXiv preprint arXiv:2009.08153.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, Red Hook, NY, USA. Curran Associates Inc.

Teruko Mitamura, Zhengzhong Liu, and Eduard H. Hovy. 2016. Overview of TAC-KBP 2016 event nugget track. In Proceedings of the 2016 Text Analysis Conference, TAC 2016, Gaithersburg, Maryland, USA, November 14-15, 2016. NIST.

Thien Huu Nguyen, Adam Meyers, and Ralph Grishman. 2016. New York University 2016 system for KBP event nugget: A deep learning approach. In Proceedings of the 2016 Text Analysis Conference, TAC 2016, Gaithersburg, Maryland, USA, November 14-15, 2016. NIST.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. ArXiv, abs/1912.01703.

Haoruo Peng, Yangqiu Song, and Dan Roth. 2016. Event detection and co-reference with minimal supervision. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 392-402, Austin, Texas. Association for Computational Linguistics.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.

Marta Recasens, Eduard Hovy, and M. Antònia Martí. 2011. Identity, non-identity, and near-identity: Addressing the complexity of coreference. Lingua, 121:1138-1152.

Mark Sammons, Haoruo Peng, Yangqiu Song, Shyam Upadhyay, Chen-Tse Tsai, Pavankumar Reddy, Subhro Roy, and Dan Roth. 2015. Illinois CCG TAC 2015 event nugget, entity discovery and linking, and slot filler validation systems. In Proceedings of the 2015 Text Analysis Conference, TAC 2015, Gaithersburg, Maryland, USA, November 16-17, 2015. NIST.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(56):1929-1958.

Lucy Vanderwende, Michele Banko, and Arul Menezes. 2004. Event-centric summary generation. In Working notes of the Document Understanding Conference 2004. ACL.

Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57:45.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Xiaodong Yu, Wenpeng Yin, and Dan Roth. 2020. Paired representation learning for event and entity coreference. arXiv preprint arXiv:2010.12808.

Tongtao Zhang, Hongzhi Li, Heng Ji, and Shih-Fu Chang. 2015. Cross-document event coreference resolution based on cross-media features. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP).
|
| 262 |
+
|
| 263 |
+
# A Appendix
|
| 264 |
+
|
| 265 |
+
Section A.1 describes our symbolic feature predictors. Section A.2 provides the details of the datasets we used. Section A.3 describes the hyperparameters and their value ranges that were explored. Section A.4 presents our reproducibility checklist.
|
| 266 |
+
|
| 267 |
+
<table><tr><td>Dataset</td><td>Train (# Docs)</td><td>Dev (# Docs)</td><td>Test (# Docs)</td></tr><tr><td>ACE-2005</td><td>529</td><td>30</td><td>40</td></tr><tr><td>KBP 2016</td><td>509</td><td>139</td><td>169</td></tr></table>
Table 9: Basic statistics of the datasets.
# A.1 Symbolic Feature Predictors
In an end-to-end setting, we train and use OneIE (Lin et al., 2020) to identify event mentions along with their subtypes. For other symbolic features, we train a simple joint model. More specifically, given a document, our joint model first forms contextualized representations for the input tokens using SpanBERT (Joshi et al., 2020). Each event mention's representation is then defined as the average of the embeddings of the tokens in its trigger. After that, we feed the mentions' representations into classification heads for feature value prediction. Each classification head is a standard multi-layer feedforward network with softmax output units.
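As a rough sketch of this pipeline in plain Python (an illustration, not the authors' code: the dimensions, single hidden layer, and ReLU activation are assumptions, and plain lists stand in for SpanBERT output tensors):

```python
import math

def mention_representation(token_embs, trigger_span):
    """Average the contextualized embeddings of the trigger's tokens.
    token_embs is a list of token vectors (stand-in for SpanBERT output)."""
    start, end = trigger_span  # token indices, end exclusive
    span = token_embs[start:end]
    return [sum(dim) / len(span) for dim in zip(*span)]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classification_head(mention, W1, b1, W2, b2):
    """One feedforward head per symbolic feature type (e.g. tense),
    ending in a softmax over that feature's possible values."""
    hidden = [max(0.0, sum(x * w for x, w in zip(mention, col)) + b)
              for col, b in zip(W1, b1)]  # ReLU hidden layer
    logits = [sum(h * w for h, w in zip(hidden, col)) + b
              for col, b in zip(W2, b2)]
    return softmax(logits)
```

In practice each feature type (type, polarity, modality, etc.) gets its own head, all sharing the same mention representation.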
The event detection performance of OneIE on the test set of ACE 2005 is 74.7 (Type-F1 score). OneIE's performance on KBP 2016 is 55.20 (Type-F1 score). For reference, the performance of the event detection component of $\mathrm{E}^3\mathrm{C}$ (Lu et al., 2020) on KBP 2016 is 55.38 (Type-F1 score).
# A.2 Datasets Description
In this work, we use two English within-document event coreference datasets: ACE 2005 and KBP 2016. The ACE 2005 English corpus contains fine-grained event annotations for 599 articles from a variety of sources. We use the same split as Chen et al. (2015), with 529/30/40 documents for train/dev/test. ACE adopts a strict notion of event coreference, which requires two event mentions to be coreferential if and only if they have the same agent(s), patient(s), time, and location. For KBP 2016, we follow the setup of Lu and Ng (2017a), where 648 documents are available for training and 169 documents for testing. We train our model on 509 documents randomly chosen from the training documents and tune hyperparameters on the remaining 139. Unlike ACE, KBP adopts a more relaxed definition of event coreference, under which two event mentions are coreferent as long as they intuitively refer to the same real-world event. Table 9 summarizes the basic statistics of the datasets.
# A.3 Hyperparameters
We use SpanBERT (spanbert-base-cased) as the Transformer encoder (Wolf et al., 2020a; Joshi et al., 2020). We tuned hyperparameters on the datasets' dev sets. For all experiments, we pick the model that achieves the best AVG score on the dev set and then evaluate it on the test set. Each of our models uses two different learning rates, one for the lower pretrained Transformer encoder and one for the upper layers. The optimal hyperparameter values are variant-specific, and we explored the following ranges: $\{8, 16\}$ for batch size, $\{3e{-}5, 4e{-}5, 5e{-}5\}$ for the lower learning rate, $\{1e{-}4, 2.5e{-}4, 5e{-}4\}$ for the upper learning rate, and $\{50, 100\}$ for the number of training epochs. Table 10 shows the value of $\epsilon$ we used for each symbolic feature type. In general, the larger the discrepancy between the train and test accuracies, the larger the value of $\epsilon$.
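As an illustration, the sweep over these ranges can be enumerated as a simple grid (a generic sketch, not the authors' tuning code; the key names are hypothetical):

```python
from itertools import product

# Hypothetical grid over the ranges reported above; model selection
# then keeps the configuration with the best AVG score on the dev set.
search_space = {
    "batch_size": [8, 16],
    "lower_lr": [3e-5, 4e-5, 5e-5],
    "upper_lr": [1e-4, 2.5e-4, 5e-4],
    "epochs": [50, 100],
}

def all_configs(space):
    """Yield every combination of hyperparameter values as a dict."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(all_configs(search_space))
# 2 * 3 * 3 * 2 = 36 candidate configurations per model variant
```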
<table><tr><td>Dataset</td><td>Features</td><td>ε (predicted mentions)</td><td>ε (gold mentions)</td></tr><tr><td rowspan="5">ACE 2005</td><td>Type</td><td>0.00</td><td>0.10</td></tr><tr><td>Polarity</td><td>0.00</td><td>0.02</td></tr><tr><td>Modality</td><td>0.15</td><td>0.20</td></tr><tr><td>Genericity</td><td>0.15</td><td>0.20</td></tr><tr><td>Tense</td><td>0.25</td><td>0.30</td></tr><tr><td rowspan="2">KBP 2016</td><td>Type</td><td>0.05</td><td>-</td></tr><tr><td>Realis</td><td>0.10</td><td>-</td></tr></table>
Table 10: The value of $\epsilon$ for each feature type.
# A.4 Reproducibility Checklist
This section presents reproducibility information for the paper. Due to licensing restrictions, we cannot provide download links for ACE 2005 and KBP 2016.
Implementation Dependencies PyTorch 1.6.0 (Paszke et al., 2019), Transformers 3.0.2 (Wolf et al., 2020b), NumPy 1.19.1 (Harris et al., 2020), CUDA 10.2.
Computing Infrastructure The experiments were conducted on a server with an Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz and NVIDIA Tesla V100 GPUs (16 GB of GPU memory each). The allocated RAM is 187 GB.
Average Runtime Table 11 shows the estimated average runtime of our full model.
Number of Model Parameters The number of parameters in a baseline model is about 109.7M.

<table><tr><td>Dataset</td><td>One Training Epoch</td><td>Evaluation (Dev Set)</td><td>Evaluation (Test Set)</td></tr><tr><td>ACE 2005</td><td>65.5 seconds</td><td>2.3 seconds</td><td>2.4 seconds</td></tr><tr><td>KBP 2016</td><td>103.6 seconds</td><td>10.7 seconds</td><td>11.8 seconds</td></tr></table>

Table 11: Estimated average runtime of our full model.

The number of parameters in a full model trained on the KBP 2016 dataset is about 111.4M, and in a full model trained on the ACE 2005 dataset about 113.8M.
Hyperparameters of Best-Performing Models Table 12 summarizes the hyperparameter configurations of best-performing models. Note that Table 10 already showed the hyperparameters used for the noisy training method.
<table><tr><td>Hyperparameters</td><td>ACE 2005 (end-to-end)</td><td>KBP 2016 (end-to-end)</td><td>ACE 2005 (gold mentions)</td></tr><tr><td>Symbolic Features Used</td><td>All (CDGM)</td><td>All (CDGM)</td><td>All (CDGM)</td></tr><tr><td>Noisy Training</td><td>Yes</td><td>Yes</td><td>Yes</td></tr><tr><td>Lower Learning Rate</td><td>4e-5</td><td>5e-5</td><td>5e-5</td></tr><tr><td>Upper Learning Rate</td><td>2.5e-4</td><td>5e-4</td><td>5e-4</td></tr><tr><td>Batch Size</td><td>16</td><td>8</td><td>8</td></tr><tr><td>Number Epochs</td><td>100</td><td>50</td><td>50</td></tr></table>
Table 12: Hyperparameters for best-performing models (refer to Table 10 for the hyperparameters used for the noisy training method).
Expected Validation Performance We repeat training five times for each best-performing model and report the average validation performance in Table 13. Our validation scores on KBP 2016 are not comparable to those of Lu et al. (2020) because we split the original 648 training documents into our final train and dev sets randomly; we still use the same test set. For each best-performing model, we report the test performance of the checkpoint with the best AVG score in the main paper.
<table><tr><td>Dataset</td><td>Avg. CoNLL score</td><td>Avg. AVG score</td></tr><tr><td>ACE 2005 (end-to-end)</td><td>60.20</td><td>58.80</td></tr><tr><td>KBP 2016 (end-to-end)</td><td>54.86</td><td>48.83</td></tr><tr><td>ACE 2005 (gold mentions)</td><td>81.9</td><td>83.02</td></tr></table>
Table 13: Average validation performance.
acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3cca3db50cf35d4bd7c217854c8b5dece70311ef43726e77e987b2dee2bb2950
size 401401
acontextdependentgatedmoduleforincorporatingsymbolicsemanticsintoeventcoreferenceresolution/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:58443ae097d6c48d4e2baa02ae8f3c86f37b65bc6fe63c47d0af2829c60a95b9
size 352522
actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/8c2190d2-e7f4-4542-9c71-875144996065_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:60a2ef9dd2bf36339827266f99df156270643bfd50b5b97d9f474f931e7a7609
size 117226
actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/8c2190d2-e7f4-4542-9c71-875144996065_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08c7f125950f518af06e39c9f0cb0c30af632f594e24c49f5b3692d8ec059ce6
size 139164
actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/8c2190d2-e7f4-4542-9c71-875144996065_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9f584b54e358b3ddd381e6faede0915d96dd110fe273ad7d81a06764b5086ba
size 1418755
actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/full.md
ADDED
@@ -0,0 +1,520 @@
# Action-Based Conversations Dataset: A Corpus for Building More In-Depth Task-Oriented Dialogue Systems
Derek Chen†, Howard Chen†, Yi Yang†, Alexander Lin†, Zhou Yu‡
†ASAPP, New York, NY 10007
$^{\ddagger}$ Columbia University, NY
$\dagger$ {dchen,hchen,yyang,alin}@asapp.com,zy2461@columbia.edu
# Abstract
Existing goal-oriented dialogue datasets focus mainly on identifying slots and values. However, customer support interactions in reality often involve agents following multi-step procedures derived from explicitly-defined company policies as well. To study customer service dialogue systems in more realistic settings, we introduce the Action-Based Conversations Dataset (ABCD), a fully-labeled dataset with over 10K human-to-human dialogues containing 55 distinct user intents requiring unique sequences of actions constrained by policies to achieve task success.
We propose two additional dialog tasks, Action State Tracking and Cascading Dialogue Success, and establish a series of baselines involving large-scale, pre-trained language models on this dataset. Empirical results demonstrate that while more sophisticated networks outperform simpler models, a considerable gap (50.8% absolute accuracy) still exists to reach human-level performance on ABCD.
# 1 Introduction
The broad adoption of virtual assistants and customer service chatbots in recent years has been driven in no small part by the usefulness of these tools, whereby actions are taken on behalf of the user to accomplish their desired targets (Amazon, 2019; Google, 2019). Research into task-oriented dialogue has concurrently made tremendous progress on natural language understanding of user needs (Wu et al., 2019; Rastogi et al., 2020b; Liang et al., 2020). However, selecting actions in real life requires not only obeying user requests, but also following practical policy limitations which may be at odds with those requests. For example, while a user may ask for a refund on their purchase, an agent should only honor such a request if it is valid with regard to the store's return policy. Described in terms of actions, before an agent can [Offer Refund], they must first [Validate Purchase]. Furthermore, resolving customer issues often involves multiple actions completed in succession, in a specific order, since prior steps may influence future decision states (see Figure 1).

Figure 1: An interaction from ABCD (left) starts with the customer receiving a prompt (top right) to ground the dialogue. The agent follows the guidelines (bottom right) to identify the customer intent and to assist them in resolving the issue through a series of actions.
To more closely model real customer service agents, we present the Action-Based Conversations Dataset (ABCD) consisting of 10,042 conversations containing numerous actions with precise procedural requirements. These actions differ from typical dialogue acts because tracking them necessitates striking a balance between external user requests and internally-imposed guidelines. Thus, the major difference between ABCD and other dialogue datasets, such as MultiWOZ (Budzianowski et al., 2018), is that it asks the agent to adhere to a set of policies while simultaneously dealing with customer requests.
While the prevalent data collection paradigm involves Wizard-of-Oz techniques, our setting with asymmetric speakers compelled the design of a novel Expert Live Chat system. Our dataset includes asymmetric speakers because, unlike customers, agents must undergo extensive training to be able to navigate the Agent Guidelines during real-time conversations. This makes a naive pairing process untenable, since arbitrary matching might lead to chats containing two users who share the same role.
Based on the unique aspects of ABCD, we propose two new tasks. To start, Action State Tracking (AST) closely mirrors the format of Dialogue State Tracking where the user intent is inferred from the dialogue history. AST then differs since the correct state must also be reconciled with the requirements outlined in the Agent Guidelines. As a second task, Cascading Dialogue Success (CDS) extends this notion across the entire conversation. At each turn, the agent decides to take an action, respond with an utterance or end the chat. As needed, the agent should also predict the right action or select the best utterance.
For each task, we build various models to establish baseline performance and to highlight the importance of each constraint. Experiments show that in addition to conversation history, conditioning on the Agent Guidelines further boosts performance, with top models relying on both aspects to reach $31.9\%$ accuracy. Additional results show removing action context hurts performance, implying the importance of taking into account the sequential nature of actions. Lastly, human evaluation reaches $82.7\%$ , demonstrating ample room for future improvement.
The contribution of this work is three-fold: (1) We provide a novel, large-scale dataset containing context-dependent, procedural actions along with corresponding Agent Guidelines. (2) We establish a new technique called Expert Live Chat for capturing natural dialogue between two unequal interlocutors. (3) We propose two metrics, Action State Tracking and Cascading Dialogue Success, for measuring dialogue comprehension with policy constraints. Finally, we build on pretrained neural models to serve as baselines for these tasks.
# 2 Related Work
Traditional Dialogue Datasets In recent years, dialogue datasets have grown in size from hundreds of conversations to tens of thousands (Henderson et al., 2014; Budzianowski et al., 2018; Peskov et al., 2019). Unlike open-domain chatbots often built for entertainment, task-oriented dialogue systems trained on such datasets are intended for solving user issues. The resolution of these issues implicitly requires taking actions, where an action is a non-utterance decision that depends on both user and system inputs. Despite the tremendous number of dialogues, examples in previous benchmarks fixate on the single knowledge base (KB) lookup action, where the agent searches for an item that matches the user's desires and is available in the KB. By sticking to this sole interaction, conversations can be generated through rules (Weston et al., 2016), paraphrased from templates (Byrne et al., 2019) or taken from static text scenarios (Zhang et al., 2018), leading to dialogues that are predominantly homogeneous in nature.
Many datasets have scaled to more domains as well (Eric et al., 2017; Budzianowski et al., 2018; Peskov et al., 2019). Since each new domain introduces a KB lookup requiring different slot-values, the number of unique actions grows linearly with the number of domains covered. Rather than expanding wider, ABCD instead focuses deeper by increasing the count and diversity of actions within a single domain.
Exploring Other Avenues Conversational datasets attempting to mimic reality explore multiple aspects. Rashkin et al. (2019) study the ability of a dialogue model to handle empathy, while Zhou et al. (2018) focus on commonsense reasoning. Another approach is to augment dialogues with multi-modality, including audio (Castro et al., 2019) or visual (Das et al., 2017a) components. Other researchers have explored grounding conversations in external data sources such as personas (Zhang et al., 2018), online reviews (Ghazvininejad et al., 2018) or large knowledge bases (Dinan et al., 2019). Intricate dialogues can also appear when studying collaboration (He et al., 2017; Kim et al., 2019) or negotiation (Lewis et al., 2017; He et al., 2018), which strongly encourage interaction with the other participant. In comparison, ABCD aims to make dialogue more realistic by considering distinct constraints from policies.
Dialogues with Policies Procedural actions following strict guidelines naturally emerge in dialogue research geared towards real-world applications. Hybrid Code Networks encode business logic through masking templates, since various behaviors become nonsensical in certain situations (Williams et al., 2017). Research from Moiseeva et al. (2020) studies multi-purpose virtual assistants that attempt to distinguish among thirteen explicit actions. The closest prior work to ABCD is the Schema Guided Dialogue (SGD) dataset, which contains dozens of API calls that can be interpreted as individual actions sending commands to a SQL engine (Rastogi et al., 2020b). The functionality of these actions is occasionally restricted to reflect constraints of real-life services. The action restrictions within ABCD are made explicit by the Agent Guidelines manual.

<table><tr><td>Subflows</td><td>recover-username,1 recover-password,1 reset-2fa,1 status-service-added,2 status-service-removed,2 status-shipping-question,2 status-credit-missing,2 manage-change-address,2 manage-change-name,2 manage-change-phone,2 manage-payment-method,2 status-mystery-fee,3 status-delivery-time,3 status-payment-method,3 status-quantity,3 manage-upgrade,3 manage-downgrade,3 manage-create,3 manage-cancel,3 refund-initiate,4 refund-update,4 refund-status,4 return-stain,4 return-color,4 return-size,4 bad-price-competitor,5 bad-price-yesterday,5 out-of-stock-general,5 out-of-stock-one-item,5 promo-code-valid,5 promo-code-out-of-date,5 mistimed-billing-already-returned,5 mistimed-billing-never-bought,5 status,6 manage,6 missing,6 cost,6 boots,7 shirt,7 jeans,7 jacket,7 pricing,8 membership,8 timing,8 policy,8 status-active,9 status-due-amount,9 status-due-date,9 manage-pay-bill,9 manage-extension,9 manage-dispute-bill,9 credit-card,10 shopping-cart,10 search-results,10 slow-speed10</td></tr><tr><td>Actions</td><td>verify-identity, ask-the-oracle, validate-purchase, make-password, promo-code, subscription-status, offer-refund, make-purchase, record-reason, enter-details, shipping-status, update-order, pull-up-account, update-account, send-link, notify-team, membership, search-faq, try-again, log-out-in, instructions, search-jeans, search-shirt, search-boots, search-jacket, search-pricing, search-membership, search-timing, search-policy, select-faq</td></tr></table>

Table 1: Full ontology of the Agent Guidelines, decomposable into high-level flows describing the overall category and subflows defining a unique set of intents. All actions are also shown. Superscript numerals indicate the flow each subflow belongs to. 1: account access, 2: manage account, 3: order issue, 4: product defect, 5: purchase dispute, 6: shipping issue, 7: single item query, 8: storewide query, 9: subscription inquiry, 10: troubleshoot site
# 3 Action-Based Conversations Dataset
In this section, we describe the task setting of ABCD by following along with the example dialogue shown in Figure 1.
# 3.1 Customer
During data collection, customers are given a simple prompt (such as "You want to keep your subscription another year.") instead of step-by-step instructions, which reflects how real-world customers innately understand their own issue but have only a rough idea of how to resolve it. Accordingly, customers within ABCD remain oblivious to which values apply to which actions, nor are they aware that actions exist in the first place. This ambiguity forces the agent and customer to collaboratively uncover the correct latent intent through back-and-forth communication, naturally leading to longer dialogues.
# 3.2 Customer Service Agent
Following the standard dialog setup, the agent starts by parsing the dialogue history to capture the customer intent, which in Figure 1 is a subscription extension. ABCD then diverges as the next step involves interpreting the Agent Guidelines, a document representing the internal policies of a company in the online retail domain (See Table 1). Using the guidelines, the trained agent should find the one unique subflow corresponding to the customer intent. Each subflow in turn is defined by exactly one unique sequence of actions.
While identifying a subflow may seem straightforward, information asymmetry prevents the customers from directly revealing the name of their intent. For example, a customer might inquire about the status of their recent purchase, but an agent has over a dozen different subflows related to order statuses, so selecting the right one suddenly becomes highly non-trivial.
In our case, the agent eventually figures out the correct subflow and begins to execute actions, which consist of recording values given by the customer, namely the customer's full name or account ID in order to [Pull up Account]. As the third action, the guidelines instruct the agent to ask for the customer's membership level. After the customer supplies this information, the agent enters the "guest" value into the agent dashboard by clicking the [Membership] button. Buttons have variable slots that may or may not need to be filled, depending on the context (see Table 1 for a full list). Dialogue success demands that agents execute a chain of such actions in the right order with the right values, while simultaneously engaging the customer in natural language conversation.
Three factors make carrying out a series of actions more difficult than the task lets on. To start, the permitted actions in a given state are determined not only by the Agent Guidelines, but also by the user's desire, and the two may conflict. For example, the customer in Figure 1 wanted to extend their subscription, but the guidelines prevented the agent from doing so. Secondly, actions must be completed in order. This procedural requirement comes from the realization that completing actions out of order (or with missing steps) does not make sense in many real-world scenarios. For example, it is critical to [Verify Identity] before resetting someone's password, not after. Finally, actions themselves induce stochastic outcomes, preventing agents from memorizing patterns of subflow resolution. For example, [Ask the Oracle] often determines whether a customer complaint was valid. In the case of a company error, the agent is compelled to immediately resolve the issue, whereas a misunderstanding made by the customer warrants a different set of responses.
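The ordering constraint can be illustrated with a toy helper. This is a hypothetical sketch: the action names come from Table 1, but the sequence shown is illustrative, and the real guidelines additionally condition on customer input and on stochastic outcomes such as [Ask the Oracle], which this ignores.

```python
def next_permitted_actions(required_sequence, completed):
    """Return the action(s) an agent may take next for a subflow,
    given the guidelines' required action sequence and the steps
    already completed (hypothetical simplification of the guidelines)."""
    if completed != required_sequence[:len(completed)]:
        return []  # out-of-order history: no valid continuation
    if len(completed) == len(required_sequence):
        return []  # subflow already resolved
    return [required_sequence[len(completed)]]
```

Under this view, [Verify Identity] must precede the password reset because it appears earlier in the required sequence, so it is the only permitted next step until it is done.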
# 4 Data Collection Methodology
This section outlines how we collect and annotate our dataset with context-dependent actions.
# 4.1 Agent Training
Managing complex guidelines requires filtering for top agents, which we do by certifying Mechanical Turk (MTurk) workers through an extensive 20-question quiz touching on all aspects of task completion. Keeping the bar high, we set a minimum threshold of $80\%$ accuracy on the quiz, which resulted in a low $20\%$ pass rate. After agents passed the exam, we offered them the answer key, which further improved understanding. We also created short, 10-minute tutorial videos to showcase how to handle the most difficult aspects of the task. A group chat app was also deployed to offer live feedback for agents, simulating how supervisors coach customer service representatives in real life. Finally, we carefully designed an incentive structure that rewards agents for correctly identifying the user intent, to encourage clarification behavior. (Appendix A covers more details.)
# 4.2 Expert Live Chat
Rather than utilizing Wizard-of-Oz techniques (such as in MultiWOZ), we developed Expert Live Chat which contains three unique aspects:
(1) Conversations are conducted continuously in real-time. (2) Users involved are not interchangeable. (3) Players are informed that all participants are human – no wizard behind the scenes.
# 4.2.1 Synchronous Two-person Dialogue
Normal human conversations occur in real-time, but coordinating multiple users in this manner is resource-intensive, so other datasets often employed workarounds to avoid this difficulty. For example, other works have applied rules (Bordes et al., 2017), templates (Byrne et al., 2019) or paraphrasing (Shah et al., 2018) to produce conversations. Wizard-of-Oz (WoZ) techniques incorporate humans into the mix by allowing one of them to play the system role as a wizard behind the scenes (Kelley, 1984). In particular, Budzianowski et al. (2018) decomposed dialogues into individual turns, where for each turn a new author is responsible for reading the context and generating the next plausible response. Despite the time-consuming nature, some datasets have produced synchronous dialogues between two humans (Lewis et al., 2017). However, the skill sets of ABCD workers are notably unequal, exacerbating the matching problem.
# 4.2.2 Pairing Users of Unequal Capability
Expert Live Chat matches a highly trained agent with a knowledgeable, yet otherwise average customer in real-time. Since the backgrounds are uneven, unlike in other datasets with concurrent users (Lewis et al., 2017; Zhang et al., 2018; Das et al., 2017b), incoming Turkers cannot simply be randomly assigned a role. In other words, having twenty participants does not necessarily equate to ten conversations, since it is possible that only a quarter of them are qualified as agents. When such an imbalance inevitably arises, one group must wait until someone from the other side becomes available. However, leaving either side waiting for too long leads to serious consequences, since idle time directly affects their pay rate.
To minimize the likelihood of such an outcome, we first ensure that a reasonable pool of agents is always available. Then, we increase the number of active customers by methodically inviting a subset of customers one batch at a time. To do so, we established a qualification exam for customers to ensure their availability during a specified time period. Finally, we also redesigned the chat application to make the waiting-room experience more palatable. (See Appendix B for a full breakdown.) With these changes, we successfully increased the pairing rate from 18 out of 80 active users up to 72 out of 83, an increase of nearly $400\%$, while maintaining wait times under 10 minutes.

Figure 2: The Agent Dashboard is split into three sections. KB Query actions always have system output, while actions in the Interaction Zone require user input. The FAQ/Policy section is associated with describing company policies and technical troubleshooting.
# 4.2.3 Interaction Framework
Besides pairing, we increased the likelihood of collecting rich dialogues without the need for extensive instructions by optimizing the chat experience itself. In particular, we observed the greatest gains by grounding the conversation to the relatable scenario of online shopping, which provided immediate context to participants without requiring any extra training.
For example, the Agent Dashboard was arranged to closely reflect actual agent workspaces (Figure 2). On the customer side, scenarios in the Customer Panel included an image of the product being discussed, along with other meta-data such as the brand or price, to match a true shopping experience as closely as possible (Appendix H). We also explicitly told customers the other speaker was human to encourage natural responses over confined commands meant for machines. Most importantly, customers were given dynamically generated, natural-language prompts that did not include information about the values needed to resolve their issue. As a general framework, Expert Live Chat can be applied to any real-world scenario involving an expert and a novice. Indeed, increasing the verisimilitude of the experience is precisely what allowed workers to generate higher quality dialogues.
# 4.3 Annotation of Actions and Values
The flows and subflows are automatically annotated since we have the provenance of each intent when generating the customer prompt. Additionally, given the ground truth subflow of each conversation, we can deterministically map it to the correct section within the Agent Guidelines outlining the correct actions. Calculating accuracy then becomes a simple exercise of aligning the predicted actions with those required by the manual. In this way, we capture a key benefit of machine-generated text (Shah et al., 2018) without sacrificing the benefit of engaging real users.
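The alignment idea above can be sketched in a few lines. The guideline map and all action names below are invented for illustration; they do not reflect the actual ABCD schema.

```python
# Hypothetical sketch: given the ground-truth subflow, look up the actions
# required by the Agent Guidelines and score predicted actions against them.
GUIDELINES = {
    # subflow -> ordered list of required (button, slot) actions
    "reset-password": [("verify-identity", "account_id"),
                       ("reset-password", "username")],
    "promo-code": [("verify-identity", "account_id"),
                   ("offer-promo", "percentage")],
}

def action_accuracy(subflow: str, predicted: list) -> float:
    """Fraction of required actions matched, position by position."""
    required = GUIDELINES[subflow]
    correct = sum(1 for req, pred in zip(required, predicted) if req == pred)
    return correct / len(required)

acc = action_accuracy("reset-password",
                      [("verify-identity", "account_id"),
                       ("pull-up-account", "name")])
print(acc)  # 0.5: the first required action matched, the second did not
```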
# 5 Dataset Statistics and Analysis
We validate all dialogues to pass quality thresholds, such as including a minimum number of actions and avoiding copy/paste behavior. After filtering, we end up with 10,042 total conversations with an average of 22.1 turns, the highest turn count among all compared datasets. Unsurprisingly, ABCD includes more actions per dialogue than other datasets, by at least a factor of two. ABCD contains a lower absolute number of tokens, but the highest variance in the number of tokens per turn (see Table 2).
Since each subflow represents a unique customer intent, ABCD contains 55 user intents evenly distributed throughout the dataset. By interpreting buttons as domains, the dataset contains 30 domains and 231 associated slots, compared to 7 domains and 24 slots within MultiWOZ (Budzianowski et al., 2018).
By grounding to the relatable scenario of chatting with customer support of an online retail company, speakers often showcase various forms of natural dialogue, such as offering diverse reasons for shopping or asking detailed follow-up questions. Furthermore, the unconstrained nature of Expert Live Chat allows users to chat with each other in a free-form style. Dialogues exhibited normal texting behavior, such as users speaking for many turns in a row or fixing typos with a star in the subsequent line. Other examples of linguistic phenomena can be observed in Table 5.
<table><tr><td>Metric</td><td>DSTC2</td><td>M2M</td><td>KVRET</td><td>MultiWOZ</td><td>SGD</td><td>MultiDoGO</td><td>ABCD</td></tr><tr><td>Num of Dialogues</td><td>1,612</td><td>1,500</td><td>2,425</td><td>8,438</td><td>16,142</td><td>40,576</td><td>8,034</td></tr><tr><td>Num of Turns</td><td>23,354</td><td>14,796</td><td>12,732</td><td>113,556</td><td>329,964</td><td>813,834</td><td>177,407</td></tr><tr><td>Num of Tokens</td><td>199,431</td><td>121,977</td><td>102,077</td><td>1,490,615</td><td>3,217,369</td><td>9,901,235</td><td>1,626,160</td></tr><tr><td>Avg. Turns / Dialogue</td><td>14.49</td><td>9.86</td><td>5.25</td><td>13.46</td><td>20.44</td><td>20.06</td><td>22.08</td></tr><tr><td>Avg. Tokens / Turn</td><td>8.54</td><td>8.24</td><td>8.02</td><td>13.13</td><td>9.75</td><td>12.16</td><td>9.17</td></tr><tr><td>Std Dev. Tokens / Turn</td><td>2.95</td><td>5.99</td><td>6.07</td><td>6.19</td><td>6.48</td><td>-*</td><td>6.80</td></tr><tr><td>Avg. Actions / Dialogue</td><td>1.0</td><td>1.0</td><td>1.0</td><td>1.81</td><td>1.24</td><td>-*</td><td>3.73</td></tr><tr><td>No. Unique Tokens</td><td>986</td><td>1,008</td><td>2,842</td><td>23,689</td><td>30,352</td><td>70,003</td><td>23,686</td></tr><tr><td>No. Unique Slots</td><td>8</td><td>14</td><td>13</td><td>24</td><td>214</td><td>73</td><td>231</td></tr><tr><td>No. Slot Values</td><td>212</td><td>138</td><td>1,363</td><td>4,510</td><td>14,139</td><td>55,816</td><td>12,047</td></tr><tr><td>No. Domains</td><td>1</td><td>2</td><td>3</td><td>7</td><td>16</td><td>6</td><td>30</td></tr></table>
Table 2: Comparison of ABCD to similar dialogue datasets. Numbers reported are for the train split of each dataset, with bold values indicating the top score for each metric. *MultiDoGO is not public, so these statistics could not be calculated.
# 6 ABCD as a Dialogue Benchmark
The novel features in ABCD bring two new dialog tasks, Action State Tracking and Cascading Dialogue Success. We also build baseline systems that are variants of standard dialogue models and report their results on ABCD.
# 6.1 Action State Tracking
Action State Tracking (AST) aims at detecting the pertinent intent by interpreting customer utterances while taking into account constraints from the Agent Guidelines, an aspect not considered in traditional dialog state tracking (DST). For example, a conceivable dialogue task might entail helping a customer [Reset Password] once this intent has been identified. In contrast, the appropriate next step within AST is governed by the Agent Guidelines, which might require [Verify Identity] of the customer first, or any number of other actions, before executing the password reset.
Each series of actions is considered a unique subflow that belongs to a number of high-level conversational flows. Each individual action includes the active button $b$ to click and its corresponding slots $s$ and values $v$ . The task consists of executing an action, which constitutes a single agent turn. More specifically, given a context $C_t = [x_1, x_2, \ldots, x_t]$ where $x_t$ can be a customer utterance $x_t^c$ , an agent utterance $x_t^a$ , or a prior action $x_t^b$ , a model should predict the button of the current action as well as the relevant slots and values, if any exist $\{x_{t+1}^b = (b, s, v) \in \mathcal{B} \times \mathcal{S} \times \mathcal{V}\}$ .
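The task structure above can be made concrete with a minimal data sketch. The turn types and the sample action names are illustrative stand-ins, not the actual ABCD ontology.

```python
# Minimal illustration of the AST input/output structure: the context C_t is
# a sequence of typed turns, and the target is the next action as a
# (button, slot, value) triple. All names here are invented examples.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "customer", "agent", or "action"
    text: str

context = [
    Turn("customer", "hi, i need to reset my password"),
    Turn("agent", "sure, can i get your account id first?"),
    Turn("customer", "it's A12345"),
]

# The model's job: map the context C_t to the next action x_{t+1}^b.
target_action = ("verify-identity", "account_id", "A12345")
print(target_action[0])  # the button to click
```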
This structure is designed to mimic DST, where each user intent is broken down into domains, slots, and values $(d,s,v)$. For both AST and DST, the higher-level domain or button can have varying slots. The reverse is also true: a given slot can be associated with multiple domains or buttons. Lastly, both contain values that can be enumerable (i.e., payment types or shipping statuses) or non-enumerable (phone numbers or email addresses). Following the pattern set by Rastogi et al. (2020b), enumerable values are given in the ontology to be accessible by a model, whereas the non-enumerable items are not.
Despite the similar structure, AST deviates from DST since predicting the right action requires not only parsing the customer utterance, but also adhering to Agent Guidelines. Suppose a customer is entitled to a discount which will be offered by issuing a [Promo Code]. The customer might request $30\%$ off, but the guidelines stipulate only $15\%$ is permitted, which would make "30" a reasonable, but ultimately flawed slot-value. To measure a model's ability to comprehend such nuanced situations, we adopt overall accuracy as the evaluation metric for AST.
# 6.2 Cascading Dialogue Success
Since the appropriate action often depends on the situation, we propose the Cascading Dialogue Success (CDS) task to measure a model's ability to understand actions in context. Whereas AST assumes an action occurs in the current turn, CDS gives an agent the additional options of responding with an utterance or ending the conversation. Moreover, proficiency is no longer measured as success over isolated turns but rather as success over sequences of consecutive turns.
Formally, given $C_t = [x_1, x_2, \ldots, x_t]$ as a context composed of utterances $x^c, x^a \in \mathcal{U}$ and actions $x^b \in \mathcal{A}$, a model should predict all remaining steps $x_{>t}$ along with their realized forms. Possible next steps are to take an action, respond with text, or end the task. When the next step is an action $x_{t+1}^b$, the model should predict the button with its slots and values as in AST. If the agent speaks in the next step $x_{t+1}^a$, the model should rank the true utterance highest, as measured by recall metrics. Finally, the model should recognize when to end the conversation.
Rewarding the model only when it predicts every step correctly is counter-productive because minor variations in sentence order do not alter overall customer satisfaction. Therefore, CDS is scored using a variation on Cascading Evaluation (Suhr et al., 2019). Rather than receiving a single score for each conversation, cascaded evaluation allows the model to receive "partial credit" whenever it successfully predicts each successive step in the chat. This score is calculated on every turn, and the model is evaluated based on the percent of remaining steps correctly predicted, averaged across all available turns. (See Appendix C for more details.)
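The scoring scheme above can be sketched as follows. This is a simplified reading of cascading evaluation (Suhr et al., 2019), not the authors' exact implementation: from each turn, count the consecutive correct predictions before the first mistake as a fraction of the remaining steps, then average over all turns.

```python
# Simplified sketch of cascading evaluation over a sequence of steps.
def cascading_score(predicted: list, gold: list) -> float:
    assert len(predicted) == len(gold)
    n = len(gold)
    per_turn = []
    for start in range(n):
        remaining = n - start
        correct = 0
        for p, g in zip(predicted[start:], gold[start:]):
            if p != g:
                break          # stop at the first mistake
            correct += 1
        per_turn.append(correct / remaining)
    return sum(per_turn) / n   # average partial credit across turns

# One wrong step out of four still earns partial credit on each prefix.
print(round(cascading_score(["a", "b", "x", "d"], ["a", "b", "c", "d"]), 3))  # 0.458
```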
# 6.3 Baseline Models
We also run several baselines on these new tasks. The backbone of all our baseline systems is a pre-trained Transformer-based model acting as a context encoder. More specifically, given the dialogue history as a series of utterances, we first join the utterances together with a [SEP] token and then tokenize the entire input using WordPiece (Schuster and Nakajima, 2012). Next, we feed the entire input into a BERT model and perform a learned pooling on the hidden states in the final layer, which results in a fixed-length latent vector $h_{enc} \in \mathbb{R}^{128}$ (Wolf et al., 2019). Afterwards, we attach a variety of prediction heads conditioned on the $h_{enc}$ vector to generate the final output. Details of the prediction heads for the two proposed tasks are described next.
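The learned pooling step can be illustrated numerically. The sketch below stands in for the BERT encoder with random matrices; only the shapes follow the text ($h_{enc} \in \mathbb{R}^{128}$), and all weights are placeholders rather than trained parameters.

```python
# Sketch of "learned pooling": collapse final-layer hidden states
# (seq_len x hidden) into a fixed-length encoding via learned weights.
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden, d_enc = 12, 768, 128

H = rng.normal(size=(seq_len, hidden))          # final-layer hidden states
w_pool = rng.normal(size=(seq_len,))            # learned pooling weights
W_proj = rng.normal(size=(hidden, d_enc)) / np.sqrt(hidden)

pooled = w_pool @ H                             # weighted sum over tokens
h_enc = pooled @ W_proj                         # project down to R^128
print(h_enc.shape)  # (128,)
```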
We break down Action State Tracking (AST) into two sub-problems: button-slot prediction and value-filling. Given the ontology, button-slot prediction is a straightforward classification task over 231 known options, so the prediction head is just a linear classifier with a softmax activation for normalization: $P_{b\cdot slot} = \mathrm{Softmax}(W_a h_{enc}^\top + b_a)$.
To handle value-filling, we further decompose the task into predicting enumerable and non-enumerable values. The ontology lists out all $|E|$ enumerable values, so the prediction head $p^{enum}$ simply maps the hidden state $h_{enc}$ into the appropriate dimensions. To handle non-enumerable values, we follow the insight from Ma et al. (2019), who note that practically all such values are stated by the customer in conversation, so a model can copy these values from the tokenized context. During pre-processing, we extract up to $|N|$ unique tokens from the natural-language customer utterances, where $p^{copy}$ then represents the distribution over these possible options.
We imitate the TRADE architecture from Wu et al. (2019): conditioned on the action, the model chooses either to copy from the context ($p^{copy}$) or to select from the enumerable entities ($p^{enum}$) based on a gating mechanism. The gate is conditioned on the hidden state $h_{enc}$ as well as a learned context vector $c_i$. Concretely,
$$
p^{enum} = \operatorname{Softmax}\left(W_e h_{enc}^{\top} + b_e\right) \in \mathbb{R}^{|E|}
$$

$$
p^{copy} = \operatorname{Softmax}\left(W_c h_{enc}^{\top} + b_c\right) \in \mathbb{R}^{|N|}
$$

$$
c_i = W_c^{\top} \cdot p^{copy} \in \mathbb{R}^{hid}
$$

$$
p^{gate} = \sigma\left(W_g \cdot [h_{enc}; c_i]\right) \in \mathbb{R}^{1}
$$

$$
P_{val} = [\, p^{gate} \times p^{copy};\; (1 - p^{gate}) \times p^{enum} \,] \in \mathbb{R}^{|E| + |N|}
$$
where $\sigma$ represents the sigmoid function and $[\cdot ;\cdot ]$ is the concatenation operation. The final value prediction is the argmax of $P_{val}$, which merges the probabilities $p^{enum}$ and $p^{copy}$ into a single distribution.
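The gating equations can be verified numerically with toy dimensions. All weights below are random placeholders; the point of the sketch is the shapes and the fact that the concatenated distribution $P_{val}$ over $|E| + |N|$ values still sums to one.

```python
# Numeric sketch of the TRADE-style gate described above.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
hid, E, N = 16, 5, 7                      # hidden size, enumerable, copyable

h_enc = rng.normal(size=(hid,))
W_e, b_e = rng.normal(size=(E, hid)), np.zeros(E)
W_c, b_c = rng.normal(size=(N, hid)), np.zeros(N)
W_g = rng.normal(size=(2 * hid,))

p_enum = softmax(W_e @ h_enc + b_e)       # over enumerable values
p_copy = softmax(W_c @ h_enc + b_c)       # over tokens copied from context
c_i = W_c.T @ p_copy                      # learned context vector, (hid,)
p_gate = 1 / (1 + np.exp(-(W_g @ np.concatenate([h_enc, c_i]))))
P_val = np.concatenate([p_gate * p_copy, (1 - p_gate) * p_enum])

print(P_val.shape, round(P_val.sum(), 6))  # (12,) 1.0
```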
For Cascading Dialogue Success (CDS), we also tackle next step selection, utterance ranking, and intent classification. Next step selection is a choice between retrieve utterance, take action and end conversation. Intent classification consists of choosing from the 55 available subflows. Given this basic setting, both tasks use the same setup of a linear layer followed by a softmax, albeit with their own respective weights $W_{NS} \in \mathbb{R}^{3 \times hid}$ and $W_{IC} \in \mathbb{R}^{55 \times hid}$ . When the next step is to take action, the AST model is reused to determine the button-slot and value. When end conversation is selected, all future predictions are ignored, much like an <EOS> symbol signifies stopping.
This leaves us with utterance ranking, which is only evaluated when retrieve utterance is chosen as the next step. Our ranker reproduces the design from Guu et al. (2020), where the encoded context $h_{ctx}$ is compared against each encoded candidate response $h_{cand}$ to produce a ranking score. To embed the $j^{th}$ candidate $d_j$, we first create its input $d_j^{input}$. Following standard practice, we prepend the candidate text $d_j$ with [CLS], separate the individual utterances $u_i$ within the candidate response using a [SEP] token, and append a final [SEP] token (Devlin et al., 2019). This input $d_j^{input}$ is then fed into a static pretrained BERT model to get an initial hidden state, which is finally projected using a learned weight $W_{d_j} \in \mathbb{R}^{128 \times hid}$ to produce $h_{cand}$. To obtain $h_{ctx}$, we start with the hidden state $h_{enc}$ from before and apply a projection matrix $W_{UR} \in \mathbb{R}^{128 \times hid}$ to reach the desired dimensionality.
$$
d_j^{input} = [\mathrm{CLS}]\, u_1\, [\mathrm{SEP}]\, u_2\, [\mathrm{SEP}] \dots [\mathrm{SEP}]\, u_n\, [\mathrm{SEP}]
$$

$$
h_{cand} = W_{d_j} \mathrm{BERT}_{\mathrm{base}}\left(d_j^{input}\right)^{\top} \in \mathbb{R}^{128}
$$

$$
h_{ctx} = W_{UR}\, h_{enc}^{\top} \in \mathbb{R}^{128}
$$

$$
f(x_i, d_j) = h_{ctx}^{\top} h_{cand}
$$

$$
P_j^{rank} = \frac{\exp\left(f(x_i, d_j)\right)}{\sum_{d_j'} \exp\left(f(x_i, d_j')\right)}
$$
The final rank is given by normalizing each $j^{th}$ score against all other candidate scores. We use the training objective from Henderson et al. (2019) to calculate the loss:
$$
\mathcal{J} = \sum_{i=1}^{M} f(x_i, d_i) - \sum_{i=1}^{M} \log \sum_{j=1}^{M} \exp f(x_i, d_j)
$$
where $M = 100$ is the size of the total candidate set.
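The objective can be sketched as an in-batch softmax loss: each context is scored against all $M$ candidates via a dot product, and the matched score is maximized relative to the log-sum-exp over the batch. Toy dimensions and random vectors stand in for the BERT-derived embeddings.

```python
# Sketch of the ranking objective over a batch of (context, candidate) pairs.
import numpy as np

rng = np.random.default_rng(2)
M, d = 8, 128                              # batch size, embedding dimension

h_ctx = rng.normal(size=(M, d))            # stand-ins for encoded contexts
h_cand = rng.normal(size=(M, d))           # stand-ins for encoded candidates

scores = h_ctx @ h_cand.T                  # scores[i, j] = f(x_i, d_j)
matched = np.trace(scores)                 # sum_i f(x_i, d_i)
lse = np.log(np.exp(scores).sum(axis=1)).sum()
loss = -(matched - lse)                    # negate the objective J to minimize

print(loss > 0)                            # True: lse always exceeds matched
```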
# 6.4 Experiments
We performed experiments on the two newly proposed tasks, AST and CDS. AST consists of two subtasks, button-slot prediction and value-filling, while CDS builds on this with three additional subtasks of next step selection, utterance ranking, and intent classification. For both tasks, we experimented with two types of frameworks, a pipeline version and an end-to-end version. The pipeline version trains each subtask separately while the end-to-end optimizes all tasks jointly (Liang et al., 2020; Rastogi et al., 2020a; Ham et al., 2020).
The pipeline model uses a BERT encoder trained with the RAdam optimizer (Liu et al., 2020). To test the performance of different pretrained models under the end-to-end framework, we experiment with three additional encoders: AlBERT (Lan et al., 2020), RoBERTa (Liu et al., 2019), and RoBERTa-Large. AlBERT adds an inter-sentence coherence pretraining task and has a lighter memory footprint than BERT, while RoBERTa was pretrained with substantially more data and hyper-parameter tuning than BERT. In the future, we also plan to include GPT-based models, such as DialoGPT (Zhang et al., 2020), in our comparison.

<table><tr><td>Metric</td><td>Pipeline</td><td>BERT</td><td>Albert</td><td>RoBERTa</td></tr><tr><td>B-Slot</td><td>86.7%</td><td>89.9%</td><td>90.9%</td><td>93.6%</td></tr><tr><td>Value</td><td>42.1%</td><td>61.6%</td><td>61.0%</td><td>67.2%</td></tr><tr><td>Action</td><td>32.3%</td><td>59.5%</td><td>59.2%</td><td>65.8%</td></tr></table>

Table 3: Metrics for Action State Tracking. Pipeline values come from models trained on individual subtasks; all other models are trained jointly end-to-end.
# 6.5 Results
For both tasks, moving from the pipeline architecture to a jointly trained method displayed noticeable improvements in accuracy. As hinted at in prior work (Liang et al., 2020), we suspect joint training gives each subtask extra supervision from the other subtasks, enabling more data-efficient training. On the AST task, we found steady improvements as we moved from older to newer models, with vanilla BERT at $59.5\%$ accuracy and RoBERTa doing best at $65.8\%$. For the CDS task, we found a similar trend where RoBERTa-Large outperforms BERT, but only by a mere $0.6\%$. We hypothesize this small gap between models is due to the fact that none were specifically trained on dialogue data, which impacts their ability to produce a useful encoding (Wu and Xiong, 2020).
Separately, we evaluate CDS subtask difficulty by asking human volunteers to select the correct label from a list of possible options. For example, workers would be presented with 55 different classes for intent classification and asked to choose the right one. Since humans typically struggle when choosing from large collections of items, fine-tuned models performed roughly on par with or better than humans in this unnatural setting. On the other hand, human evaluation for the overall CDS task was judged by measuring the success rate in standard conversational scenarios where behavioral instincts are activated, so humans were able to excel in this environment.
<table><tr><td>Model</td><td>Intent</td><td>Nextstep</td><td>B-Slot</td><td>Value</td><td>Recall@1/5/10</td><td>Cascading Eval</td></tr><tr><td>Human</td><td>85.5%</td><td>84.0%</td><td>79.0%</td><td>77.5%</td><td>N/A</td><td>82.7%</td></tr><tr><td>Pipeline</td><td>90.4%</td><td>83.8%</td><td>86.7%</td><td>42.1%</td><td>26.2/51.7/63.1</td><td>18.2%</td></tr><tr><td>BERT-base</td><td>89.3%</td><td>87.6%</td><td>85.9%</td><td>73.1%</td><td>21.7/46.6/58.7</td><td>31.3%</td></tr><tr><td>AlBERT</td><td>88.5%</td><td>87.2%</td><td>86.1%</td><td>70.4%</td><td>22.1/47.4/58.9</td><td>31.2%</td></tr><tr><td>RoBERTa</td><td>89.7%</td><td>87.8%</td><td>87.6%</td><td>73.1%</td><td>21.6/46.7/58.6</td><td>31.5%</td></tr><tr><td>RoBERTa-Large</td><td>90.5%</td><td>87.5%</td><td>88.5%</td><td>73.3%</td><td>22.0/47.8/59.1</td><td>31.9%</td></tr><tr><td>BERT-base w/o Action Info</td><td>88.4%</td><td>76.8%</td><td>83.7%</td><td>63.4%</td><td>18.6/43.0/57.9</td><td>29.2%</td></tr><tr><td>BERT-base w/ Guidelines</td><td>83.2%</td><td>87.5%</td><td>85.6%</td><td>72.4%</td><td>21.8/46.9/58.5</td><td>30.6%</td></tr><tr><td>BERT-base w/ Intent Info</td><td>100%</td><td>88.6%</td><td>88.9%</td><td>73.8%</td><td>22.2/47.6/59.1</td><td>32.3%</td></tr><tr><td>BERT-base w/ Intent + Guide</td><td>100%</td><td>89.2%</td><td>89.3%</td><td>74.0%</td><td>22.6/48.1/59.4</td><td>32.7%</td></tr></table>
Table 4: Cascading dialogue success task performance with a breakdown of all five subtasks. Numbers displayed are the average of three seeds. Human evaluation was conducted with 100 samples per person.
# 6.6 Ablation Study
We perform an ablation study to test the significance of the key features in ABCD. Recall that actions are characterized by their dual nature of requiring signals from both the customer and the company guidelines. To measure the impact of the customer side, we provided the ground truth intent. Conversely, we tested the company side by masking out invalid buttons, based on the insight that the Agent Guidelines are useful for narrowing down the range of possible actions. In both situations, we would expect such oracle guidance to boost performance. Lastly, note that the appropriate action depends on the outcomes of prior actions, so for a final experiment we removed prior actions and their explanations from the context to test their impact on task success. (See Appendix E for details.)
We observe that supplying the intent information to the BERT model causes a noticeable boost in dialog success, bringing the score to $32.3\%$. However, augmenting the model with knowledge of the guidelines unexpectedly dropped performance to $30.6\%$. Further analysis revealed that the imperfect intent classifier would occasionally mask out valid buttons, leaving only incorrect ones to choose from. As a result, the downstream action predictor would be prevented from doing its job, causing errors to accumulate. To test this hypothesis, we ran another model (Intent+Guide) which had access to the guidelines along with an oracle intent classifier. This model reached the peak observed performance of $32.7\%$, highlighting the importance of both components. As a final result, removing action information from action-based conversations unsurprisingly causes a major performance drop (Table 4).
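The masking ablation can be sketched concretely: the (predicted or oracle) intent selects the set of guideline-permitted buttons, and logits for all other buttons are suppressed before the argmax. The button names and the intent-to-button mapping below are invented for illustration.

```python
# Hypothetical sketch of the "mask invalid buttons" ablation.
import numpy as np

BUTTONS = ["pull-up-account", "verify-identity", "promo-code", "reset-password"]
VALID = {"get-discount": {"verify-identity", "promo-code"}}

def masked_prediction(logits, intent):
    # Suppress buttons the guidelines do not permit for this intent.
    masked = np.where([b in VALID[intent] for b in BUTTONS], logits, -np.inf)
    return BUTTONS[int(np.argmax(masked))]

logits = np.array([3.0, 1.0, 2.0, 0.5])   # raw model scores per button
print(masked_prediction(logits, "get-discount"))  # promo-code
```

Note that without the mask, the raw argmax would pick "pull-up-account"; the guideline mask overrides the model's top score, which is exactly why an imperfect intent prediction can strand the action predictor with only incorrect options.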
# 7 Conclusion and Future Work
In conclusion, we have presented ABCD, which includes over 10K dialogues incorporating procedural, dual-constrained actions. Additionally, we established a scalable method for collecting live human conversations with unequal partners. We found that pre-trained models perform decently on Action State Tracking, but there is a large gap between human agents and the top systems for Cascading Dialogue Success.
We plan to incorporate GPT-related models (Hosseini-Asl et al., 2020), as alternate forms of pretraining have shown promise in other NLP tasks. Other techniques could also be used to incorporate speaker information, action semantics, and other meta-data. Wholly new systems that attend to the Agent Guidelines in a fully differentiable manner are also worth exploring. By grounding dialogues to in-depth scenarios with explicit policies, we hope to have pushed towards a better understanding of dialogue success.
# Acknowledgments
The authors would like to thank Tao Lei, Felix Wu and Anmol Kabra for their feedback and support. We would also like to thank the anonymous NAACL 2021 reviewers for pointing out specific areas of confusion in our submission, which we have tried our best to clarify.
# Ethical Considerations
This paper presents a new dataset which was collected through the use of crowdworkers. All agent workers were compensated a fair wage based on their local standard of living, where their location was determined during the vetting process. (Please refer to Appendix A for more details.)
# References
Amazon. 2019. Alexa Skills Kit.

Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In ICLR. OpenReview.net.

Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz - a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 5016-5026. Association for Computational Linguistics.

Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Ben Goodrich, Daniel Duckworth, Semih Yavuz, Amit Dubey, Kyu-Young Kim, and Andy Cedilnik. 2019. Taskmaster-1: Toward a realistic and diverse dialog dataset. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 4515-4524. Association for Computational Linguistics.

Santiago Castro, Devamanyu Hazarika, Verónica Pérez-Rosas, Roger Zimmermann, Rada Mihalcea, and Soujanya Poria. 2019. Towards multimodal sarcasm detection (an _obviously_ perfect paper). In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 4619-4629. Association for Computational Linguistics.

Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, and Dhruv Batra. 2017a. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326-335.

Abhishek Das, Satwik Kottur, José MF Moura, Stefan Lee, and Dhruv Batra. 2017b. Learning cooperative visual dialog agents with deep reinforcement learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2951-2960.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR. OpenReview.net.

Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In SIGDIAL Conference, pages 37-49. Association for Computational Linguistics.

Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence.

Google. 2019. Actions on Google Assistant.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrieval-augmented language model pre-training. CoRR, abs/2002.08909.

Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue system using gpt-2. In Proc. of the 34th AAAI Conference on Artificial Intelligence. ACL.

He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1766-1776. Association for Computational Linguistics.

He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2333-2343. Association for Computational Linguistics.

Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263-272.

Matthew Henderson, Ivan Vulic, Daniela Gerz, Inigo Casanueva, Pawel Budzianowski, Sam Coope, Georgios Spithourakis, Tsung-Hsien Wen, Nikola Mrksic, and Pei-Hao Su. 2019. Training neural response selection for task-oriented dialogue systems. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 5392-5404. Association for Computational Linguistics.

Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. arXiv preprint arXiv:2005.00796.

John F Kelley. 1984. An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems (TOIS), 2(1):26-41.

Jin-Hwa Kim, Nikita Kitaev, Xinlei Chen, Marcus Rohrbach, Byoung-Tak Zhang, Yuandong Tian, Dhruv Batra, and Devi Parikh. 2019. Codraw: Collaborative drawing as a testbed for grounded goal-driven communication. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6495-6513.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In ICLR. OpenReview.net.

Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2443-2453. Association for Computational Linguistics.

Weixin Liang, Youzhi Tian, Chengcai Chen, and Zhou Yu. 2020. Moss: End-to-end dialog system framework with modular supervision. In AAAI, pages 8327-8335. AAAI Press.

Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2122-2132. The Association for Computational Linguistics.

Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the variance of the adaptive learning rate and beyond. In Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020).

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. CoRR, abs/1907.11692.

Yue Ma, Zengfeng Zeng, Dawei Zhu, Xuan Li, Yiying Yang, Xiaoyuan Yao, Kaijie Zhou, and Jianping Shen. 2019. An end-to-end dialogue state tracking system with machine reading comprehension and wide & deep classification. CoRR, abs/1912.09297.

Alena Moiseeva, Dietrich Trautmann, and Hinrich Schütze. 2020. Multipurpose intelligent process automation via conversational assistant. CoRR, abs/2001.02284.

Denis Peskov, Nancy Clarke, Jason Krone, Brigi Fodor, Yi Zhang, Adel Youssef, and Mona Diab. 2019. Multi-domain goal-oriented dialogues (multidogo): Strategies toward curating and annotating large scale dialogue data. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4518-4528.

Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pages 5370-5381. Association for Computational Linguistics.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020a. Schema-guided dialogue state tracking task at dstc8. arXiv preprint arXiv:2002.01359.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020b. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In AAAI, pages 8689-8696. AAAI Press.

Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2012, Kyoto, Japan, March 25-30, 2012, pages 5149-5152. IEEE.
|
| 304 |
+
Pararth Shah, Dilek Hakkani-Tür, Gökhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, and Larry P. Heck. 2018. Building a conversational agent overnight with dialogue self-play. CoRR, abs/1801.04871.
|
| 305 |
+
Alane Suhr, Claudia Yan, Jacob Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019. Executing instructions in situated collaborative interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2119-2130. Association for Computational Linguistics.
|
| 306 |
+
Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.
|
| 307 |
+
Jason D. Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient
|
| 308 |
+
|
| 309 |
+
end-to-end dialog control with supervised and reinforcement learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 665-677. Association for Computational Linguistics.
|
| 310 |
+
Thomas Wolf, Victor Sanh, Julien Chaumont, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. CoRR, abs/1901.08149.
|
| 311 |
+
Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 808-819. Association for Computational Linguistics.
|
| 312 |
+
Chien-Sheng Wu and Caiming Xiong. 2020. Probing task-oriented dialogue representation from language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Virtual Online, November 16 - November 20, 2020, pages 5036-5051, Online. Association for Computational Linguistics.
|
| 313 |
+
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2204-2213. Association for Computational Linguistics.
|
| 314 |
+
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. Dialogpt: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, ACL 2020, Online, July 5-10, 2020, pages 270-278. Association for Computational Linguistics.
|
| 315 |
+
Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu 0001. 2018. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4623-4629. ijcai.org.
|
| 316 |
+
|
| 317 |
+
# A Agent Training Details

Optimizing agent performance can be split into three phases: preparation before the HIT (Human Intelligence Task), the HIT itself, and ongoing training afterwards. In the pre-HIT phase, the major steps center around multiple rounds of qualifications to filter for the highest-quality workers available. In the post-HIT phase, effort shifts to ensuring that each worker becomes increasingly comfortable with the task.

Pre-HIT Phase Qualifications take the form of online quizzes, which serve to train motivated workers in addition to simply removing unqualified ones. When designing the qualification, the number and style of questions were iterated on to limit the feeling of a tight time constraint while still remaining quite difficult. In fact, some agents who had previously held actual customer service jobs mentioned they felt like they were right back at the office. This difficulty resulted in a high rejection rate, which was costly because we paid Turkers $2 regardless of whether they passed the exam (with a larger $8 bonus for passing). However, the cost was well worth the trade-off, since having high-quality agents would pay dividends down the road.

To move efficiently, we leaned heavily on multiple-choice questions and MTurk APIs to help automate grading and assignment of qualifications. Finally, we learned that including screenshots of the Agent Dashboard in the quizzes was a great way to familiarize the agents with the platform before they performed the actual task.

During-HIT Phase The HIT itself was priced at $1.50 for completing the conversation, with an extra $1.00 bonus for identifying the correct customer intent in the end-of-chat survey. Since agents are naturally focused on finishing as quickly as possible, they would often take only the customer's requests into account, bypassing a key characteristic of what makes ABCD unique. By encouraging agents to focus on the customer intent, however, they were forced to peruse the Agent Guidelines for the associated subflow. We thus found this incentive critical for aligning agent behaviors with optimal outcomes.

Post-HIT Phase For ongoing training, we began producing short lists of bullet points for the agents on areas where they could improve. Furthermore, we would highlight examples of good and bad decision-making and of appropriate behavior when representing the fictitious "AcmeBrands" retail company. Finally, we also recorded videos which gave agents a view of how an "ideal" agent would behave at every step of the chat. We found that by engaging with the Turkers through the group chat and respecting their feedback, they were very willing to work on improving despite having no extra monetary incentive to do so.

Figure 3: Crowdworker feedback in the chat platform after the completion of the final batch of data collection.

In total, the agents were quite wonderful to work with, and their end-of-task feedback strongly suggests they enjoyed the process as well (see Figure 3). We credit this to the training details mentioned in this section and to the development of the Expert Live Chat procedure.

# B Optimizing Available Workers

In a regular Mechanical Turk (MTurk) setup, HITs are made available to a large audience who can pick up as many or as few as they want. Expert Live Chat dictates a dialogue between two speakers, so we need two types of workers: agents and customers. Let the number of available agents be $A$ and the number of available customers be $C$. Given budget constraints, we can only pay some maximum number of workers $M$. Simultaneously, given time constraints, we need a minimum number of conversations collected per week, which is a function of the number of available workers, $N = f(A, C)$. This leads to three issues that must be considered in conjunction:

$\mathbf{N} < \mathbf{A} + \mathbf{C} < \mathbf{M}$ Operating the Agent Dashboard requires a highly skilled worker, so efficient data collection is limited by the number of available agents. Although the customer side of ABCD is a simpler task, there is still a minimum bar to be met to prevent (a) customers who spam with random text, (b) customers who fake scenarios, or (c) customers who hoard HITs and never show up to the chat. Thus, there need to be sufficient numbers of both agents $A$ and customers $C$ qualified and available in order to surpass the minimum threshold set by $N$. However, simply paying more per HIT bumps up against the limit set by $M$.

$\mathbf{C} \gg \mathbf{A}$ Since training agents is more resource-intensive than training customers, it makes sense to simply have more of the latter. Yet doing so leads to an issue where customers wait around for agents after arriving in the waitroom. In a typical scenario, a customer might leave the tab open to work on other tasks, but when they are eventually paired, the customer is often busy doing something else, leaving the chat to flounder. In the worst case, the customer starts to verbally abuse the agent about the long wait time when they are finally paired.

$\mathbf{A} \gg \mathbf{C}$ Finding as many agents as possible is not the solution either, because then the agents end up waiting around for customers. If the waiting periods are too long, agents will abandon the task and disparage your reputation on various forms of social media. Since the task is difficult, the pool of workers who may eventually qualify as agents is finite, so too many poor interactions can halt the data collection process completely.

To resolve this situation, we begin with the maximum number of workers $M$ as the starting constraint given a fixed budget. If we qualify too many workers, we will not have enough budget left for the actual conversations, so instead we qualify workers in mini-batches. Since the pool of potential workers who may meet the strict requirements for agents $A$ is more limited than the pool of customer candidates $C$, we start on the agent side. Of the qualified agents, only a percentage will show up at the desired time slot to perform the task. Thus, we increment the batch size until the number of available workers passes the minimum, $A > N / 2$.

To limit the number of customers who show up, we filter for users by location, number of completed HITs, and a sufficient rating. We also administer an exam that is purposely very easy (to minimize costs), but just hard enough to deter bots and spammers. To raise the likelihood that a customer will show up, we include a required question in the quiz asking when the customer is available to perform the HIT, and we emphasize its significance so workers take it seriously. This allows us to tune the customer count such that $C \approx A$.

Note that due to the higher pay rate, agents are more likely to show up than customers, so there needs to be a higher ratio of customers to account for this imbalance. For some intuition on where to start, we found a good rule of thumb was to set the appearance ratio inversely proportional to the ratio of pay. One final insight is to make the HITs heavily dependent on bonus pay, with base pay kept very low. This keeps spammers away, since they end up with a pittance when attempting to game the system.

To improve the waitroom experience, we added a feature where a user's place in the queue is updated live, along with a timer indicating the expected wait. For Turkers willing to wait around, helpful and encouraging messages are also displayed to keep them occupied. Alternatively, for Turkers who are multi-tasking, visual and audio notifications signal the start of a chat, allowing them to attend to other tasks in the meantime. We believe our modifications have only scratched the surface and that improving the user experience for data collection offers an interesting line of HCI research to explore.
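
As a rough illustration of the sizing logic above, the sketch below computes how many agents and customers to qualify from an assumed agent show-up rate and the inverse pay-ratio rule of thumb. The function name, the linear show-rate model, and the example numbers are all illustrative assumptions, not values from the paper.

```python
def qualification_targets(n_min, agent_show_rate, agent_pay, customer_pay):
    """Back-of-envelope pool sizing for Expert Live Chat.

    n_min           -- minimum weekly conversations N
    agent_show_rate -- assumed fraction of qualified agents who show up
    """
    # Need enough agents showing up to cover half the weekly minimum N.
    agents_to_qualify = int(n_min / 2 / agent_show_rate) + 1
    # Rule of thumb: show-up rate is inversely proportional to pay ratio,
    # so lower-paid customers appear less reliably than agents.
    customer_show_rate = agent_show_rate * (customer_pay / agent_pay)
    customers_to_qualify = int(n_min / 2 / customer_show_rate) + 1
    return agents_to_qualify, customers_to_qualify
```

With equal pay the two pools come out the same size; halving customer pay roughly doubles the customer pool needed.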
# C Cascading Evaluation

To motivate cascading dialogue success (CDS) over other typical accuracy metrics, consider a scenario where a model gets $80\%$ of turns correct, yet achieves $0\%$ accuracy at the conversation level because it always makes a mistake somewhere near the end of the dialogue. A turn-based metric would over-estimate performance, since it fails to capture the model's consistent shortcomings in closing conversations. On the other hand, a conversation-based metric would under-estimate performance, since it fails to account for the fact that the system is mostly successful. Moreover, each evaluation would occur only once per conversation, which makes inefficient use of scarce data.

Instead, cascading dialogue success creates an evaluation example for the remainder of the conversation starting from each turn. For example, suppose a chat contains four turns [A, B, C, D]; evaluation instances can then be created from this data that include [A, B, C, D], [B, C, D], [C, D], and [D] by itself. Now imagine the model consistently predicts turn $C$ incorrectly and everything else correctly. Its scores would be $2/4$, $1/3$, $0$, and $1$, respectively. Averaging across all turns yields a final cascading success rate of $45.8\%$, whereas a turn-based metric would yield $75\%$ and a conversation-based metric would yield $0\%$. Thus, CDS allows a model to earn partial credit for what it has learned without severe penalties in either direction.
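
The scoring just described can be sketched directly: for every suffix of the conversation, count the turns handled correctly before the first mistake, then average across suffixes. This is a minimal re-implementation of the metric as described, not the released evaluation code.

```python
def cascading_success(turn_correct):
    """Cascading dialogue success: average, over every suffix of the
    conversation, of the fraction of turns handled correctly before
    the first mistake in that suffix."""
    n = len(turn_correct)
    scores = []
    for start in range(n):
        correct = 0
        for ok in turn_correct[start:]:
            if not ok:
                break  # a mistake ends the cascade for this suffix
            correct += 1
        scores.append(correct / (n - start))
    return sum(scores) / n

# The worked example: turns [A, B, C, D] with only C predicted wrong
# gives suffix scores 2/4, 1/3, 0, 1, averaging to about 45.8%.
cascading_success([True, True, False, True])  # -> 0.4583...
```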
# D Model Training Hyperparameters

When training the best model for Action State Tracking, we ended up with a learning rate of 3e-5, a hidden dimension of 1024, a weight decay of 0.05, and a batch size of 10 examples. Training lasted for 14 epochs, where we early-stopped if overall accuracy failed to improve for three epochs in a row. The RAdam optimizer had a linear warm-up for three epochs, with its hyperparameters kept at their defaults of 0.9 and 0.999. We also added the lexicalized slots to the vocabulary of the tokenizer.

For Cascading Dialogue Success, our best model had a learning rate of 1e-5, a hidden dimension of 1024, and no weight decay. The batch size was shrunk to 3 examples, though this was due purely to memory rather than performance reasons. Training was set to 21 epochs, and again we early-stopped if overall accuracy failed to improve for three epochs in a row. Finally, the optimizer again had a linear warm-up for three epochs with hyperparameters kept at their defaults.
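
The early-stopping rule used in both setups (halt once overall accuracy fails to improve for three consecutive epochs) can be sketched as follows; this is an illustrative re-implementation, not the authors' training code.

```python
def early_stop(acc_history, patience=3):
    """Return True once overall accuracy has failed to improve for
    `patience` consecutive epochs, given per-epoch accuracies."""
    best = float("-inf")
    epochs_since_best = 0
    for acc in acc_history:
        if acc > best:
            best = acc
            epochs_since_best = 0
        else:
            epochs_since_best += 1
        if epochs_since_best >= patience:
            return True
    return False
```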
# E Intent Info and Guidelines

We augment the model with access to intent information in two ways. First, the subflow is translated into an index which is concatenated to all input contexts, so the model can leverage this information. Second, the intent classifier is directly fed the solution, which is what allows it to trivially reach perfect accuracy.

We leverage the Agent Guidelines by using them to mask invalid action predictions. More specifically, given a predicted subflow, the guidelines outline all possible actions and values within that subflow. With this information, a mask is created before training and applied during evaluation to allow only valid actions.
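
A minimal sketch of the masking step, assuming logits indexed by action name; the identifiers here are illustrative rather than taken from the released code.

```python
import math

def mask_action_logits(logits, valid_actions, all_actions):
    """Send the logits of actions not permitted by the guidelines for
    the predicted subflow to -inf, so only valid actions survive."""
    return [
        logit if action in valid_actions else -math.inf
        for logit, action in zip(logits, all_actions)
    ]
```

After masking, an argmax over the logits can only select an action the guidelines allow for that subflow.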
# F Conversation Examples

Since ABCD was collected using Expert Live Chat rather than templates, we observe a variety of linguistic phenomena in the chats. These phenomena limit the ability of models to memorize artificial patterns when making predictions.

# Co-reference

CUS: I'd like to return something

AGT: OK

AGT: Can I get your full name

AGT: Also user name, email address, order id

AGT: Membership level and reason for return

CUS: Alessandro Phoenix, aphoenix872@email.com, order ID is 4024067912

CUS: I'm at the Gold level. I'm returning it because it's the wrong size

# Chit-Chat

AGT: Do you need any more help?

CUS: a break, I need a coffee break.

CUS: but no, nothing from you

CUS: thanks for the save

AGT: Haha have a good break! And have an even better day.

# Emotion

AGT: Ok, there was a mistake made. Do you have the Shipping Status?

CUS: It's in transit

AGT: Ok, that means it's already out for shipment.

CUS: so two are being sent?

AGT: Yes. Unfortunately that means when you get the item you will need to call back and make a return

CUS: oh you gotta be kidding me!

Table 5: Examples of linguistic phenomena.

# G Agent Guidelines

(screenshot on following page)

# H Customer Panel

(screenshot on following page)

# Manage Account Flow

# Status - Service Added

The customer believes that an unknown service was added to their account.

- [Interaction] Ask the customer for their Full Name or Account ID with [Pull up Account]. This loads information in the background related to this user.
- [Interaction] Find out how the customer heard about this issue. Options are one of the three: ["email", "spouse", "news"]
  - Enter this into the input box and [Record Reason]
- [KB Query] To decide whether or not this is a valid error:
  - Tell the customer you will check your system about this problem.
  - You do not have to enter anything into the field.
  - Then [Ask the Oracle] if the customer made an error, which will return Yes or No.
- <Communication> If the customer is mistaken (Oracle says No), then explain:
  - If the customer was sent an email, let them know this was a scam email; if they check the sender, it isn't from the official AcmeCorp marketing team.
  - If the customer was told this by their spouse, let them know they must have misheard. They have nothing to worry about and there are no extra charges.
  - If the customer saw this in the news, then let them know nicely that they must have misunderstood. They have nothing to worry about and there are no extra charges.
- [Interaction] If the customer is correct (Oracle says Yes), then remove the extra service:
  - Enter in "remove service" and use the [Update Account] button.
  - Tell the customer that the extra service has been removed.
- [Interaction] If it is a legitimate error, then you can offer a refund to the customer:
  - The amount should be for $40.
  - Enter "40" in the field and then click on [Offer Refund].

If nothing works, just apologize and end the conversation.

# Customer Service Site

Figure 4: An example subflow, Status - Service Added, under the Manage Account Flow in the Agent Guidelines.
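
For illustration, the subflow above could be rendered as a machine-readable structure of permitted actions and values, which is the kind of information used to mask invalid predictions in Appendix E. The field names and encoding here are invented, not part of the released dataset format.

```python
# Hypothetical machine-readable rendering of the "Status - Service
# Added" subflow; action names come from the guideline buttons above.
GUIDELINES = {
    "status_service_added": {
        "actions": ["pull-up-account", "record-reason", "ask-the-oracle",
                    "update-account", "offer-refund"],
        "values": {"record-reason": ["email", "spouse", "news"],
                   "offer-refund": ["40"]},
    },
}

def valid_actions(subflow):
    """Look up the actions a predicted subflow permits."""
    return GUIDELINES[subflow]["actions"]
```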

# Leave Early

If asked about your preferences, pick anything you like. You want to get a refund because you just bought the item described below, but changed your mind. Explain your problem to the agent, provide any information that is requested, and attempt to get your refund before the item ships. Your refund amount is the price of the product below.

# Personal Info

Name: Albert Sanders

Phone Number: (099) 109-0570

Member Level: Silver

Email Address: as205981@email.com

Account ID: WCUSTYQERI

Username: as205981

# Order Statement

Purchase Date: 2020-01-02

Order ID: 6449343909

Payment Method: Credit Card

Address: 9990 Kennedy St

La Fayette, CA 58913

In Original Packaging: Yes

Product: Jeans

Brand: Guess

Amount: $49

Figure 5: The customer chat interface (left) shows an ongoing conversation with customer messages in grey, agent messages in blue, and actions in green. The customer prompt (top right) grounds the customer in a specific issue and backstory. The info sections (middle and bottom right) contain values that the customer has to provide in the conversation, as well as other metadata such as product information.
actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ded62f65f0de45a05efd82390a8975c412176a3ba22f17b04be96c8c8699520f
size 539966

actionbasedconversationsdatasetacorpusforbuildingmoreindepthtaskorienteddialoguesystems/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da875f629e666e437a7705e7bb2dd2c0a9c9fc94d3d5f3f9c8b21b805b111f5f
size 509702

active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/a3588c8d-885e-4bb1-a6ad-11f288633471_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59219b65b376fe94d1d6b078df9445842bf51c57a8b9940c5ed9283a3fe822e3
size 91459

active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/a3588c8d-885e-4bb1-a6ad-11f288633471_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f778e654872e79b01da3d0692de3d81824768e18a62852b36ae3bec2e9a5ca17
size 105975

active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/a3588c8d-885e-4bb1-a6ad-11f288633471_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:716a1fa993043c6d6eb8fe930a07e7f40b354d2d2790d016198f3076f2f83bdb
size 4853333

active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/full.md
ADDED
@@ -0,0 +1,382 @@
# Active $^2$ Learning: Actively reducing redundancies in Active Learning methods for Sequence Tagging and Machine Translation

Rishi Hazra, Parag Dutta, Shubham Gupta, Mohammed Abdul Qaathir, Ambedkar Dukkipati

{rishihazra, paragdutta, shubhamg, mohammedq, ambedkar}@iisc.ac.in

Indian Institute of Science, Bangalore

# Abstract

While deep learning is a powerful tool for natural language processing (NLP) problems, successful solutions to these problems rely heavily on large amounts of annotated samples. However, manually annotating data is expensive and time-consuming. Active Learning (AL) strategies reduce the need for huge volumes of labeled data by iteratively selecting a small number of examples for manual annotation based on their estimated utility in training the given model. In this paper, we argue that since AL strategies choose examples independently, they may potentially select similar examples, all of which may not contribute significantly to the learning process. Our proposed approach, Active $^2$ Learning $(\mathrm{A}^2\mathrm{L})$ , actively adapts to the deep learning model being trained to eliminate such redundant examples chosen by an AL strategy. We show that $\mathrm{A}^2\mathrm{L}$ is widely applicable by using it in conjunction with several different AL strategies and NLP tasks. We empirically demonstrate that the proposed approach is further able to reduce the data requirements of state-of-the-art AL strategies by $\approx 3 - 25\%$ on an absolute scale on multiple NLP tasks while achieving the same performance with virtually no additional computation overhead.

# 1 Introduction

Active Learning (AL) (Freund et al., 1997; McCallum and Nigam, 1998) reduces the need for large quantities of labeled data by intelligently selecting unlabeled examples for expert annotation in an iterative process. Many Natural Language Processing (NLP) tasks like sequence tagging (NER, POS) and Neural Machine Translation (NMT) are very data-intensive and require a meticulous, time-consuming, and costly annotation process. On the other hand, unlabeled data is practically unlimited. Due to this, many researchers have explored applications of active learning for NLP (Thompson et al., 1999; Figueroa et al., 2012). A general AL method proceeds as follows: (i) The partially trained model for a given task is used to (possibly incorrectly) annotate the unlabeled examples. (ii) An active learning strategy selects a subset of the newly labeled examples via a criterion that quantifies the perceived utility of examples in training the model. (iii) The experts verify/improve the annotations for the selected examples. (iv) These examples are added to the training set, and the process repeats. AL strategies differ in the criterion used in step (ii).
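
Steps (i)-(iv) can be sketched as one pool-based AL iteration. This is a generic illustration assuming a scikit-learn-style `fit` method; all function names are illustrative, not from a specific library.

```python
def active_learning_round(model, train, pool, annotate, utility, budget):
    """One iteration of the generic AL loop described above.

    utility  -- the AL strategy's scoring criterion for an example
    annotate -- the expert oracle providing verified labels
    """
    model.fit(train)                                     # (i) partial training
    ranked = sorted(pool, key=lambda x: utility(model, x), reverse=True)
    chosen = ranked[:budget]                             # (ii) select by utility
    train = train + [(x, annotate(x)) for x in chosen]   # (iii)+(iv) label, add
    pool = [x for x in pool if x not in chosen]
    return model, train, pool
```

Repeating this round until the labeling budget is exhausted recovers the full iterative process; the redundancy problem discussed next arises inside the `sorted`/`chosen` step, where similar examples receive similar utility scores and are selected together.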

We claim that all AL strategies select redundant examples in step (ii). If one example satisfies the selection criterion, then many other similar examples will also satisfy it (see the next paragraph for details). As the examples are selected independently, AL strategies redundantly choose all of these examples even though, in practice, it is enough to label only a few of them (ideally just one) for training the model. This leads to higher annotation costs, wastage of resources, and reduced effectiveness of AL strategies. This paper addresses this problem by proposing a new approach called $\mathrm{A}^2\mathrm{L}$ (read as active-squared learning) that further reduces the redundancies of existing AL strategies.

Any approach for eliminating redundant examples must have the following qualities: (i) The redundancy should be evaluated in the context of the trained model. (ii) The approach should apply to a wide variety of commonly used models in NLP. (iii) It should be compatible with several existing AL strategies. The first point merits more explanation. As a model is trained, depending on the downstream task, it learns to focus on certain properties of the input. Examples that share these properties (for instance, the sentence structure) are similar from the model's perspective. If the model is confused about one such example, it will likely be confused about all of them. We refer to a similarity measure that is computed in the context of a model as a model-aware similarity (Section 3.1).
Contributions: (i) We propose a Siamese twin network (Bromley et al., 1994; Mueller and Thyagarajan, 2016) based method for computing model-aware similarity to eliminate redundant examples chosen by an AL strategy. This Siamese network actively adapts itself to the underlying model as training progresses. We then cluster examples based on the similarity scores and eliminate redundant ones. (ii) We develop a second, computationally more efficient approach that approximates the first with a minimal drop in performance by avoiding the explicit clustering step. Both approaches have the desirable properties mentioned above. (iii) We experiment with several AL strategies and NLP tasks to empirically demonstrate that our approaches are widely applicable and significantly reduce the data requirements of existing AL strategies while achieving the same performance. To the best of our knowledge, we are the first to identify the importance of model-aware similarity and to exploit it for addressing the problem of redundancy in AL.
# 2 Related Work
Active learning has a long and successful history in the field of machine learning (Dasgupta et al., 2009; Awasthi et al., 2017). However, as learning models have become more complex, especially with the advent of deep learning, the known theoretical results for active learning are no longer applicable (Shen et al., 2018). This has prompted a diverse range of heuristics for adapting the active learning framework to deep learning models (Shen et al., 2018). Many AL strategies have been proposed (Sha and Saul, 2007; Haffari et al., 2009; Bloodgood and Callison-Burch, 2010; Blundell et al., 2015; Gal and Ghahramani, 2016a); however, since they choose examples independently, the problem of redundancy (Section 1) applies to all of them.
We experiment with various NLP tasks such as named entity recognition (NER) (Nadeau and Sekine, 2007), part-of-speech tagging (POS) (Marcus et al., 1993), and neural machine translation (NMT) (Hutchins, 2004; Nepveu et al., 2004; Bahdanau et al., 2014; Cho et al., 2014; Sutskever et al., 2014; Ortiz-Martínez, 2016), among others (Landes et al., 1998; Tjong Kim Sang and Buchholz, 2000). The chosen tasks form the backbone of many practical problems and are known to be computationally expensive during both training and inference. Many deep learning models have recently advanced the state of the art for these tasks (Bahdanau et al., 2014; Lample et al., 2016; Siddhant and Lipton, 2018). Our proposed approach is compatible with any NLP model, provided it supports the usage of an AL strategy.
Existing approaches have used model-independent similarity scores to promote diversity among the chosen examples. For instance, Chen et al. (2015) use cosine similarity to pre-compute pairwise similarities between examples. We instead argue in favor of model-aware similarity scores and learn an expressive notion of similarity using neural networks. We compare our approach with a modified version of this baseline that uses cosine similarity on InferSent embeddings (Conneau et al., 2017).
# 3 Proposed Approaches
We use $\mathcal{M}$ to denote the model being trained for a given task. $\mathcal{M}$ has an encoder module for encoding input sentences; for instance, the encoder in $\mathcal{M}$ may be modeled by an LSTM (Hochreiter and Schmidhuber, 1997).
# 3.1 Model-Aware Similarity Computation
A measure of similarity between examples is required to discover redundancy. The simplest solution is to compute the cosine similarity between input sentences (Chen et al., 2015; Shen et al., 2018) using, for instance, InferSent encodings (Conneau et al., 2017). However, sentences that have a low cosine similarity may still be similar in the context of the downstream task. Model $\mathcal{M}$ has no incentive to distinguish among such examples. A good strategy is to label a set of sentences that is diverse from the perspective of the model. For example, it is unnecessary to label sentences that use different verb forms but are otherwise similar if the task is agnostic to the tense of the sentence. A straightforward extension of cosine similarity to the encodings generated by model $\mathcal{M}$ achieves this. However, such a simplistic approach would likely be incapable of discovering complex similarity patterns in the data. Next, we describe two approaches that use more expressive model-aware similarity measures.
# 3.2 Model-Aware Siamese
Algorithm 1: Active$^2$ Learning

Data: $\mathcal{D}_1$: task dataset; $\mathcal{D}_2$: auxiliary similarity dataset
Input: $\mathcal{D}\gets 2\%$ of dataset $\mathcal{D}_1$; $\bar{\mathcal{D}}\gets \mathcal{D}_1 - \mathcal{D}$ // unlabeled data
Output: labeled data $\mathcal{D}$
Initialization: $\mathcal{M}\gets$ TRAIN($\mathcal{D}$); $\mathcal{M}_{A^2L}\gets$ TRAIN($\mathcal{M}(\mathcal{D}_2)$)
for $i\gets 1$ to $l$ do:
- $\mathcal{S}\gets \mathcal{AL}(\bar{\mathcal{D}})$ // top $2\%$ confused samples
- if Model-Aware Siamese then: for each pair $(s_m, s_n)$ in $\mathcal{S}$ do $S[m,n]\gets \mathcal{M}_{A^2L}(s_m, s_n)$; $\mathcal{R}\gets$ CLUSTER($S$)
- else (Integrated Clustering): $\mathcal{R}\gets \mathcal{M}_{A^2L}(\mathcal{S})$
- $\mathcal{R}\gets$ ANNOTATE($\mathcal{R}$)
- $\mathcal{D}\gets \mathcal{D}\cup \mathcal{R}$; $\mathcal{M}\gets$ RETRAIN($\mathcal{D}$)

In this approach, we use a Siamese twin network (Bromley et al., 1994) to compute the pairwise similarity between encodings obtained from model $\mathcal{M}$. A Siamese twin network consists of an encoder (called the Siamese encoder) that feeds on the output of model $\mathcal{M}$'s encoder. The outputs of the Siamese encoder are used for computing the similarity between each pair of examples $a$ and $b$ as:
$$
\mathrm{sim}(a, b) = \exp\left(-\|\mathbf{o}^{a} - \mathbf{o}^{b}\|_{2}\right), \tag{1}
$$
where $\mathbf{o}^a$ and $\mathbf{o}^b$ are the outputs of the Siamese encoder for sentences $a$ and $b$ respectively. Let $N$ denote the number of examples chosen by an AL strategy. We use the Siamese network to compute the entries of an $N\times N$ similarity matrix $\mathbf{S}$ where the entry $S_{ab} = \mathrm{sim}(a,b)$ . We then use the spectral clustering algorithm (Ng et al., 2002) on the similarity matrix $\mathbf{S}$ to group similar examples. A fixed number of examples from each cluster are added to the training dataset after annotation by experts.
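The pairwise computation can be sketched in pure Python as follows (a minimal illustration; the encoder outputs below are placeholder vectors, not real Siamese encodings, and the function names are ours). Note that the N encoder outputs are computed once up front, so Equation (1) is evaluated O(N^2) times on cached vectors rather than running the encoder O(N^2) times:

```python
import math

def siamese_similarity(o_a, o_b):
    # Eq. (1): sim(a, b) = exp(-||o_a - o_b||_2)
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(o_a, o_b)))
    return math.exp(-dist)

def similarity_matrix(outputs):
    # Build the N x N matrix S with S[a][b] = sim(a, b), where
    # `outputs` holds one Siamese-encoder output vector per example
    # chosen by the AL strategy.
    n = len(outputs)
    return [[siamese_similarity(outputs[a], outputs[b]) for b in range(n)]
            for a in range(n)]
```

The resulting matrix is what the spectral clustering step consumes; identical encodings get similarity 1, and the matrix is symmetric by construction.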
We train the Siamese encoder to predict the similarity between sentences from the SICK (Sentences Involving Compositional Knowledge) dataset (Marelli et al., 2014) using the mean squared error. This dataset contains pairs of sentences with manually annotated similarity scores. The sentences are encoded using the encoder in $\mathcal{M}$ and then passed on to the Siamese encoder for computing similarities. The encoder in $\mathcal{M}$ is kept fixed while training the Siamese encoder. The trained Siamese encoder is then used for computing similarity between sentences selected by an AL strategy for the given NLP task, as described above. As $\mathcal{M}$ is trained over time, the distribution of its encoder output changes, and hence we periodically retrain the Siamese network to sustain its model-awareness.
The number of clusters and the number of examples drawn from each cluster are user-specified hyper-parameters. The similarity computation can be done efficiently by computing the output of the Siamese encoder for all $N$ examples before evaluating equation 1, instead of running the Siamese encoder $O(N^2)$ times. The clustering algorithm runs in $O(N^3)$ time. For an AL strategy to be useful, it should select a small number of examples to benefit from interactive and intelligent labeling. We expect $N$ to be small for most practical problems, in which case the computational complexity added by our approach would only be a small fraction of the overall computational complexity of training the model with active learning (see Figure 1).
# 3.3 Integrated Clustering Model
While the approach described in Section 3.2 works well for small to moderate values of $N$, it suffers from a computational bottleneck when $N$ is large. To remedy this, we integrate the clustering step into the similarity computation step (see Figure 1) and call the resultant approach the Integrated Clustering Model (Int Model). Here, the output of model $\mathcal{M}$'s encoder is fed to a clustering neural network $\mathcal{C}$ that has $K$ output units with the softmax activation function. These units correspond to the $K$ clusters, and each example is directly assigned to one of the clusters based on the softmax output.
To train the network $\mathcal{C}$, we choose a pair of similar examples (say $a$ and $b$) and randomly select a negative example (say $c$). We experimented with both the SICK and Quora Pairs datasets<sup>3</sup>. All examples are encoded via the encoder of model $\mathcal{M}$ and then passed to network $\mathcal{C}$. The unit with the highest probability value for $a$ is treated as the ground-truth class for $b$. Minimizing the objective given below maximizes the probability of $b$ belonging to its ground-truth class while minimizing the probability of $c$ belonging to the same class:
$$
\mathcal{L}(a, b, c) = -\lambda_{1} \log p_{i_{a}}^{b} - \lambda_{2} \log\left(1 - p_{i_{a}}^{c}\right) + \lambda_{3} \sum_{k=1}^{K} p_{k}^{b} \log p_{k}^{b}. \tag{2}
$$
Here $\lambda_1, \lambda_2$, and $\lambda_3$ are user-specified hyper-parameters, $p_j^x$ is the softmax output of the $j^{th}$ unit for example $x$, $j = 1, 2, \ldots, K$, $x = a, b, c$, and $i_{a} = \arg\max_{j\in\{1,2,\dots,K\}} p_{j}^{a}$. The third term encourages the utilization of all $K$ units across examples in the dataset. As before, a trained network $\mathcal{C}$ is used for clustering the examples chosen by an AL strategy, and we select a fixed number of examples from each cluster for manual annotation.
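A minimal sketch of this objective evaluated on raw softmax outputs (the function name and the $\lambda$ values below are illustrative, not the paper's settings):

```python
import math

def integrated_clustering_loss(p_a, p_b, p_c, lambdas=(1.0, 1.0, 0.1)):
    # Eq. (2). p_a, p_b, p_c are softmax outputs over the K cluster
    # units for the anchor a, the similar example b, and the random
    # negative c. i_a (the anchor's argmax unit) acts as the pseudo
    # ground-truth cluster for b; c is pushed away from that unit.
    l1, l2, l3 = lambdas  # illustrative hyper-parameter values
    i_a = max(range(len(p_a)), key=lambda j: p_a[j])
    loss = -l1 * math.log(p_b[i_a])
    loss += -l2 * math.log(1.0 - p_c[i_a])
    # Third term of Eq. (2), computed over b's distribution.
    loss += l3 * sum(pk * math.log(pk) for pk in p_b if pk > 0)
    return loss
```

At inference time, an example is simply assigned to the cluster whose unit has the highest softmax output, so no separate clustering pass is needed.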
It is important to note that: (i) These methods are not AL strategies; rather, they can be used in conjunction with any existing AL strategy. Moreover, given a suitable Siamese encoder or clustering network $\mathcal{C}$, they apply to any model $\mathcal{M}$. (ii) Our methods compute model-aware similarity since the input to the Siamese or clustering network is encoded using the model $\mathcal{M}$. The proposed networks also adapt to the underlying model as training progresses. Algorithm 1 describes our general approach, called Active$^2$ Learning.
# 4 Experiments
We establish the effectiveness of our approaches by demonstrating that they: (i) work well across a variety of NLP tasks and models, (ii) are compatible with several popular AL strategies, and (iii) further reduce the data requirements of existing AL strategies, while achieving the same performance. In particular, we experiment<sup>1</sup> with two broad categories of NLP tasks: (a) Sequence Tagging (b) Neural Machine Translation. Table 1 lists these tasks and information about the corresponding datasets (including the two auxiliary datasets for training the Siamese network (Section 3.2)) used in our experiments. We begin by describing the AL strategies for the two kinds of NLP tasks.
# 4.1 Active Learning Strategies for Sequence Tagging
Margin-based strategy: Let $s(\mathbf{y}) = \mathrm{P}_{\theta}(\mathbf{Y} = \mathbf{y}|\mathbf{X} = \mathbf{x})$ be the score assigned by a model $\mathcal{M}$ with parameters $\theta$ to output $\mathbf{y}$ for a given example $\mathbf{x}$ . Margin is defined as the difference in scores obtained by the best scoring output $\mathbf{y}$ and the second best scoring output $\mathbf{y}'$ , i.e.:
$$
\mathrm{M}_{\text{margin}} = \max_{\mathbf{y}} s(\mathbf{y}) - \max_{\mathbf{y}' \neq \mathbf{y}_{\max}} s\left(\mathbf{y}'\right), \tag{3}
$$
where $\mathbf{y}_{\mathrm{max}} = \arg\max_{\mathbf{y}} s(\mathbf{y})$. The strategy selects examples for which $\mathrm{M}_{\mathrm{margin}}\leq \tau_1$, where $\tau_{1}$ is a hyper-parameter. We use the Viterbi algorithm (Ryan and Nudd, 1993) to compute the scores $s(\mathbf{y})$.
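As a sketch (the `score_fn` below is a stand-in for the Viterbi-computed output scores, and the example values are illustrative):

```python
def margin_score(scores):
    # Eq. (3): gap between the best and second-best scoring outputs.
    top2 = sorted(scores, reverse=True)[:2]
    return top2[0] - top2[1]

def select_by_margin(examples, score_fn, tau1):
    # Keep examples whose margin falls at or below the threshold tau1,
    # i.e. examples the model finds hardest to disambiguate.
    return [x for x in examples if margin_score(score_fn(x)) <= tau1]
```

A small margin means the model nearly ties its top two hypotheses, so such examples are the ones sent for expert annotation.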
Entropy-based strategy: All the NLP tasks that we consider require the model $\mathcal{M}$ to produce an output for each token in the sentence. Let $\mathbf{x}$ be an input sentence that contains $n(\mathbf{x})$ tokens and define $\bar{s}_j = \max_{o\in \mathcal{O}}\mathrm{P}_{\theta}(y_j = o|\mathbf{X} = \mathbf{x})$ to be the probability of the most likely output for the $j^{th}$ token in $\mathbf{x}$ . Here $\mathcal{O}$ is set of all possible outputs and $y_{j}$ is the output corresponding to the $j^{th}$ token in $\mathbf{x}$ . We define the normalized entropy score as:
$$
\mathrm{M}_{\text{entropy}} = -\frac{1}{n(\mathbf{x})} \sum_{j=1}^{n(\mathbf{x})} \bar{s}_{j} \log \bar{s}_{j}. \tag{4}
$$
The length normalization by $n(\mathbf{x})$ avoids a bias due to example length, as it may be undesirable to annotate only longer examples (Claveau and Kijak, 2017). The strategy selects examples with $\mathrm{M}_{\text{entropy}} \geq \tau_2$, where $\tau_2$ is a hyper-parameter.
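A direct transcription of Equation (4) (a minimal sketch; the per-token probabilities below are placeholders for the model's actual outputs):

```python
import math

def entropy_score(token_max_probs):
    # Eq. (4). token_max_probs[j] is s-bar_j, the probability of the
    # most likely output for the j-th token; the sum is normalized by
    # the sentence length n(x).
    n = len(token_max_probs)
    return -sum(p * math.log(p) for p in token_max_probs) / n
```

Sequences on which the model is confident (per-token probabilities near 1) score near 0, while uncertain sequences score higher and get selected.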
Bayesian Active Learning by Disagreement (BALD): Due to stochasticity, models that use dropout (Srivastava et al., 2014) produce a different output each time they are executed. BALD (Houlsby et al., 2011) exploits this variability in the predicted output to compute model uncertainty. Let $\mathbf{y}^{(t)}$ denote the best scoring output for $\mathbf{x}$ in the $t^{th}$ forward pass, and let $N$ be the number of forward passes with a fixed dropout rate, then:
$$
\mathrm{M}_{\text{bald}} = 1 - \frac{\operatorname{count}\left(\operatorname{mode}\left(\mathbf{y}^{(1)}, \dots, \mathbf{y}^{(N)}\right)\right)}{N}. \tag{5}
$$
Here the mode(·) operation finds the output that is repeated most often among $\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(N)}$, and the count(·) operation counts the number of times this output was encountered. This strategy selects examples with $\mathrm{M}_{\mathrm{bald}}\geq \tau_3$, where $\tau_3$ is a hyper-parameter.
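Equation (5) reduces to a mode count over the stochastic predictions, which can be sketched as (the toy tag sequences below are illustrative):

```python
from collections import Counter

def bald_score(pass_outputs):
    # Eq. (5): 1 - count(mode(y^(1), ..., y^(N))) / N, where
    # pass_outputs holds the best-scoring output sequence from each of
    # the N dropout-on forward passes.
    counts = Counter(tuple(y) for y in pass_outputs)
    return 1.0 - counts.most_common(1)[0][1] / len(pass_outputs)
```

If all N passes agree, the score is 0 (no model uncertainty); the more the passes disagree, the closer the score gets to 1.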
# 4.2 Active Learning Strategies for Neural Machine Translation $^{2}$
Least Confidence (LC): This strategy estimates the uncertainty of a trained model on a source sentence $\mathbf{x}$ by calculating the conditional probability of the prediction $\hat{\mathbf{y}}$ given the source sentence (Lewis and Catlett, 1994):
$$
\mathrm{M}_{\mathrm{LC}} = \frac{1}{n(\hat{\mathbf{y}})} \log \mathrm{P}(\hat{\mathbf{y}} \mid \mathbf{x}) \tag{6}
$$
A length normalization by $n(\hat{\mathbf{y}})$ (the length of the predicted translation $\hat{\mathbf{y}}$) is applied.
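Assuming the translation model factorizes $\mathrm{P}(\hat{\mathbf{y}} \mid \mathbf{x})$ into per-token probabilities (a common but here assumed setup), Equation (6) can be sketched as:

```python
import math

def least_confidence_score(token_probs):
    # Eq. (6): length-normalized log-probability of the prediction.
    # token_probs[i] is the model's probability for the i-th predicted
    # target token, so their product is P(y-hat | x).
    n = len(token_probs)
    return sum(math.log(p) for p in token_probs) / n
```

Lower (more negative) scores indicate less confident translations, which are the ones an LC strategy selects for annotation.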
<table><tr><td>Task</td><td>Dataset</td><td>#Train/#Test</td><td>Example (Input/Output)</td></tr><tr><td>NER</td><td>CoNLL 2003</td><td>14987 / 3584</td><td>Fischler proposed EU measures after reports from Britain B-PER O B-MISC O O O O B-LOC</td></tr><tr><td>POS</td><td>CoNLL 2003</td><td>14987 / 3584</td><td>He ended the World Cup on the wrong note PRP VBD DT NNP NNP IN DT JJ NN</td></tr><tr><td>CHUNK</td><td>CoNLL 2000</td><td>8936 / 2012</td><td>The dollar posted gains in quiet trading B-NP I-NP B-VP B-NP B-PP B-NP I-NP</td></tr><tr><td>SEMTR</td><td>SEMCOR<sup>2</sup></td><td>13851 / 4696</td><td>This section prevents the military departments O Mental Agentive O O Object</td></tr><tr><td>NMT</td><td>Europarl (en → es)</td><td>100000 / 29155</td><td>(1) that is almost a personal record for me this autumn! (2) es la mejor marca que he alcanzado este otoño.</td></tr><tr><td>AUX</td><td>SICK</td><td>9000 / 1000</td><td>(1) Two dogs are fighting. (2) Two dogs are wrestling and hugging. Similarity Score: 4 (out of 5)</td></tr><tr><td>AUX</td><td>Quora Pairs<sup>3</sup></td><td>16000 / 1000 (sets)<sup>5</sup></td><td>(1) How do I make friends? (2) How to make friends? Label: 1</td></tr></table>
Table 1: Task and dataset descriptions. AUX is the task of training the Siamese network (Section 3.2) or Integrated network $\mathcal{C}$ (Section 3.3). Citations: CoNLL 2003 (Sang and De Meulder, 2003), CoNLL 2000 (Tjong Kim Sang and Buchholz, 2000), SEMCOR $^3$ , Europarl (Koehn, 2005), SICK (Marelli et al., 2014), Quora Pairs $^4$ .
Coverage Sampling (CS): A translation model is said to cover the source sentence if it translates all of its tokens. Coverage is estimated by mapping each source token to its appropriate target token; without this, the model may suffer from under-translation or over-translation issues (Tu et al., 2016). Peris and Casacuberta (2018) proposed using translation coverage as a measure of uncertainty:
$$
\mathrm{M}_{\mathrm{CS}} = \frac{\sum_{j=1}^{n(\mathbf{x})} \log\left(\min\left(\sum_{i=1}^{n(\hat{\mathbf{y}})} \alpha_{i,j}, 1\right)\right)}{n(\mathbf{x})} \tag{7}
$$
Here $\alpha_{i,j}$ denotes the attention probability calculated by the model for the $j^{th}$ source word when predicting the $i^{th}$ target word. Note that the coverage score is close to 0 for samples whose source sentences the model (almost) fully covers.
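Given an attention matrix, Equation (7) can be sketched as follows (a minimal illustration; the attention values below are placeholders, and rows where a source token receives zero total attention are assumed away to keep the log well-defined):

```python
import math

def coverage_score(attention):
    # Eq. (7). attention[i][j] is alpha_{i,j}, the attention the i-th
    # target token places on the j-th source token; a source token's
    # coverage is the attention mass it receives, capped at 1.
    n_src = len(attention[0])
    total = 0.0
    for j in range(n_src):
        cov = min(sum(row[j] for row in attention), 1.0)
        total += math.log(cov)
    return total / n_src
```

A fully covered sentence scores 0; every under-covered source token contributes a negative log term, pulling the score down and flagging the example for selection.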
Attention Distraction Sampling (ADS): Peris and Casacuberta (2018) observed that, when translating an uncertain sample, the model's attention mechanism is distracted (dispersed throughout the sentence). Such samples yield attention probability distributions with light tails (e.g., the uniform distribution), which can be detected by computing the kurtosis of the attention weights for each target token $\mathbf{y}_i$:
$$
\operatorname{Kurt}(\mathbf{y}_{i}) = \frac{\frac{1}{n(\mathbf{x})} \sum_{j=1}^{n(\mathbf{x})} \left(\alpha_{i,j} - \frac{1}{n(\mathbf{x})}\right)^{4}}{\left(\frac{1}{n(\mathbf{x})} \sum_{j=1}^{n(\mathbf{x})} \left(\alpha_{i,j} - \frac{1}{n(\mathbf{x})}\right)^{2}\right)^{2}} \tag{8}
$$
where $\frac{1}{n(\mathbf{x})}$ is the mean of the distribution of the attention weights (for a target word) over the source words. The kurtosis value will be lower for distributions with light tails, so the average of the negative kurtosis values over all words in the target sentence is used as the distraction score.

Figure 1: Comparison of the time taken for one data selection step in the NMT task by the Model-Aware (MA) Siamese and Integrated Clustering (Int) models across different AL strategies. It can be observed that $\mathrm{A}^2\mathrm{L}$ adds a negligible overhead ($\approx \frac{1}{12}$ of the time taken by the AL strategy) to the overall process.
$$
\mathrm{M}_{\mathrm{ADS}} = \frac{\sum_{i=1}^{n(\mathbf{y})} -\operatorname{Kurt}\left(\mathbf{y}_{i}\right)}{n(\mathbf{y})} \tag{9}
$$
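Equations (8) and (9) combine into a short routine (a minimal sketch; the attention rows below are placeholders, assumed non-uniform so that the variance in the denominator is nonzero):

```python
def attention_distraction_score(attention):
    # Eqs. (8)-(9): average negative kurtosis of the per-target-token
    # attention distributions. attention[i] is target token i's
    # distribution over the n(x) source tokens; since each row sums to
    # 1, its mean is 1/n(x) by construction.
    n_src = len(attention[0])
    mean = 1.0 / n_src
    neg_kurts = []
    for row in attention:
        m4 = sum((a - mean) ** 4 for a in row) / n_src  # 4th central moment
        m2 = sum((a - mean) ** 2 for a in row) / n_src  # variance
        neg_kurts.append(-m4 / m2 ** 2)                 # -Kurt, Eq. (8)
    return sum(neg_kurts) / len(neg_kurts)              # Eq. (9)
```

Sharply peaked attention (high kurtosis) yields a very negative score, while dispersed, "distracted" attention yields a higher score, so high-scoring samples are the uncertain ones selected for annotation.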
# 4.3 Details about Training
For sequence tagging, we use two kinds of architectures: a CNN-BiLSTM-CRF model (CNN for character-level encoding, BiLSTM for word-level encoding) and a BiLSTM-BiLSTM-CRF model (BiLSTM for both character-level and word-level encoding) (Lample et al., 2016; Siddhant and Lipton, 2018). For the translation task, we use an LSTM-based encoder-decoder architecture with Bahdanau attention (Bahdanau et al., 2014). These models were chosen for their performance and ease of implementation.

<table><tr><td>Task</td><td>Dataset</td><td>% of train data used to reach 99% of full-data F-score</td><td>% less data required to reach 99% of full-data F-score</td></tr><tr><td>POS</td><td>CoNLL 2003</td><td>25%</td><td>16%</td></tr><tr><td>NER</td><td>CoNLL 2003</td><td>37%</td><td>3%</td></tr><tr><td>SEMTR</td><td>SEMCOR</td><td>35%</td><td>25%</td></tr><tr><td>CHUNK</td><td>CoNLL 2000</td><td>23%</td><td>11%</td></tr></table>

Table 2: Fraction of data used to reach full-dataset performance and the corresponding absolute percentage reduction in the data required over the None baseline (which uses the active learning strategy without the $\mathrm{A}^2\mathrm{L}$ step) for the best AL strategy (BALD in all cases). Refer to Fig. 8 in the Appendix for CHUNK plots.
The Siamese network used for model-aware similarity computation (Section 3.2) consists of two bidirectional LSTM (BiLSTM) encoders. We pass each sentence in a pair from the SICK dataset to model $\mathcal{M}$ and feed the resulting encodings to the Siamese BiLSTM encoder. The output is a concatenation of the terminal hidden states of the forward and backward LSTMs, which is used to compute the similarity score via Equation (1). As noted before, we keep model $\mathcal{M}$ fixed while training the Siamese encoders and use the trained Siamese encoders for computing similarity between examples chosen by an AL strategy. We maintain model-awareness by retraining the Siamese network after every 10 iterations.
The architecture of the clustering model $\mathcal{C}$ (Section 3.3) is similar to that of the Siamese encoder. Additionally, it has a linear layer with a softmax activation function that maps the concatenation of terminal hidden states of the forward and backward LSTMs to $K$ units, where $K$ is the number of clusters. To assign an input example to a cluster, we first pass it through the encoder in $\mathcal{M}$ and feed the resulting encodings to the clustering model $\mathcal{C}$ . The example is assigned to the cluster with the highest softmax output. This network is also retrained after every 10 iterations to retain model-awareness.
The initial data split used for training the model $\mathcal{M}$ was set to $2\%$ of randomly sampled data for sequence tagging ($20\%$ for NMT), in accordance with the splitting techniques used in the existing literature on AL (Siddhant and Lipton, 2018; Liu et al., 2018). The model is then used to provide input for training the Siamese/clustering network using the SICK/Quora Pairs datasets. At each iteration, we gradually add another $2\%$ of the data for sequence tagging ($5\%$ for NMT) by retrieving low-confidence examples using an AL strategy, followed by clustering to extract the most representative examples. We average the results over five independent runs with randomly chosen initial splits. [Hyper-parameter details in Appendix A.]
# 4.4 Baselines
We claim that $\mathrm{A}^2\mathrm{L}$ mitigates the redundancies in the existing AL strategies by working in conjunction with them. We validate our claims by comparing our approaches with three baselines that highlight the importance of various components.
Cosine: Clustering is done based on the cosine similarity between the last output encodings (corresponding to the sentence length) from the encoder in $\mathcal{M}$. Although this similarity computation is model-aware, it is simplistic; comparing against it shows the benefit of using a more expressive similarity measure.
None: In this baseline, we use the AL strategy without applying Active$^2$ Learning to remove redundant examples. This validates our claim about redundancy in the examples chosen by AL strategies.
Random: No active learning is used; random examples are selected at each step.
# 4.5 Ablation Studies
We perform ablation studies to demonstrate the utility of model-awareness using these baselines:
InferSent: Clustering is done based on the cosine similarity between sentence embeddings (Chen et al., 2015) obtained from a pre-trained InferSent model (Conneau et al., 2017). This similarity computation is not model-aware, and the comparison shows the utility of model-aware similarity computation.
Iso Siamese: To show that the Siamese network alone is not sufficient and that model-awareness is needed, in this baseline we train the Siamese network directly on GloVe embeddings of the words rather than on the output of model $\mathcal{M}$'s encoder. This similarity, which is not model-aware, is then used for clustering.

Figure 2: [Best viewed in color] Comparison of our approach $(\mathrm{A}^2\mathrm{L})$ with baseline approaches on different tasks using different active learning strategies. $1^{st}$ row: POS (error bound at convergence: $\pm 0.05$), $2^{nd}$ row: NER ($\pm 0.08$), $3^{rd}$ row: SEMTR ($\pm 0.09$), $4^{th}$ row: NMT ($\pm 0.04$). In the first three rows, from left to right, the three columns represent the BALD, Entropy, and Margin AL strategies. The $4^{th}$ row represents AL strategies for NMT, from left to right: LC (Least Confidence), CS (Coverage Sampling), ADS (Attention Distraction Sampling). Legend: {100% data: full-data performance, $\mathrm{A}^2\mathrm{L}$ (MA Siamese): Model-Aware Siamese, $\mathrm{A}^2\mathrm{L}$ (Int Model): Integrated Clustering Model, Cosine: cosine similarity, None: active learning strategy without the clustering step, Random: random split (no active learning applied)}. See Section 4.4 for more details about the baselines. All results were obtained by averaging over 5 random splits. These plots have been magnified to highlight the regions of interest; for the original plots, refer to Fig. 8 in the Appendix.

Figure 3: [Best viewed in color] Ablation studies on the POS task using different active learning strategies. From left to right, the three columns represent the BALD, Entropy, and Margin-based AL strategies. Legend: {100% data: full-data performance, $\mathrm{A}^2\mathrm{L}$ (MA Siamese): Model-Aware Siamese, $\mathrm{A}^2\mathrm{L}$ (Int Model): Integrated Clustering Model, Iso Siamese: model-isolated Siamese, InferSent: cosine similarity based on InferSent encodings}. See Figure 7 in the Appendix for experiments on other tasks. All results were obtained by averaging over 5 splits.

Figure 4: Comparison between various components of our approach in terms of the time required by them per training epoch for NMT. LSTM (Base) encoder-decoder translation model ( $\approx$ 10 hours), Model Aware (MA) Siamese ( $\approx$ 3 minutes) and Integrated Clustering (Int) Model ( $\approx$ 10 seconds). It can be observed that $\mathrm{A}^2\mathrm{L}$ adds a negligible overhead to the overall training time ( $\approx$ 0.45% of the time taken by the base model).
# 5 Results
Figure 2 compares the performance of our methods with baselines. It shows the test-set metric on the $y$ -axis against the percentage of training data used on the $x$ -axis for all tasks. See Figures 7 and 8 in the Appendix for additional results.
1. As shown in Figure 2, our approach consistently outperforms all baselines on all tasks. Note that one should observe how quickly performance increases with the addition of training data (and not just the final performance), as we are evaluating the effect of adding new examples. Our ablation studies in Figure 3 show the utility of using model-aware similarity.
2. In sequence tagging, we match the performance obtained by training on the full dataset while using only a fraction of the data (3-25% less data compared to state-of-the-art AL strategies; Table 2). On a large NMT dataset (Europarl), $\mathrm{A}^2\mathrm{L}$ requires $\approx 4300$ fewer sentences than the Least Confidence AL strategy to reach a BLEU score of 12.
3. While comparing different AL strategies is not our goal, Figure 2 also demonstrates that, using the proposed $\mathrm{A}^2\mathrm{L}$ framework, one can achieve performance comparable to a complex AL strategy like BALD with simple AL strategies like margin and entropy.
4. Additionally, Figure 1 shows that, for one step of data selection: (i) the proposed MA Siamese model adds minimal overhead to the overall AL pipeline, taking less than 5 additional seconds ($\approx \frac{1}{12}$ of the time taken by the AL strategy); (ii) by approximating the clustering step, the Integrated Clustering (Int) Model further reduces this overhead to 2 seconds. However, owing to this approximation, MA Siamese performs slightly better than the Int Model (Fig. 3). A comparison of training times for the various stages of the $\mathrm{A}^2\mathrm{L}$ pipeline is provided in Figure 4.
We wish to state that our approach should be evaluated not in terms of the gain in F1 score but in terms of the reduction in the data required to achieve the same score (3-25% on multiple datasets). More importantly, this improvement comes at a negligible computational overhead. The reported improvements are not relative to any baseline; they are absolute values and are very significant in the context of similar performance improvements reported in the literature. In Figure 5, we provide a qualitative case study that demonstrates the problem of redundancy.

Figure 5: [Best viewed in color] Qualitative case study conveying the notions of redundancy and model-aware similarity. We compare the examples grouped into the same cluster using (i) MA similarity scores [left] and (ii) cosine similarity scores on InferSent embeddings (the InferSent baseline described in Section 4.4) [right]. As expected, when cosine similarity is used, sentences with roughly similar content are assigned to the same cluster. However, when model-aware similarity is used, in addition to having similar content, the sentences also have a similar tagging structure. It is sensible to eliminate sentences having similar tagging structures, as they are redundant. [Dataset: CoNLL 2003, AL strategy: BALD, Data usage: $10\%$ of the total data]

<table><tr><td>No. of examples selected by AL Strategy</td><td>Spectral (examples processed per second)</td><td>Integrated (examples processed per second)</td></tr><tr><td>5000</td><td>491.64</td><td>2702.70</td></tr><tr><td>10000</td><td>248.08</td><td>2557.54</td></tr><tr><td>15000</td><td>163.10</td><td>2508.36</td></tr><tr><td>20000</td><td>122.89</td><td>2493.76</td></tr></table>

Table 3: Number of samples processed per second by the Spectral Clustering (MA Siamese) method compared to the Integrated Clustering (Int Model) method.

# 6 Conclusion

This paper shows that one can further reduce the data requirements of Active Learning strategies by proposing a new method, $\mathrm{A}^2\mathrm{L}$, which uses a model-aware similarity computation. We empirically demonstrated that our proposed approaches consistently perform well across many tasks and AL strategies. We compared the performance of our approach with strong baselines to ensure that the role of each component is properly understood.

# Acknowledgement

This work was funded by the British Telecom India Research Center project on Advanced Chatbot.
# References

Pranjal Awasthi, Maria Florina Balcan, and Philip M. Long. 2017. The power of localization for efficiently learning linear separators with noise. J. ACM, 63(6):50:1-50:27.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015.

Michael Bloodgood and Chris Callison-Burch. 2010. Bucking the trend: Large-scale cost-focused active learning for statistical machine translation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 854-864, USA. Association for Computational Linguistics.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. 2015. Weight uncertainty in neural networks. In Proceedings of the 32nd International Conference on Machine Learning, ICML'15, pages 1613-1622. JMLR.org.

Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. 1994. Signature verification using a "siamese" time delay neural network. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems, pages 737-744. Morgan-Kaufmann.

Robert Burchfield. 1985. Frequency Analysis of English Usage: Lexicon and Grammar. By W. Nelson Francis and Henry Kucera with the assistance of Andrew W. Mackie. Boston: Houghton Mifflin. 1982. x + 561. Journal of English Linguistics, 18(1):64-70.

Yukun Chen, Thomas A. Lasko, Qiaozhu Mei, Joshua C. Denny, and Hua Xu. 2015. A study of active learning methods for named entity recognition in clinical text. Journal of Biomedical Informatics, 58(C):11-18.

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar. Association for Computational Linguistics.

Vincent Claveau and Ewa Kijak. 2017. Strategies to select examples for active learning with conditional random fields. In CICLing 2017 - 18th International Conference on Computational Linguistics and Intelligent Text Processing, pages 1-14.

Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670-680.

Sanjoy Dasgupta, Adam Tauman Kalai, and Claire Monteleoni. 2009. Analysis of perceptron-based active learning. J. Mach. Learn. Res., 10:281-299.

Rosa Figueroa, Qing Zeng-Treitler, Long Ngo, Sergey Goryachev, and Eduardo Wiechmann. 2012. Active learning for clinical text classification: Is it better than random sampling? Journal of the American Medical Informatics Association (JAMIA), 19:809-816.

Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby. 1997. Selective sampling using the query by committee algorithm. Mach. Learn., 28(2-3):133-168.

Yarin Gal and Zoubin Ghahramani. 2016a. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning, ICML'16, pages 1050-1059. JMLR.org.

Yarin Gal and Zoubin Ghahramani. 2016b. A theoretically grounded application of dropout in recurrent neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1019-1027. Curran Associates, Inc.

Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 415-423.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, and Mate Lengyel. 2011. Bayesian active learning for classification and preference learning. CoRR, abs/1112.5745.

W. John Hutchins. 2004. The Georgetown-IBM experiment demonstrated in January 1954. In Machine Translation: From Real Users to Research, pages 102-114, Berlin, Heidelberg. Springer Berlin Heidelberg.

Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Conference Proceedings: the Tenth Machine Translation Summit, pages 79-86, Phuket, Thailand. AAMT.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270.

Shari Landes, Claudia Leacock, and Christiane Fellbaum. 1998. Building semantic concordances. WordNet: An Electronic Lexical Database, pages 199-216.

David D. Lewis and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Proceedings of the Eleventh International Conference on Machine Learning, pages 148-156, San Francisco (CA). Morgan Kaufmann.

Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning to actively learn neural machine translation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 334-344, Brussels, Belgium. Association for Computational Linguistics.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.

Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014).

Héctor Martínez Alonso and Barbara Plank. 2017. When is multitask learning effective? Semantic sequence prediction under varying data conditions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 44-53. Association for Computational Linguistics.

Andrew McCallum and Kamal Nigam. 1998. Employing EM and pool-based active learning for text classification. In Proceedings of the Fifteenth International Conference on Machine Learning, ICML '98, pages 350-358.

Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 2786-2792.

David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Linguisticae Investigationes, 30(1):3-26.

Laurent Nepveu, Guy Lapalme, Philippe Langlais, and George Foster. 2004. Adaptive language and translation models for interactive machine translation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 190-197, Barcelona, Spain. Association for Computational Linguistics.

Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. 2002. On spectral clustering: Analysis and an algorithm. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 849-856. Curran Associates, Inc.

Daniel Ortiz-Martínez. 2016. Online learning for statistical machine translation. Computational Linguistics, 42(1):121-161.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.

Álvaro Peris and Francisco Casacuberta. 2018. Active learning for interactive neural machine translation of data streams. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 151-160, Brussels, Belgium. Association for Computational Linguistics.

Matthew S. Ryan and Graham R. Nudd. 1993. The Viterbi algorithm. Technical report, University of Warwick.

Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Conference on Computational Natural Language Learning (CoNLL).

Fei Sha and Lawrence K. Saul. 2007. Large margin hidden markov models for automatic speech recognition. In B. Scholkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1249-1256. Curran Associates, Inc.

Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, and Animashree Anandkumar. 2018. Deep active learning for named entity recognition. In 6th International Conference on Learning Representations.

Aditya Siddhant and Zachary C. Lipton. 2018. Deep bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2904-2909.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929-1958.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 3104-3112, Cambridge, MA, USA. MIT Press.

Cynthia A. Thompson, Mary Elaine Califf, and Raymond J. Mooney. 1999. Active learning for natural language parsing and information extraction. In Proceedings of the Sixteenth International Conference on Machine Learning, ICML '99, pages 406-414, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop.

Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76-85, Berlin, Germany. Association for Computational Linguistics.
# Active$^2$ Learning: Actively reducing redundancies in Active Learning methods for Sequence Tagging and Machine Translation: Appendix

# A Implementation Details

Figure 6: Modeling similarity using the Siamese encoder (enclosed by dotted lines). A pair of sentences from the AUX dataset is fed to the pretrained ST/NMT model. The output of the encoder is then passed to the Siamese encoder. The last hidden state of the Siamese encoder, corresponding to the sequence length of the sentence, is used to assign a similarity score.

We use two different sequence tagging architectures: a CNN-BiLSTM-CRF model (CNN for character-level encoding and BiLSTM for word-level encoding) and a BiLSTM-BiLSTM-CRF model (Lample et al., 2016) (BiLSTM for both character-level and word-level encoding). The CNN-BiLSTM-CRF architecture is a lightweight variant of the model proposed by Siddhant and Lipton (2018), having one layer in the CNN encoder with two filters of sizes 2 and 3 followed by a max pool, as opposed to three layers in the original setup. This modification was found to improve the results. We use GloVe embeddings (Pennington et al., 2014) for all datasets. We apply normal dropout in the character encoder instead of the recurrent dropout (Gal and Ghahramani, 2016b) used in the word encoder of the model presented by Siddhant and Lipton (2018), owing to an improvement in performance. For numerical stability we use log probabilities, and thus the threshold for the margin-based AL strategy lies outside the interval [0, 1]. We use the spectral clustering algorithm (Ng et al., 2002) to cluster the sentences chosen by the AL strategy, and we choose two representative examples from each cluster.
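As a sketch of this clustering-and-filtering step, the following NumPy-only code embeds items with the normalized graph Laplacian (in the spirit of the spectral method of Ng et al. (2002)) and keeps a fixed number of representatives per cluster. It is a simplified stand-in for the actual pipeline, and all names are ours:

```python
import numpy as np

def spectral_clusters(S, k, iters=20):
    """Cluster items with symmetric, positive similarity matrix S into k
    groups: embed via the k smallest eigenvectors of the normalized
    Laplacian, then run a few Lloyd (k-means) iterations."""
    d = S.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(S)) - D @ S @ D            # normalized graph Laplacian
    _, vecs = np.linalg.eigh(L)               # eigenvalues in ascending order
    X = vecs[:, :k]
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    # deterministic farthest-point initialization of the k centers
    centers = [X[0]]
    for _ in range(1, k):
        dist = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def pick_representatives(S, labels, per_cluster=2):
    """Keep the `per_cluster` most central items of each cluster (those most
    similar to the rest of their cluster); the rest are treated as redundant."""
    keep = []
    for j in np.unique(labels):
        idx = np.where(labels == j)[0]
        centrality = S[np.ix_(idx, idx)].sum(axis=1)
        keep.extend(idx[np.argsort(centrality)[::-1][:per_cluster]].tolist())
    return sorted(keep)
```

With a model-aware similarity matrix from the Siamese scorer, `pick_representatives` implements the "two representative examples per cluster" rule; swapping in cosine similarities gives the InferSent-style baseline.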

The hyperparameters used for sequence tagging (CoNLL 2003 NER) and NMT are listed in the table below.

<table><tr><td colspan="2">Active Learning strategy</td></tr><tr><td>threshold (Margin)</td><td>15</td></tr><tr><td>threshold (Entropy)</td><td>40</td></tr><tr><td>threshold (BALD)</td><td>0.2</td></tr><tr><td>dropout (BALD)</td><td>0.5</td></tr><tr><td>number of forward passes (BALD)</td><td>51</td></tr><tr><td colspan="2">Sequence tagging model</td></tr><tr><td>CNN filter sizes</td><td>[2,3]</td></tr><tr><td>training batch size</td><td>12</td></tr><tr><td>number of train epochs</td><td>16</td></tr><tr><td>dimension of character embedding</td><td>100</td></tr><tr><td>learning rate (Adam)</td><td>0.005</td></tr><tr><td>learning rate decay</td><td>0.9</td></tr><tr><td colspan="2">Siamese encoder</td></tr><tr><td>training batch size</td><td>48</td></tr><tr><td>number of train epochs</td><td>41</td></tr><tr><td>train/dev split</td><td>0.8</td></tr><tr><td>learning rate (Adam)</td><td>1e-5</td></tr><tr><td>period (of retrain)</td><td>10</td></tr><tr><td colspan="2">Clustering</td></tr><tr><td>Number of clusters</td><td>20</td></tr><tr><td colspan="2">Training</td></tr><tr><td>Batch size</td><td>12</td></tr><tr><td colspan="2">NMT model</td></tr><tr><td>training batch size</td><td>128</td></tr><tr><td>number of train epochs</td><td>20</td></tr><tr><td>dimension of (sub)word embedding</td><td>256</td></tr><tr><td>learning rate (Adam)</td><td>1e-3</td></tr><tr><td colspan="2">Siamese encoder</td></tr><tr><td>training batch size</td><td>1150</td></tr><tr><td>number of train epochs</td><td>25</td></tr><tr><td>dimension of (sub)word embedding</td><td>300</td></tr><tr><td>learning rate (Adam)</td><td>1e-3</td></tr><tr><td>period (of retrain)</td><td>3</td></tr><tr><td colspan="2">Clustering</td></tr><tr><td>Number of clusters</td><td>50</td></tr><tr><td colspan="2">Training</td></tr><tr><td>Batch size</td><td>128</td></tr></table>
Figure 7: [Best viewed in color] Ablation studies on different tasks using different active learning strategies. $1^{st}$ row: NER, $2^{nd}$ row: SEMTR, $3^{rd}$ row: CHUNK, $4^{th}$ row: NMT. In the first three rows, from left to right, the three columns represent the BALD, Entropy, and Margin AL strategies. The $4^{th}$ row represents AL strategies for NMT, from left to right: LC (Least Confidence), CS (Coverage Sampling), ADS (Attention Distraction Sampling). Legend: 100% data = full-data performance; $\mathbf{A}^2\mathbf{L}$ (MA Siamese) = Model Aware Siamese; $\mathbf{A}^2\mathbf{L}$ (Int Model) = Integrated Clustering Model; Iso Siamese = Model-isolated Siamese; InferSent = cosine similarity based on InferSent encodings. See Section 4.5 for more details. All results were obtained by averaging over 5 random splits.
Figure 8: [Best viewed in color] Comparison of our approach (A²L) with baseline approaches on different tasks using different active learning strategies. $1^{st}$ row: POS, $2^{nd}$ row: NER, $3^{rd}$ row: SEMTR, $4^{th}$ row: CHUNK. In each row, from left to right, the three columns represent the BALD, Entropy, and Margin AL strategies. Legend: 100% data = full-data performance; A²L (MA Siamese) = Model Aware Siamese; A²L (Int Model) = Integrated Clustering Model; Cosine = cosine similarity; None = active learning strategy without the clustering step; Random = random split (no active learning applied). See Section 4.4 for more details. All results were obtained by averaging over 5 random splits.

active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a9e369539bd7cbec3d002372b899af7df47960dd23f383586c64cb127f97c85
size 1218626

active2learningactivelyreducingredundanciesinactivelearningmethodsforsequencetaggingandmachinetranslation/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0416ae83b86357af60ec9678c724e3048e5e1dd8d712c612effb52084a447028
size 518900

adaptableandinterpretableneuralmemoryoversymbolicknowledge/16b2aa6a-d5d2-411a-a0cb-1a466d152946_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f98f98a16cb3f50a961ae63eb78b40a63d5fda2c2e87ec6d906e4abf7d7e309b
size 103602

adaptableandinterpretableneuralmemoryoversymbolicknowledge/16b2aa6a-d5d2-411a-a0cb-1a466d152946_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31d2de6cfd1714fb90086dc7bc316e3d69eb78fa74bf296d99453ccd90f8fc30
size 126834

adaptableandinterpretableneuralmemoryoversymbolicknowledge/16b2aa6a-d5d2-411a-a0cb-1a466d152946_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:665a320342a1ab7ab29510c878814e9534b752b626479d9efc7e2a8620d2d705
size 567148

adaptableandinterpretableneuralmemoryoversymbolicknowledge/full.md
ADDED
@@ -0,0 +1,422 @@

# Adaptable and Interpretable Neural Memory Over Symbolic Knowledge

Pat Verga*, Haitian Sun*, Livio Baldini Soares, William W. Cohen

Google Research

{patverga, haitiansun, liviobs, wcohen}@google.com

# Abstract

Past research has demonstrated that large neural language models (LMs) encode surprising amounts of factual information; however, augmenting or modifying this information requires modifying a corpus and retraining, which is computationally expensive. To address this problem, we develop a neural LM that includes an interpretable neuro-symbolic KB in the form of a "fact memory". Each element of the fact memory is formed from a triple of vectors, where each vector corresponds to a KB entity or relation. Our LM improves performance on knowledge-intensive question-answering tasks, sometimes dramatically, including a 27-point increase in one setting of WebQuestionsSP over a state-of-the-art open-book model, despite using only $5\%$ of the parameters. Most interestingly, we demonstrate that the model can be modified, without any re-training, by updating the fact memory.

# 1 Introduction

Neural language models (LMs) (Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2019) that have been pre-trained by self-supervision on large corpora contain rich knowledge about the syntax and semantics of natural language (Tenney et al., 2019), and are the basis of much recent work in NLP. Pre-trained LMs also contain large amounts of factual knowledge about the world (Petroni et al., 2019; Roberts et al., 2020; Brown et al., 2020). However, while large LMs can be coerced to answer factual queries, they still lack many of the properties that knowledge bases (KBs) typically have. In particular, it is difficult to distinguish answers produced by memorizing factual statements in the pre-training corpus from lower-precision answers produced by linguistic generalization (Poerner et al., 2019). It is also difficult to add or remove factual information without retraining the LM, an expensive process<sup>1</sup>. The difficulty of updating knowledge in neural LMs contrasts with symbolic KBs, where it is very easy to add or modify triples, and is a major disadvantage of using a LM "as a KB"—as in many domains (news, product reviews, scientific publications, etc.) the set of known facts changes frequently. Symbolic KBs thus remain practically important (Google, 2012; Dong, 2017), especially for NLP applications where text is hard to automatically process (e.g., scientific, technical, or legal) or tasks rich in information that exists only in structured form (e.g., technical specifications of a new product, where no product page or review text discussing it yet exists).

Motivated by this, past work has sought to combine the benefits of neural LMs with the large, broad-coverage KBs that now exist (Bollacker et al., 2008; Auer et al., 2007; Vrandecic and Krötzsch, 2014). This paper continues this research program with a new knowledge-augmented LM called the Fact Injected Language Model (FILM). FILM is a masked LM, where masks can be filled either from the token vocabulary or an entity vocabulary. The vector representation of each entity in a KB is jointly learned alongside the other parameters of a Transformer LM and stored in a separate entity memory. FILM also includes a fact memory, where each element is derived from a triple of vectors representing a KB entity or relation. Since these triples are defined compositionally from (representations of) entities and relations, they have an interpretable symbolic meaning: e.g., if $\mathbf{e}_{\mathrm{mtv}}$ is the vector representation of the KB entity "Mountain View, CA" and $\mathbf{e}_{\mathrm{google}}$ and $\mathbf{r}_{\mathrm{hq}}$ similarly correspond to "Google Inc" and the relation "headquartered in", these vectors can be used to construct a memory element $f(\mathbf{e}_{\mathrm{google}}, \mathbf{r}_{\mathrm{hq}}, \mathbf{e}_{\mathrm{mtv}})$ for the KB assertion "Google, Inc is headquartered in Mountain View, CA". This means that the fact memory can be easily extended with new facts.

Figure 1: Fact Injected Language Model architecture. The model takes a piece of text (a question during fine-tuning, or arbitrary text during pre-training) and first contextually encodes it with an entity-enriched transformer. FILM uses the contextually encoded MASK token as a query to the fact memory. In this case, the contextual query selects the fact key (Charles Darwin, born_in), which returns the value set {United Kingdom}. (A value set can contain multiple entity objects, as is the case for the key [United Kingdom, has_city].) The returned object representation is incorporated back into the context in order to make the final prediction. Note that the entity representations in the facts (both in keys and values) are shared with the entity memory. The portion within the dashed line follows the procedure from Févry et al. (2020).

In analyses on four benchmark question-answering datasets, we show that FILM improves significantly, and sometimes dramatically, over several strong baselines (e.g., BART (Lewis et al., 2019) and T5 (Raffel et al., 2019)), and this improvement is even larger when train-test overlap is removed. In one setting of WebQuestionsSP, we outperform the next best performing model (RAG (Lewis et al., 2020a)) by 27 points despite using only $5\%$ of the number of parameters.

Most interestingly, we demonstrate that FILM models can be updated without any re-training by modifying the fact memory. Specifically, in §4.1 we show that we can inject new fact memories at inference time, enabling FILM to correctly answer questions about pairs of entities that were never observed during training (either pre-training or fine-tuning). In §4.2 we also evaluate updating the model by inserting contra-positive facts that contradict facts mentioned in the pre-training data, and we show that FILM can correctly answer novel questions in this scenario as well. To summarize, this paper's contributions are:

1. We propose a neural LM for knowledge-intensive question-answering tasks that incorporates a symbolic fact memory.

2. We outperform most baselines on several benchmark open-domain QA datasets, and dramatically so when test-train overlap in the datasets is removed.
3. We show FILM can easily adapt to newly injected and modified facts without retraining.

# 2 Fact Injected Language Model

The Fact Injected Language Model (FILM) (see Figure 1) extends the Transformer (Vaswani et al., 2017) architecture of BERT (Devlin et al., 2019) with additional entity and fact memories. These memories store semantic information which can later be retrieved and incorporated into the representations of the transformer. Similar to the approach in Févry et al. (2020), entity embeddings will (ideally) store information about the textual contexts in which an entity appears and, by inference, the entity's semantic properties. The fact memory encodes triples from a symbolic KB, constructed compositionally from the learned embeddings of the entities that comprise them, and is implemented as a key-value memory used to retrieve entities given their KB properties. This combination results in a neural LM which learns to access information from a symbolic KB.

# 2.1 Definitions

We represent a knowledge base $\mathcal{K}$ as a set of triples $(s,r,o)$, where $s,o\in \mathcal{E}$ are the subject and object entities and $r\in \mathcal{R}$ is the relation; $\mathcal{E}$ and $\mathcal{R}$ are pre-defined vocabularies of entities and relations. A text corpus $\mathcal{C}$ is a collection of paragraphs<sup>2</sup> $\{p_1,\ldots ,p_{|\mathcal{C}|}\}$. Let $\mathcal{M}$ be the set of entity mentions in the corpus $\mathcal{C}$. A mention $m_i$ is encoded as $(e_m,s_m^p,t_m^p)$, indicating that entity $e_m$ is mentioned in paragraph $p$ starting at token position $s_m^p$ and ending at $t_m^p$. We will usually drop the superscript $p$ and use $s_m$ and $t_m$ for brevity.

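The definitions above map directly onto simple data structures (an illustrative sketch; the class and function names are ours, not part of FILM):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    s: str  # subject entity, from the entity vocabulary E
    r: str  # relation, from the relation vocabulary R
    o: str  # object entity, from E

@dataclass(frozen=True)
class Mention:
    entity: str  # e_m: the entity this span refers to
    start: int   # s_m: first token position in the paragraph
    end: int     # t_m: last token position in the paragraph

# A tiny KB with the two facts used as running examples in the text.
kb = {
    Triple("Google Inc", "headquartered_in", "Mountain View, CA"),
    Triple("Charles Darwin", "born_in", "United Kingdom"),
}

def objects_of(kb, s, r):
    """All objects o with (s, r, o) in the KB -- the 'value set' for key (s, r)."""
    return {t.o for t in kb if t.s == s and t.r == r}
```

`objects_of` mirrors the key-value view of the fact memory: the pair $(s, r)$ acts as the key, and the set of objects is the returned value set.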
# 2.2 Input

The input to our model is a piece of text: either a question during fine-tuning (see §A.2.2) or a paragraph during pre-training (see §A.2.1). Pre-training is formulated as a cloze-type question answering (QA) task: given a paragraph $p = \{w_1, \dots, w_{|p|}\}$ with mentions $\{m_1, \dots, m_n\}$, we sample a single mention $m_i$ to act as the cloze answer and replace all tokens of $m_i$ with [MASK] tokens. The entity in $\mathcal{E}$ named by the masked mention is the answer to the cloze question $q$ ('United Kingdom' in the example input of Figure 1). Mentions in the paragraph other than $m_i$ are referred to below as context mentions. In the following sections we describe how our model learns to jointly link context entities (§2.3) and predict answer entities (§2.5).

# 2.3 Entity Memory
|
| 52 |
+
|
| 53 |
+
Our entity memory $\mathbf{E} \in \mathbb{R}^{|\mathcal{E}| \times d_e}$ is a matrix containing a vector for each entity in $\mathcal{E}$ and trained as an entity-masked LM. The model input is a text span containing unlinked entity mentions with known boundaries<sup>3</sup>. Mentions are masked with some probability. Our entity memory follows Entities as Experts (EaE) (Févry et al., 2020), which interleaves standard Transformer (Vaswani et al., 2017) layers with layers that access the entity memory<sup>4</sup>.
Given a piece of text $q = \{w_{1},\dots ,w_{|q|}\}$ , the contextual embedding $\mathbf{h}_i^{(l)}$ is the output at the $i$ -th token of the $l$ -th intermediate Transformer layer. These contextual embeddings are used to compute query vectors that interface with the entity memory.
For each context mention $m_{i} = (e_{m_{i}}, s_{m_{i}}, t_{m_{i}})$ in $q$ , we form a query vector to access the entity memory by concatenating the contextual embeddings for mention $m_{i}$ 's start and end tokens, $\mathbf{h}_{s_{m_i}}^{(l)}$ and $\mathbf{h}_{t_{m_i}}^{(l)}$ , and projecting them into the entity embedding space. We use this query to compute attention weights over the full entity vocabulary and produce an attention-weighted sum of entity embeddings $\mathbf{u}_{m_i}^{(l)}$ . The result is then projected back to the dimension of the $j$ -indexed contextual token embeddings, and added to what would have been the input to the next layer of the Transformer:
$$
\mathbf{h}_{m_i}^{(l)} = \mathbf{W}_e^T \left[ \mathbf{h}_{s_{m_i}}^{(l)}; \mathbf{h}_{t_{m_i}}^{(l)} \right] \tag{1}
$$
$$
\mathbf{u}_{m_i}^{(l)} = \mathrm{softmax}\left(\mathbf{h}_{m_i}^{(l)}, \mathbf{E}\right) \times \mathbf{E} \tag{2}
$$
$$
\tilde{\mathbf{h}}_j^{(l+1)} = \mathbf{h}_j^{(l)} + \mathbf{W}_2^T \mathbf{u}_{m_i}^{(l)}, \quad s_{m_i} < j < t_{m_i} \tag{3}
$$
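The memory access in Eqs. 1-3 can be sketched in numpy. Shapes, initializations, and the application of the update to the whole mention span are illustrative assumptions, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_h, d_e, n_ent = 8, 4, 10              # hidden size, entity dim, |E|
E = rng.normal(size=(n_ent, d_e))       # entity memory
W_e = rng.normal(size=(2 * d_h, d_e))   # Eq. 1 projection
W_2 = rng.normal(size=(d_e, d_h))       # Eq. 3 back-projection

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def entity_memory_access(h, s_m, t_m):
    # Eq. 1: query from the mention's start/end contextual embeddings.
    h_m = np.concatenate([h[s_m], h[t_m]]) @ W_e
    # Eq. 2: attention over the full entity vocabulary, weighted sum of rows of E.
    u_m = softmax(h_m @ E.T) @ E
    # Eq. 3: project back and add to the tokens of the mention
    # (applied to the whole inclusive span here for simplicity).
    h_next = h.copy()
    h_next[s_m:t_m + 1] += u_m @ W_2
    return h_next, h_m, u_m

h = rng.normal(size=(5, d_h))           # a 5-token span
h_next, h_m, u_m = entity_memory_access(h, s_m=1, t_m=2)
```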
After the final transformer layer $T$ , $\mathbf{h}_{m_i}^{(T)}$ is used to predict the context entity $\hat{e}_{m_i}$ and produce a loss against $\mathbb{I}_{e_{m_i}}$ , the one-hot label of entity $e_{m_i}$ . Following Févry et al. (2020), we also supervise the entity access for the intermediate query vector in Eq. 1.
$$
\hat{e}_{m_i} = \operatorname{argmax}_{e_i \in \mathcal{E}} \left(\mathbf{c}_{m_i}^T \mathbf{e}_i\right)
$$
$$
\mathrm{loss}_{\mathrm{ctx}} = \mathrm{cross\_entropy}(\mathrm{softmax}\left(\mathbf{c}_{m_i}, \mathbf{E}\right), \mathbb{I}_{e_{m_i}})
$$
$$
\mathrm{loss}_{\mathrm{ent}} = \mathrm{cross\_entropy}(\mathrm{softmax}\left(\mathbf{h}_{m_i}^{(l)}, \mathbf{E}\right), \mathbb{I}_{e_{m_i}})
$$
# 2.4 Fact Memory
FILM contains a second fact memory, populated by triples from the knowledge base $\mathcal{K}$ , as shown on the right side of Figure 1<sup>5</sup>. The fact memory shares its entity representations with the entity memory embeddings in $\mathbf{E}$ , but each element of the fact memory corresponds to a symbolic substructure, namely a key-value pair $((s,r),\{o_1,\ldots ,o_n\})$ . The key $(s,r)$ is a (subject entity, relation) pair, and the corresponding value $\{o_1,\dots,o_n\}$ is the list of object entities associated with $s$ and $r$ , i.e. $(s,r,o_i)\in \mathcal{K}$ for $i = \{1,\dots,n\}$ . Conceptually, KB triples with the same subject entity and relation are grouped into a single element. We call the subject and relation pair $a_{j} = (s,r)\in A$ a head pair and the list of objects $b_{j} = \{o_{1},\dots,o_{n}\} \in B$ a tail set<sup>6</sup>.
In more detail, we encode a head pair $a_{j} = (s,r)\in A$ by concatenating embeddings for the subject entity and relation, and then projecting them linearly to a new head-pair embedding space. More precisely, let $\mathbf{E}\in \mathbb{R}^{|\mathcal{E}|\times d_e}$ be the entity embeddings trained in §2.3, and $\mathbf{R}\in \mathbb{R}^{|\mathcal{R}|\times d_r}$ be embeddings of relations $\mathcal{R}$ in the knowledge base $\mathcal{K}$ . We encode a head pair $a$ as:
$$
\mathbf{a}_j = \mathbf{W}_a^T \left[ \mathbf{s}; \mathbf{r} \right] \in \mathbb{R}^{d_a}
$$
where $\mathbf{s} \in \mathbf{E}$ and $\mathbf{r} \in \mathbf{R}$ are the embeddings of subject $s$ and relation $r$ , and $\mathbf{W}_a$ is a learned linear transformation matrix. We let $\mathbf{A} \in \mathbb{R}^{|A| \times d_a}$ denote the embedding matrix of all head pairs.
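Building the head-pair key matrix $\mathbf{A}$ can be sketched as follows (toy dimensions and invented entity/relation names; the grouping of triples into head pairs and tail sets follows the text above):

```python
import numpy as np

rng = np.random.default_rng(1)
d_e, d_r, d_a = 4, 3, 5
ent_vocab = {"Q_london": 0, "Q_uk": 1}
rel_vocab = {"capital_of": 0}
E = rng.normal(size=(len(ent_vocab), d_e))   # entity embeddings (shared with §2.3)
R = rng.normal(size=(len(rel_vocab), d_r))   # relation embeddings
W_a = rng.normal(size=(d_e + d_r, d_a))      # projection into head-pair space

# Group triples by (subject, relation) into head pairs with tail sets.
triples = [("Q_london", "capital_of", "Q_uk")]
facts = {}
for s, r, o in triples:
    facts.setdefault((s, r), set()).add(o)
heads = list(facts)

# a_j = W_a^T [s; r] for every head pair; rows of A are the memory keys.
A = np.stack([
    np.concatenate([E[ent_vocab[s]], R[rel_vocab[r]]]) @ W_a
    for (s, r) in heads
])
```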
Let the answer for $q$ be denoted $e_{\mathrm{ans}}$ , and its masked mention $m_{\mathrm{ans}} = (e_{\mathrm{ans}}, s_{\mathrm{ans}}, t_{\mathrm{ans}})$ . For a masked mention $m_{\mathrm{ans}}$ , define a query vector to access the fact memory as:
$$
\mathbf{v}_{m_{\mathrm{ans}}} = \mathbf{W}_f^T \left[ \mathbf{h}_{s_{\mathrm{ans}}}^{(T)}; \mathbf{h}_{t_{\mathrm{ans}}}^{(T)} \right] \tag{4}
$$
where $\mathbf{h}_{s_{\mathrm{ans}}}^{(T)}$ and $\mathbf{h}_{t_{\mathrm{ans}}}^{(T)}$ are the contextual embeddings for the start and end tokens of the mention $m_{\mathrm{ans}}$ and $\mathbf{W}_f$ is the linear transformation matrix into the embedding space of head pairs $\mathbf{A}$ .
Head pairs in $A$ are scored by the query vector $\mathbf{v}_{m_{\mathrm{ans}}}$ and the top $k$ head pairs with the largest inner product are retrieved. This retrieval process on the fact memory is distantly supervised. We define a head pair to be a distantly supervised positive example $a_{\mathrm{ds}} = (s, r)$ for a passage if its subject entity $s$ is named by a context mention $m_i$ and the masked entity $e_{\mathrm{ans}}$ is an element of the corresponding tail set, i.e. $e_{\mathrm{ans}} \in b_{\mathrm{ds}}$ . When no distantly supervised positive example exists for a passage, it is trained to retrieve a special "null" fact comprised of the $s_{\mathrm{null}}$ head entity and $r_{\mathrm{null}}$ relation: i.e. $a_{\mathrm{ds}} = (s_{\mathrm{null}}, r_{\mathrm{null}})$ and its tail set is empty. This distant supervision is encoded by a loss function:
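The distant-supervision rule above can be sketched directly. This is a toy illustration; the `<s_null>`/`<r_null>` tokens and all data are invented stand-ins for the paper's null fact:

```python
def distant_positive(facts, context_entities, answer):
    """Return a distantly supervised positive head pair (s, r), or the null fact.

    facts: dict mapping (s, r) head pairs to tail sets of object entities.
    A head pair is positive if its subject is a context entity and the
    masked answer entity appears in its tail set.
    """
    for (s, r), tail in facts.items():
        if s in context_entities and answer in tail:
            return (s, r)
    return ("<s_null>", "<r_null>")

facts = {("Q_london", "capital_of"): {"Q_uk"}}
a_ds_hit = distant_positive(facts, {"Q_london"}, "Q_uk")
a_ds_miss = distant_positive(facts, {"Q_france"}, "Q_uk")
```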
$$
\begin{array}{c}
\mathrm{TOP}_k(\mathbf{v}_{m_{\mathrm{ans}}}, \mathbf{A}) = \operatorname{argmax}_{k,\, j \in \{1, \dots, |A|\}} \mathbf{a}_j^T \mathbf{v}_{m_{\mathrm{ans}}} \\
\mathrm{loss}_{\mathrm{fact}} = \mathrm{cross\_entropy}(\mathrm{softmax}(\mathbf{v}_{m_{\mathrm{ans}}}, \mathbf{A}), \mathbb{I}_{a_{\mathrm{ds}}})
\end{array}
$$
The result of this query is that the tail sets associated with the top $k$ scored head pairs, i.e. $\{b_j|j\in \mathrm{TOP}_k(\mathbf{v},\mathbf{A})\}$ , are retrieved from the fact memory.
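The top-$k$ retrieval step can be sketched with a brute-force inner-product search (toy numbers; whether the real system uses exact or approximate search is not specified here):

```python
import numpy as np

def top_k_heads(v, A, k):
    """Return indices of the k head-pair keys with largest inner product a_j^T v."""
    scores = A @ v
    idx = np.argsort(-scores)[:k]   # k largest inner products
    return idx, scores

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
v = np.array([1.0, 0.2])
idx, scores = top_k_heads(v, A, k=2)
# The retrieved tail sets would then be {b_j : j in idx}.
```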
# 2.5 Integrating Knowledge and Context
Next, tail sets retrieved from the fact memory are aggregated. Recall that a tail set $b_{j}$ returned from the fact memory is the set of entities $\{o_1,\dots,o_n\}$ s.t. $(s,r,o_i)\in \mathcal{K}$ for $i\in \{1,\ldots ,n\}$ , with the associated head pair $a_{j} = (s,r)$ . Let $\mathbf{o}_i\in \mathbf{E}$ be the embedding of entity $o_i$ . We encode the returned tail set as a weighted centroid $\mathbf{b}_j$ of the embeddings of the entities in $b_{j}$ :
$$
\mathbf{b}_j = \sum_{o_i \in b_j} \alpha_i \mathbf{o}_i \in \mathbb{R}^{d_e}
$$
where $\alpha_{i}$ is a context-dependent weight of the object entity $o_i$ . To compute the weights $\alpha_{i}$ , we use a process similar to Eq. 4: we compute a second query vector $\mathbf{z}_{m_{\mathrm{ans}}}$ to score the entities inside the tail set $b_{j}$ , and the weights $\alpha_{i}$ are the softmax of the inner products between the query vector $\mathbf{z}_{m_{\mathrm{ans}}}$ and the embeddings of entities in the tail set $b_{j}$ .
$$
\mathbf{z}_{m_{\mathrm{ans}}} = \mathbf{W}_b^T \left[ \mathbf{h}_{s_{\mathrm{ans}}}^{(T)}; \mathbf{h}_{t_{\mathrm{ans}}}^{(T)} \right] \tag{5}
$$
$$
\alpha_i = \frac{\exp\left(\mathbf{o}_i^T \mathbf{z}_{m_{\mathrm{ans}}}\right)}{\sum_{o_l \in b_j} \exp\left(\mathbf{o}_l^T \mathbf{z}_{m_{\mathrm{ans}}}\right)} \tag{6}
$$
where $\mathbf{W}_b$ is a transformation matrix distinct from $\mathbf{W}_e$ in Eq. 1 and $\mathbf{W}_f$ in Eq. 4. The top $k$ tail sets $\mathbf{b}_j$ are further aggregated using weights $\beta_j$ , which are the softmax of the retrieval (inner product) scores of the top $k$ head pairs $a_j$ . This leads to a single vector $\mathbf{f}_{m_{\mathrm{ans}}}$ that we call the knowledge embedding for the masked mention $m_{\mathrm{ans}}$ .
$$
\mathbf{f}_{m_{\mathrm{ans}}} = \sum_{j \in \mathrm{TOP}_k(\mathbf{v}_{m_{\mathrm{ans}}}, \mathbf{A})} \beta_j \mathbf{b}_j \tag{7}
$$
$$
\beta_j = \frac{\exp\left(\mathbf{a}_j^T \mathbf{v}_{m_{\mathrm{ans}}}\right)}{\sum_{t \in \mathrm{TOP}_k\left(\mathbf{v}_{m_{\mathrm{ans}}}, \mathbf{A}\right)} \exp\left(\mathbf{a}_t^T \mathbf{v}_{m_{\mathrm{ans}}}\right)} \tag{8}
$$
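The aggregation in Eqs. 5-8 can be sketched in numpy (toy embeddings; `tail_sets` and `head_scores` stand in for the outputs of the top-$k$ retrieval described in §2.4):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def knowledge_embedding(E, tail_sets, head_scores, z):
    """Aggregate retrieved tail sets into the knowledge embedding f.

    E: entity embedding matrix; tail_sets: list of entity-id lists for the
    top-k head pairs; head_scores: inner products a_j^T v for those heads;
    z: the second query vector from Eq. 5.
    """
    beta = softmax(np.asarray(head_scores))      # Eq. 8
    centroids = []
    for tail in tail_sets:
        O = E[np.asarray(tail)]                  # embeddings of o_i in b_j
        alpha = softmax(O @ z)                   # Eq. 6
        centroids.append(alpha @ O)              # weighted centroid b_j
    return beta @ np.stack(centroids)            # Eq. 7

rng = np.random.default_rng(2)
E = rng.normal(size=(6, 4))
f = knowledge_embedding(E, tail_sets=[[0, 1], [2]],
                        head_scores=[2.0, 1.0], z=rng.normal(size=4))
```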
Intuitively, $\mathbf{f}_{m_{\mathrm{ans}}}$ is the result of retrieving a set of entities from the fact memory. The last step is to integrate this retrieved set into the Transformer's contextual embeddings. Of course, KBs are often incomplete, and especially during pre-training, it might be necessary for the model to ignore the result of retrieval if no suitable triple appears in the KB. To model this, the final step in the integration process is to construct an integrated query $\mathbf{q}_{m_{\mathrm{ans}}}$ with a learnable mixing weight $\lambda$ . Algorithmically, $\lambda$ is computed as the probability of retrieving the special "null" head $a_{\mathrm{null}}$ from the fact memory, i.e. the model's estimate that no oracle head pair exists in the knowledge base. $\mathbf{q}_{m_{\mathrm{ans}}}$ is used to predict the masked entity.
$$
\mathbf{q}_{m_{\mathrm{ans}}} = \lambda \cdot \mathbf{c}_{m_{\mathrm{ans}}} + (1 - \lambda) \cdot \mathbf{f}_{m_{\mathrm{ans}}}, \quad \lambda = P(a_{\mathrm{null}})
$$
$$
\hat{e}_{\mathrm{ans}} = \operatorname{argmax}_{e_i \in \mathcal{E}} (\mathbf{q}_{m_{\mathrm{ans}}}^T \mathbf{e}_i)
$$
$$
\mathrm{loss}_{\mathrm{ans}} = \mathrm{cross\_entropy}(\mathrm{softmax}(\mathbf{q}_{m_{\mathrm{ans}}}, \mathbf{E}), \mathbb{I}_{e_{\mathrm{ans}}})
$$
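The $\lambda$-mixing and final answer prediction can be sketched as follows (toy vectors; in the model, `p_null` is the retrieval probability of the null head, not a fixed constant):

```python
import numpy as np

def predict_answer(c, f, E, p_null):
    """Mix the contextual query c and knowledge embedding f, then score entities.

    p_null plays the role of lambda = P(a_null): when it is high, the model
    falls back on the contextual query and ignores the fact retrieval.
    """
    q = p_null * c + (1.0 - p_null) * f
    return int(np.argmax(E @ q)), q

rng = np.random.default_rng(3)
E = rng.normal(size=(5, 4))             # 5 toy entity embeddings
c, f = rng.normal(size=4), rng.normal(size=4)
e_hat, q = predict_answer(c, f, E, p_null=0.3)
```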
<table><tr><td>Model</td><td>P@1</td></tr><tr><td>K-Adapter †</td><td>29.1</td></tr><tr><td>BERT-Large †</td><td>33.9</td></tr><tr><td>BERT-KNN ‡</td><td>38.7</td></tr><tr><td>EaE</td><td>38.6</td></tr><tr><td>FILM</td><td>44.2</td></tr></table>
Table 1: LAMA TREx Precision@1. † copied from Wang et al. (2020a), ‡ copied from Kassner and Schütze (2020)
The final loss is the sum of the individual losses (see §A.2.1 and §A.2.2 for additional details):
$$
\mathrm{loss}_{\mathrm{final}} = \mathrm{loss}_{\mathrm{ent}} + \mathrm{loss}_{\mathrm{ctx}} + \mathrm{loss}_{\mathrm{fact}} + \mathrm{loss}_{\mathrm{ans}}
$$
# 3 Experiments
The primary focus of this work is investigating the incorporation of new symbolic knowledge: injecting new facts without retraining (§4.1) and updating stale facts (§4.2). However, we first validate the efficacy of our model on standard splits of widely used knowledge-intensive benchmarks against many state-of-the-art systems (§3.3), as well as on two subsets of these benchmarks restricted to examples answerable with Wikidata (§3.4) and examples filtered for train/test overlap (§3.5).
# 3.1 Data
We evaluate on four knowledge intensive tasks<sup>7</sup>.
WebQuestionsSP is an open-domain Question Answering dataset containing 4737 natural language questions linked to corresponding Freebase entities and relations (Yih et al., 2015), derived from WebQuestions (Berant et al., 2013).
LAMA TREx is a set of fact-related cloze questions. Since we are interested in entity prediction models, we restrict our LAMA investigations to TREx, which has answers linked to Wikidata.
TriviaQA (open) contains questions scraped from quiz-league websites (Joshi et al., 2017). We use the open splits following Lee et al. (2019).
FreebaseQA is an Open-domain QA dataset derived from TriviaQA and other trivia resources (See Jiang et al. (2019) for full details). Every answer can be resolved to at least one Freebase entity and each question contains at least one entity.
# 3.2 Baselines
T5 (Raffel et al., 2019) and BART (Lewis et al., 2019) are large text-to-text transformers.
Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) is a two stage retrieve and read model.
Retrieval Augmented Generation (RAG) (Lewis et al., 2020a) and Fusion in Decoder (FID) (Izacard and Grave, 2020) use DPR retrieval, followed by generative decoders based on BART and T5 respectively. FID is the current state-of-the-art on the open domain setting of TriviaQA.
K-Adapter (Wang et al., 2020a) and BERT-KNN (Kassner and Schütze, 2020) are recent BERT extensions that perform at or near state-of-the-art on the LAMA benchmark.
Entities-as-Experts (EaE) (Févry et al., 2020) is discussed in §2.3. Our EaE models are trained using the same hyperparameters and optimization settings as FILM.
# 3.2.1 Open vs Closed Book models
Generally, open book models refer to 'retrieve and read' pipelines (Chen et al., 2017) which, given a query, 1) retrieve relevant passages from a corpus, 2) separately re-encode the passages conditioned on the question and then 3) produce an answer. Conversely, closed book models answer questions directly from their parameters without additional processing of source materials. We consider FILM and EaE closed-book models as they do not retrieve and re-encode any source text, and instead attend to parameterized query-independent memories.
# 3.3 Results in Conventional Settings
LAMA TREx. In Table 1, we see that FILM outperforms several recently proposed models on the LAMA TREx task, beating the next best performing model, BERT-KNN, by 5.5 points.

Question-Answering. In Table 2, we compare FILM to five closed-book and three open-book QA models on WebQuestionsSP and TriviaQA. The columns denoted Full Dataset-Total show results for the standard evaluation. For WebQuestionsSP, despite using far fewer parameters (see Table 3 and §A.3 for details), FILM outperforms all other models, including the top open-book model RAG. On TriviaQA, FILM outperforms all other closed-book models, though the open-book models are substantially more accurate on this task, likely because of the enormous size of the models and their access to all of Wikipedia, which contains all (or nearly all) of the answers in TriviaQA.
# 3.4 Results on KB-Answerable Questions
WebQuestionsSP (and similarly FreebaseQA, discussed in §4) was constructed such that all questions are answerable using the Freebase KB, which was last updated in 2016. Because our pretraining corpus is derived from larger and more recent versions of Wikipedia, we elected to use a KB constructed from Wikidata. Many entities in Freebase are unmappable to the more recent Wikidata KB, which means that some questions are no longer answerable using the KB. Because of this, we created reduced versions of these datasets which are Wikidata answerable, i.e., containing only questions answerable by triples from our Wikidata-based KB. The model should learn to rely on the KB to answer the questions. We do the same for TriviaQA.<sup>8</sup>

<table><tr><td rowspan="3"></td><td rowspan="3">Model</td><td colspan="4">WebQuestionsSP</td><td colspan="4">TriviaQA</td></tr><tr><td colspan="2">Full Dataset</td><td colspan="2">Wikidata Answer</td><td colspan="2">Full Dataset</td><td colspan="2">Wikidata Answer</td></tr><tr><td>Total</td><td>No Overlap</td><td>Total</td><td>No Overlap</td><td>Total</td><td>No Overlap</td><td>Total</td><td>No Overlap</td></tr><tr><td rowspan="4">Closed-book</td><td>FILM</td><td>54.7</td><td>36.4</td><td>78.1</td><td>72.2</td><td>29.1</td><td>15.6</td><td>37.3</td><td>28.4</td></tr><tr><td>EaE</td><td>47.4</td><td>25.1</td><td>62.4</td><td>42.9</td><td>19.0</td><td>9.1</td><td>24.4</td><td>17.1</td></tr><tr><td>T5-11B</td><td>49.7</td><td>31.8</td><td>61.0</td><td>48.5</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BART-Large</td><td>30.4</td><td>5.6</td><td>36.7</td><td>8.3</td><td>26.7</td><td>0.8</td><td>30.6</td><td>1.0</td></tr><tr><td rowspan="3">Open-Book</td><td>RAG</td><td>50.1</td><td>30.7</td><td>62.5</td><td>45.1</td><td>56.8</td><td>29.2</td><td>64.9</td><td>45.2</td></tr><tr><td>DPR</td><td>48.6</td><td>34.1</td><td>56.9</td><td>45.1</td><td>57.9</td><td>31.6</td><td>66.3</td><td>48.8</td></tr><tr><td>FID</td><td>-</td><td>-</td><td>-</td><td>-</td><td>67.6</td><td>42.8</td><td>76.5</td><td>64.5</td></tr><tr><td></td><td>EmQL†</td><td>75.5</td><td>-</td><td>74.6</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>

Table 2: Open Domain QA Results. Columns denoted Full Dataset-Total are conventional splits discussed in §3.3, Wikidata Answer are answerable using Wikidata (§3.4), and No Overlap removes train-test overlap (§3.5). Highest closed-book and open-book numbers are bolded. Other than FILM and EaE, all results are derived from the prediction files used in Lewis et al. (2020b), including the nearest neighbor (NN) baselines. † is a dataset-specific graph reasoning model and the state-of-the-art on WebQuestionsSP.

<table><tr><td>Model</td><td>B</td><td>M</td><td>T</td></tr><tr><td>FILM</td><td>0.11</td><td>0.72</td><td>0.83</td></tr><tr><td>EaE</td><td>0.11</td><td>0.26</td><td>0.37</td></tr><tr><td>BERT-L</td><td>0.35</td><td>0</td><td>0.35</td></tr><tr><td>BART-L</td><td>0.39</td><td>0</td><td>0.39</td></tr><tr><td>T5-11B</td><td>11</td><td>0</td><td>11</td></tr><tr><td>DPR</td><td>0.11</td><td>16</td><td>16.11</td></tr><tr><td>RAG</td><td>0.39</td><td>16</td><td>16.39</td></tr><tr><td>FID</td><td>0.77</td><td>16</td><td>16.77</td></tr></table>

Table 3: Model Parameters. Approximate billions of parameters for each model's (B)ase, (M)emories and (T)otal. Excludes token embeddings.
As seen in Table 2 in the Wikidata Answer-Total column, FILM does much better on Wikidata-answerable questions on WebQuestionsSP. EmQL (Sun et al., 2020), the state-of-the-art dataset-specific model, gets $75.5\%$ accuracy on the full dataset. Not surprisingly, this is because EmQL operates over the Freebase knowledge base, giving it full upper-bound recall. However, when we restrict to Wikidata-answerable questions, thus giving both EmQL and FILM potential for full recall, FILM outperforms EmQL by 3.5 points and the next best model (RAG) by over 15 points.
# 3.5 Train-Test Overlap
We are interested in the ability of models to use external knowledge to answer questions, rather than learning to recognize paraphrases of semantically identical questions. Unfortunately, analysis showed that many of the test answers also appear as answers to some training-set question: this is the case for $57.5\%$ of the answers in WebQuestionsSP and $75.0\%$ for FreebaseQA. This raises the possibility that some of the performance can be attributed to simply memorizing specific question/answer pairs, perhaps in addition to recognizing paraphrases of the question from its pretraining data.
Overlap in fine-tuning train/test splits was concurrently observed by Lewis et al. (2020b), who created human-verified filtered splits for TriviaQA and WebQuestions. We evaluate our models on those splits and report results in Table 2 in the "No Overlap" columns. We see that the gap between FILM and the next best performing model, RAG, increases from 4.6 to 5.7 points on WebQuestionsSP. On TriviaQA, FILM is still able to answer many questions correctly after overlap is removed. In contrast, the majority of closed-book models such as BART get less than $1\%$ of answers correct.
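The overlap statistic discussed above can be sketched as follows. This is a toy illustration with invented QA pairs; the actual filtered splits come from Lewis et al. (2020b):

```python
def answer_overlap(train, test):
    """Fraction of test answers that also appear as an answer in the train split."""
    train_answers = {a for _, a in train}
    overlapped = [(q, a) for q, a in test if a in train_answers]
    return len(overlapped) / len(test)

train = [("who wrote hamlet?", "shakespeare"), ("capital of uk?", "london")]
test = [("author of macbeth?", "shakespeare"), ("capital of france?", "paris")]
rate = answer_overlap(train, test)
```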
# 3.6 Filtering to Avoid Pretrain, Finetune, and Test Overlap
The filtering procedure from Lewis et al. (2020b) addresses finetuning train/test overlap but does not account for overlap with the pretraining data. To investigate this further, we looked at FreebaseQA and WebQuestionsSP, which both contain entity-linked questions and answers. We first perform a similar procedure to Lewis et al. (2020b) and discard questions in the fine-tuning training data whose answers overlap with answers to questions in the dev and test data. We end up with 9144/2308/3996 examples (train/dev/test) in FreebaseQA and 1348/151/1639 examples in WebQuestionsSP. This setting is referred to as the Fine-tune column in Table 4, which shows the effects of different filterings of the data.
Next, we want to ensure that the model is unable to simply memorize paraphrases of question-answer pairs that it observed in the text, by removing all overlap between the pretraining data and finetuning test data. For every question-answer entity pair in our finetuning dataset (coming from any split), we filter every example from our Wikipedia pretraining corpus in which that pair of entities co-occurs. Additionally, we filter every fact from our fact memory containing any of these entity pairs. Results for this setting are in the column labeled Pretrain. The All column combines both pretrain and fine-tune filtering. We see that the models perform substantially worse when these filterings are applied and they are forced to reason across multiple examples and, in the case of FILM, the fact memory. Finally, the column denoted None has no filtering and is the same as the Full Dataset.
# 4 Modifying the Knowledge Base
Because our model defines facts symbolically, it can in principle reason over new facts injected into its memory, without retraining any parameters of the model. Since existing datasets do not directly test this capability, we elected to construct variants of FreebaseQA and WebQuestionsSP where we could simulate asking questions that are answerable only from newly injected KB facts.
The approach we used was to (1) identify pairs of entities that occur in both a question and answer of some test example; (2) filter out such pairs from the KB as well as all pre-training and fine-tuning data; and (3) test the system trained on this filtered data, and then manually updated by injecting facts about those entity pairs. This filtering procedure is reminiscent of that used by Lewis et al. (2020b), but also addresses pretraining / test-set overlap.
# 4.1 Injecting New Facts to Update Memory
We evaluate EaE and FILM given full knowledge (the original setting); given filtered knowledge; and given filtered knowledge followed by injecting test-question-related facts into the KB. The gap between the filtered knowledge setting and the injected knowledge setting indicates how well the model incorporates newly introduced facts.
In more detail, we first perform a similar procedure to Lewis et al. (2020b) and discard questions in the fine-tuning training data that contain answers which overlap with answers to questions in the dev and test data. We end up with 9144/2308/3996 data (train/dev/test) in FreebaseQA and 1348/151/1639 data in WebQuestionsSP. Next, to ensure that the model will be unable to memorize paraphrases of question-answer pairs that it observed in the pretraining text, we remove all overlap between the pretraining data and fine-tuning test data: specifically, for every question-answer entity pair in our fine-tuning dataset (from any split), we filter every example from our Wikipedia pretraining corpus in which that pair of entities co-occur. Additionally, we filter every fact from our fact memory containing any of these entity pairs.
In these sections we compare against EaE for two reasons: 1) we are specifically looking at closed-book, open-domain, entity-based QA, and EaE is shown to be at or near state-of-the-art for that task (Févry et al., 2020); 2) most importantly, we want to precisely control for memorization in the training corpus, and therefore did not consider existing unconstrained pre-trained models like T5 (Raffel et al., 2019). For reference, the previous state-of-the-art on FreebaseQA, FOFE (Jiang et al., 2019), scored $37.0\%$ using the original train-test split, while FILM is at $63.3\%$ .
The results are shown in Table 5. In the "Full" column, we pretrain and finetune the FILM model with the full knowledge base and corpus. In the "Filter" setting, facts about the finetuning data are hidden from the model at both pretraining and finetuning time. In this case, the model must fall back to the language model to predict the answer, and as shown in Table 5, the accuracies of FILM and EaE are similar. In the "Inject Facts" setting, facts are hidden at pretraining time but are injected at test time. The results show that FILM can effectively use the newly injected facts to make predictions, obtaining an absolute improvement of $9.3\%$ compared to the "Filter" setting. EaE does not have a natural mechanism for integrating this new information<sup>9</sup>.
<table><tr><td></td><td colspan="4">FreebaseQA</td><td colspan="4">WebQuestionsSP</td></tr><tr><td>Filter Type</td><td>None</td><td>Pretrain</td><td>Fine-tune</td><td>All</td><td>None</td><td>Pretrain</td><td>Fine-tune</td><td>All</td></tr><tr><td>EaE</td><td>53.4</td><td>45.2</td><td>45.8</td><td>28.6</td><td>48.1</td><td>45.4</td><td>30.9</td><td>29.4</td></tr><tr><td>FILM</td><td>63.3</td><td>57.5</td><td>56.5</td><td>48.0</td><td>56.1</td><td>55.4</td><td>40.7</td><td>39.2</td></tr></table>
Table 4: Effects of Different Data Filtering. The column denoted None has no filtering. Pretrain removes all entity pair overlap between the eval datasets (all splits) and the pretraining text and KB. The Fine-tune column removes all entity pair overlap between the eval train and test splits. The All column combines both pretrain and fine-tune filtering.
<table><tr><td></td><td colspan="3">FreebaseQA</td><td colspan="3">WebQuestionsSP</td></tr><tr><td></td><td>Full</td><td>Filter</td><td>Inject</td><td>Full</td><td>Filter</td><td>Inject</td></tr><tr><td>EaE</td><td>45.8</td><td>28.6</td><td>-</td><td>30.9</td><td>29.4</td><td>-</td></tr><tr><td>FILM</td><td>56.5</td><td>38.7</td><td>48.0</td><td>40.7</td><td>32.3</td><td>39.2</td></tr></table>
# 4.2 Updating Stale Memories
One of the main motivations for our model is to provide knowledge representations that can be incrementally updated as the world changes, avoiding stale data. To accomplish this, the model must learn to utilize the fact memory even when those facts have changed such that they are no longer consistent with the data the model was initially trained on. Further, it must do so without any additional training.
To probe this ability, we simulate an extreme version of stale facts where all answers to QA pairs in the FreebaseQA test set are 'updated' with plausible alternatives. For each QA pair, we replace the original answer entity $e_{\mathrm{original}}$ with another entity, $e_{\mathrm{new}}$ , from our vocabulary that has: 1) been used as an object in at least one of the same relation types in which $e_{\mathrm{original}}$ was used as an object, and 2) shares at least three Wikipedia categories with $e_{\mathrm{original}}$ .
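The two replacement criteria above can be sketched as a candidate filter. This is a toy illustration; the relation and category metadata below are invented stand-ins for Wikidata relations and Wikipedia categories:

```python
def candidate_replacements(e_original, object_relations, categories):
    """Entities usable as e_new: share an object-relation type with e_original
    and share at least three categories with it."""
    rels = object_relations[e_original]
    cats = categories[e_original]
    return [e for e in object_relations
            if e != e_original
            and rels & object_relations[e]          # criterion 1: shared relation
            and len(cats & categories[e]) >= 3]     # criterion 2: >= 3 categories

object_relations = {"Q_a": {"capital_of"}, "Q_b": {"capital_of"}, "Q_c": {"spouse"}}
categories = {"Q_a": {"c1", "c2", "c3", "c4"},
              "Q_b": {"c1", "c2", "c3"},
              "Q_c": {"c1", "c2", "c3"}}
cands = candidate_replacements("Q_a", object_relations, categories)
```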
We use the same pretrained models from our earlier experiments and fine-tune on the filtered FreebaseQA train set for 10,000 steps. We then modify the memory of this model without applying any additional training on the new memory. In addition to adding new memories which correspond to our newly created facts, we also must remove the original stale facts that we are updating. We look at two methods for filtering those 'stale facts' from the fact memory.
Basic Filter deletes every modified fact $(e_{\text{question}}, r, e_{\text{original}})$ and replaces it with a new fact $(e_{\text{question}}, r, e_{\text{new}})$ . This is a low-recall filter, as it does not account for all possible related facts. The Strict Filter is a high-recall filter that more aggressively removes information that may conflict with the newly added fact, additionally removing all facts that contain $e_{\text{question}}$ or $e_{\text{original}}$ . This is important for cases such as when a question contains multiple entities, or the linking relation is one-to-many, leading to multiple plausible answers. Together these two settings define rough bounds on the model's ability to perform this task. In Table 6, we see that FILM is able to utilize the modified KB to make the correct prediction for $54.5\%$ of questions in the Basic Filter setting and $70.3\%$ in the Strict Filter setting.
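The two filters can be sketched over a set of $(s, r, o)$ triples (toy data; the real memory stores embedded key-value pairs rather than symbolic tuples):

```python
def basic_filter(kb, e_q, r, e_old, e_new):
    """Replace only the single modified fact."""
    out = {t for t in kb if t != (e_q, r, e_old)}
    out.add((e_q, r, e_new))
    return out

def strict_filter(kb, e_q, r, e_old, e_new):
    """Also drop every fact touching e_q or e_old, then add the new fact."""
    out = {(s, rel, o) for s, rel, o in kb
           if e_q not in (s, o) and e_old not in (s, o)}
    out.add((e_q, r, e_new))
    return out

kb = {("q", "rel", "old"), ("q", "rel2", "x"), ("y", "rel", "old")}
kb_basic = basic_filter(kb, "q", "rel", "old", "new")
kb_strict = strict_filter(kb, "q", "rel", "old", "new")
```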
Table 5: Injecting New Facts. In the Filter setting, the models have access to no direct knowledge about question-answer entity pairs from either the pretraining corpus or KB. In the Inject setting, the pretraining corpus and training KB are still Filtered, but at inference time, new facts are injected into the model's memory, allowing it to recover most of the drop from the Full setting. In the Full setting, the model is exposed to full knowledge. In all cases, we remove the overlap between the finetune train and eval sets.
<table><tr><td>Model</td><td>Basic Filter</td><td>Strict Filter</td></tr><tr><td>FILM</td><td>0.0</td><td>0.0</td></tr><tr><td>+Update Memory</td><td>54.5</td><td>70.3</td></tr></table>
Table 6: Updating Stale Memories. Basic filter removes only facts connecting the original question entity to the answer entity. Strict filter removes all facts containing the original question or answer (not just facts connecting them).
# 5 Related Work
Symbolic KBs have been a core component of AI since the beginning of the field (Newell and Simon, 1956; Newell et al., 1959), and widely available public KBs have been invaluable in research and industry (Bollacker et al., 2008; Auer et al., 2007; Google, 2012; Dong, 2017; Vrandecic and Krötzsch, 2014). In machine learning, a well studied problem is learning KB embeddings (Bordes et al., 2013; Lin et al., 2015; Trouillon et al., 2017;
Dettmers et al., 2018) which enable generalization from known KB triples to novel triples that are plausibly true. KB embeddings can often be improved by incorporating raw text and symbolic KGs into a shared embedding space (Riedel et al., 2013; Verga et al., 2016, 2017), to be jointly reasoned over (Sun et al., 2018, 2019). Many prior neural-symbolic methods have attempted to unify symbolic KBs and neural methods (Pinkas, 1991; de Penning et al., 2011; Laird et al., 2017; Besold et al., 2017). Recently, researchers have explored query languages for embedded KBs that are similar to symbolic KB query languages (Cohen et al., 2017; Hamilton et al., 2018; Ren et al., 2020; Cohen et al., 2020).
Our fact memory builds on this prior work, and is most closely related to the memory used in EmQL (Sun et al., 2020), a KB embedding model that supports a compositional query language. EmQL implements "projection" using neural retrieval over vectorized KB triples. Unlike this work, however, EmQL did not embed its fact memory into a LM that could be finetuned for many NLP tasks; instead, it required implementing a "neural module" within some task-specific architecture. At a more abstract level, the fact memory is a key-value memory (Weston et al., 2014; Miller et al., 2016), a construct used in many past neural models.
It has been shown that sufficiently large LMs trained through self supervision (Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2019; Brown et al., 2020) also encode factual information, motivating work on the extent to which a LM can serve as a KB (Roberts et al., 2020; Petroni et al., 2019; Poerner et al., 2019). Other work has explored techniques to improve the performance of large LMs in answering factual probes, by adding additional supervision in pre-training (Xiong et al., 2019; Wang et al., 2020b) or by adding entity embeddings into an extended LM (Peters et al., 2019; Zhang et al., 2019; Févry et al., 2020).
Our entity memory extends the Entities-as-Experts (EaE) model (Févry et al., 2020). It is both the current state-of-the-art for a number of tasks and simpler to use than most prior models because it does not require external components for entity linking or entity encoding (like (Peters et al., 2019; Zhang et al., 2019; Logan et al., 2019)) and is not restricted to lexical KBs like WordNet and ConceptNet (like (Weissenborn et al., 2017; Chen et al., 2018; Mihaylov and Frank, 2018)).
Our model's use of memory also scales to KBs with millions of entities, whereas prior systems that make use of KB triples have held only a few hundred triples in the model at any point, necessitating a separate heuristic process to retrieve candidate KB triples (Ahn et al., 2016; Henaff et al., 2016; Weissenborn et al., 2017; Chen et al., 2018; Mihaylov and Frank, 2018; Logan et al., 2019).
There have been a few exploratory experiments on modifying the predictions of retrieval-augmented language models by changing the underlying text corpus (Guu et al., 2020; Lewis et al., 2020a). However, text passages are not easily interpretable, making them harder to inspect and modify than a symbolic fact-based memory.
# 6 Conclusion
We presented FILM, a neural LM with an interpretable, symbolically bound fact memory. We demonstrated the effectiveness of this method by outperforming many state-of-the-art methods on four benchmark knowledge-intensive datasets. We used the model's symbolic interface to change the output of the LM by modifying only the non-parametric memories, without any additional training. We showed FILM could incorporate newly injected facts unseen during training. Additionally, we can modify facts such that they contradict the initial pretraining text, and our model is still largely able to answer these questions correctly.
# 7 Ethics and Broader Impacts
All language models learn to exploit correlations in the data they were trained on. As such, they inherit all of the underlying biases within that data (Zhao et al., 2019; Bender et al., 2021). These models require vast amounts of data to train on and therefore tend to rely on internet corpora which have skewed representations of particular groups, cultures, and languages, as well as variable levels of factuality. Our hope is that research into endowing these models with interpretable and modifiable memories will allow us to more readily identify and remedy some of these failures.
# Acknowledgments
We thank the anonymous reviewers for their helpful feedback and Patrick Lewis for providing prediction files. We also thank Thibault Févry, Nicholas FitzGerald, Eunsol Choi, Tom Kwiatkowski and other members of Google Research for discussions and feedback.
# References
Sungjin Ahn, Heeyoul Choi, Tanel Parnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. arXiv preprint arXiv:1608.00318.
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722-735. Springer.
Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big?. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610-623.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544, Seattle, Washington, USA. Association for Computational Linguistics.
Tarek R Besold, Artur d'Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, Kai-Uwe Kühnberger, Luis C Lamb, Daniel Lowd, Priscila Machado Vieira Lima, et al. 2017. Neural-symbolic learning and reasoning: A survey and interpretation. arXiv preprint arXiv:1711.03902.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Advances in neural information processing systems, pages 2787-2795.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2406-2417.
William W Cohen, Haitian Sun, R Alex Hofer, and Matthew Siegler. 2020. Scalable neural methods for reasoning with a symbolic knowledge base. International Conference on Learning Representations.
William W Cohen, Fan Yang, and Kathryn Rivard Mazaitis. 2017. Tensorlog: Deep learning meets probabilistic dbs. arXiv preprint arXiv:1707.05390.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Luna Dong. 2017. Amazon product graph.
Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. 2020. Entities as experts: Sparse memory access with entity supervision. Conference on Empirical Methods in Natural Language Processing.
Google. 2012. Introducing the knowledge graph: things, not strings.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: Retrievalaugmented language model pre-training. arXiv preprint arXiv:2002.08909.
Will Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. 2018. Embedding logical queries on knowledge graphs. In Advances in Neural Information Processing Systems, pages 2026-2037.
Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.
Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.
Kelvin Jiang, Dekun Wu, and Hui Jiang. 2019. Freebaseqa: A new factoid qa data set matching trivia-style question-answer pairs with freebase. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 318-323.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada. Association for Computational Linguistics.
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wentau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906.
Nora Kassner and Hinrich Schütze. 2020. Bertknn: Adding a knn search component to pretrained language models for better qa. arXiv preprint arXiv:2005.00766.
John E Laird, Christian Lebiere, and Paul S Rosenbloom. 2017. A standard model of the mind: Toward a common computational framework across artificial intelligence, cognitive science, neuroscience, and robotics. *Ai Magazine*, 38(4):13-26.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020a. Retrieval-augmented generation for knowledge-intensive nlp tasks. arXiv preprint arXiv:2005.11401.
Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2020b. Question and answer test-train overlap in open-domain question answering datasets. arXiv preprint arXiv:2008.02637.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Twenty-ninth AAAI conference on artificial intelligence.
Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife hillary: Using knowledge graphs for fact-aware language modeling. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Todor Mihaylov and Anette Frank. 2018. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 821-832, Melbourne, Australia. Association for Computational Linguistics.
Alexander Miller, Adam Fisch, Jesse Dodge, Amir Hossein Karimi, Antoine Bordes, and Jason Weston. 2016. Key-value memory networks for directly reading documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1400-1409.
Allen Newell, J. C. Shaw, and Herbert A. Simon. 1959. Report on a general problem-solving program. In Proceedings of the International Conference on Information Processing.
Allen Newell and Herbert Simon. 1956. The logic theory machine-a complex information processing system. IRE Transactions on information theory, 2(3):61-79.
H Leo H de Penning, Artur S d'Avila Garcez, Luis C Lamb, and John-Jules C Meyer. 2011. A neuralsymbolic cognitive agent for online learning and reasoning. In Twenty-Second International Joint Conference on Artificial Intelligence.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.
Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066.
Gadi Pinkas. 1991. Symmetric neural networks and propositional logic satisfiability. *Neural Computation*, 3(2):282-291.
Nina Poerner, Ulli Waltinger, and Hinrich Schütze. 2019. Bert is not a knowledge base (yet): Factual knowledge vs. name-based reasoning in unsupervised qa. arXiv preprint arXiv:1911.03681.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Hongyu Ren, Weihua Hu, and Jure Leskovec. 2020. Query2box: Reasoning over knowledge graphs in vector space using box embeddings. International Conference on Learning Representations.
Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74-84.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910.
Haitian Sun, Andrew O Arnold, Tania Bedrax-Weiss, Fernando Pereira, and William W Cohen. 2020. Guessing what's plausible but remembering what's true: Accurate neural reasoning for question-answering. Advances in neural information processing systems.
Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019. Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380-2390.
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231-4242.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601.
Théo Trouillon, Christopher R Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. 2017. Knowledge graph completion via complex tensor factorization. The Journal of Machine Learning Research, 18(1):4735-4772.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.
Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2016. Multilingual relation extraction using compositional universal schema. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 886-896.
Patrick Verga, Arvind Neelakantan, and Andrew McCallum. 2017. Generalizing to unseen entities and entity pairs with row-less universal schema. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 613-622.
Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020a. K-adapter: Infusing knowledge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808.
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2020b. K-adapter: Infusing knowledge into pre-trained models with adapters.
Dirk Weissenborn, Tomasz Kocisky, and Chris Dyer. 2017. Dynamic integration of background knowledge in neural nlu systems.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916.
Wenhan Xiong, Jingfei Du, William Yang Wang, and Veselin Stoyanov. 2019. Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model. arXiv preprint arXiv:1912.09637.
Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321-1331, Beijing, China. Association for Computational Linguistics.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: Enhanced language representation with informative entities. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629-634, Minneapolis, Minnesota. Association for Computational Linguistics.
# A Appendix
# A.1 Data
# A.1.1 Evaluation Data Statistics
For WebQuestionsSP, we mapped question entities and answer entities to their Wikidata ids. $87.9\%$ of the questions are answerable by at least one answer entity that is mappable to Wikidata. For all questions in FreebaseQA there exists at least one relational path in Freebase between the question entity $e_i$ and the answer $e_{\mathrm{ans}}$ . The path must be either a one-hop path, or a two-hop path passing through a mediator (CVT) node, and is verified by human raters. $72\%$ of the question entities and $80\%$ of the answer entities are mappable to Wikidata, and $91.7\%$ of the questions are answerable by at least one answer entity that is mappable to Wikidata.
<table><tr><td></td><td></td><td>Full Dataset</td><td>Wikidata Answerable</td></tr><tr><td rowspan="3">FreebaseQA</td><td>Train</td><td>20358</td><td>12535</td></tr><tr><td>Dev</td><td>3994</td><td>2464</td></tr><tr><td>Test</td><td>3996</td><td>2440</td></tr><tr><td rowspan="3">WebQuestionsSP</td><td>Train</td><td>2798</td><td>1388</td></tr><tr><td>Dev</td><td>300</td><td>153</td></tr><tr><td>Test</td><td>1639</td><td>841</td></tr></table>
Table 7: Dataset stats. Number of examples in train, dev, and test splits for our three different experimental setups. Full are the original unaltered datasets. Wikidata Answerable keeps only examples where at least one question entity and answer entity are mappable to Wikidata and there is at least one fact between them in our set of facts.
# A.1.2 Pretraining Data Details
FILM is pretrained on Wikipedia and Wikidata using the same data as Févry et al. (2020). Text in Wikipedia is chunked into 128-token pieces. To compute the entity-linking loss $\text{loss}_{\text{ent}}$, we use as training data entities linked to the 1 million most frequently linked-to Wikidata entities. Text pieces without such entities are dropped. This results in 30.58 million text pieces from Wikipedia. As described in §2.1, we generate $n$ training examples from a piece of text containing $n$ entity mentions, where each mention serves as the masked target for its corresponding example, and the other entity mentions in the example are treated as context entities. This conversion results in 85.58 million pre-training examples. The knowledge base $\mathcal{K}$ is a subset of Wikidata that contains all facts with subject and object entity pairs that co-occur at least 10 times on Wikipedia pages. This results in a KB containing 1.54 million KB triples from Wikidata (or 3.08 million if reverse triples are included). Below, this is called the full setting of pretraining; we will also train on subsets of this example set, as described below. We pretrain the model for 500,000 steps with batch size 2048, and we set $k = 1$ in the $\mathrm{TOP}_k$ operation for fact memory access.
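The described KB pruning, keeping only triples whose subject/object pair co-occurs at least 10 times on Wikipedia pages, can be sketched as follows. This is a hypothetical illustration of the selection criterion, not the paper's pipeline; `page_entity_sets` stands in for the entities linked on each Wikipedia page.

```python
from collections import Counter

def prune_kb(triples, page_entity_sets, min_cooccur=10):
    """Keep (s, r, o) triples whose entity pair co-occurs often enough."""
    pair_counts = Counter()
    for entities in page_entity_sets:          # entities linked on one page
        ents = sorted(set(entities))
        for i in range(len(ents)):
            for j in range(i + 1, len(ents)):
                pair_counts[(ents[i], ents[j])] += 1

    def cooccur(s, o):
        return pair_counts[tuple(sorted((s, o)))]

    return [(s, r, o) for (s, r, o) in triples if cooccur(s, o) >= min_cooccur]

pages = [["A", "B"]] * 10 + [["A", "C"]]       # (A, B) co-occurs 10x, (A, C) once
triples = [("A", "r1", "B"), ("A", "r2", "C")]
pruned = prune_kb(triples, pages)              # only ("A", "r1", "B") survives
```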
# A.2 Training Details
# A.2.1 Pretraining
FILM is jointly trained to predict context entities and the masked entity. Context entities are predicted using the contextual embeddings described in §2.3; intermediate supervision with oracle entity-linking labels is provided in the entity memory access step for context entities; the masked entity is predicted using the knowledge-enhanced contextual embeddings (§2.5); and distantly supervised fact labels are also provided at training time. The final training loss is the unweighted sum of the four losses:
$$
\text{loss}_{\text{pretrain}} = \text{loss}_{\text{ent}} + \text{loss}_{\text{ctx}} + \text{loss}_{\text{fact}} + \text{loss}_{\text{ans}}
$$
# A.2.2 Finetuning on Question Answering
In the Open-domain Question Answering task, questions are posed in natural language, e.g. "Where was Charles Darwin born?", and answered by a sequence of tokens, e.g. "United Kingdom". In this paper, we focus on the subset of open-domain questions that are answerable using entities from a knowledge base. In the example above, the answer "United Kingdom" is an entity in Wikidata whose identifier is Q145.
We convert an open-domain question to a FILM input by appending the special [MASK] token to the end of the question, e.g. {‘Where’, ‘was’, ‘Charles’, ‘Darwin’, ‘born’, ‘?’, [MASK]}. The task is to predict the entity named by the [MASK] token. Here, “Charles Darwin” is a context entity, which is also referred to as the question entity in the finetuning QA task.
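This input construction can be sketched as below. It is a minimal illustration; tokenization details and the answer-readout mechanics are assumed rather than taken from the paper.

```python
MASK = "[MASK]"

def to_film_input(question_tokens):
    """Append [MASK] to a tokenized question; the model predicts the
    entity that fills the mask slot as the answer."""
    return question_tokens + [MASK]

tokens = ["Where", "was", "Charles", "Darwin", "born", "?"]
inp = to_film_input(tokens)
mask_position = inp.index(MASK)   # the slot whose embedding queries the memories
```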
At finetuning time, entity embeddings $\mathbf{E}$ and relation embeddings $\mathbf{R}$ are fixed, and we finetune all transformer layers and the four transformation matrices: $\mathbf{W}_a$, $\mathbf{W}_b$, $\mathbf{W}_e$, $\mathbf{W}_f$. Parameters are tuned to optimize the unweighted sum of the fact memory retrieval loss $\mathrm{loss}_{\mathrm{fact}}$ and the final answer prediction loss $\mathrm{loss}_{\mathrm{ans}}$. If multiple answers are available, the training label $\mathbb{I}_{e_{\mathrm{ans}}}$ becomes a $k$-hot vector uniformly normalized across the answers.
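The $k$-hot label just described can be sketched directly: when a question has multiple gold answers, the target distribution over the entity vocabulary places equal mass on each answer. The entity indices below are hypothetical.

```python
def k_hot_label(answer_ids, vocab_size):
    """Uniformly normalized k-hot target over the entity vocabulary."""
    label = [0.0] * vocab_size
    for a in set(answer_ids):
        label[a] = 1.0 / len(set(answer_ids))
    return label

lbl = k_hot_label([2, 5], vocab_size=8)   # mass 0.5 on entities 2 and 5
```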
$$
\text{loss}_{\text{finetune}} = \text{loss}_{\text{fact}} + \text{loss}_{\text{ans}}
$$
# A.3 Model Parameters
The number of Base parameters includes the encoder and (where applicable) decoder transformer parameters derived from the original papers. We exclude token embeddings in this count, following prior work. The Memory parameter count for DPR, RAG, and FID includes the number of parameters required to cache and index the full 26-million-passage Wikipedia corpus, with dimension 768, used by those models. For EaE, the Memory is the entity embedding matrix. For FILM, it is both the entity embedding matrix and the cached fact embedding matrix comprising the 1.7 million precomputed triple embeddings.
adaptableandinterpretableneuralmemoryoversymbolicknowledge/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2427b8d0326128cf862c8d0ed0f9bdd68e04ceefe4a0f98a3503d456f7bb539e
size 330443
adaptableandinterpretableneuralmemoryoversymbolicknowledge/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b32a26540929d291cad9ebf0dc6dd04f3e540d782588a843af3ec9c9c5d55aa9
size 540152
adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/1c9e01f1-5430-4ac4-b70c-b289752ebdb0_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8f57acf52214e4bd11a75c40ba6eaff629f199181beca74b6fca8b8658e39f47
size 73184
adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/1c9e01f1-5430-4ac4-b70c-b289752ebdb0_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c17d21331c949866dc0c1e4d97362c4fdcf8ee34906007c5a7b7942f0a9e0d1
size 90560
adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/1c9e01f1-5430-4ac4-b70c-b289752ebdb0_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27836dc06f999187648e5919c525812e579f510705da53b65257b91d43c6afe0
size 620181
adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/full.md
ADDED
@@ -0,0 +1,329 @@
# Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks
Zixuan Ke $^{1}$ , Hu Xu $^{2}$ and Bing Liu $^{1}$
$^{1}$ Department of Computer Science, University of Illinois at Chicago
$^{2}$ Facebook AI Research
$^{1}$ {zke4, liub}@uic.edu
$^{2}$ huxu@fb.com
# Abstract
This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks. Although some CL techniques have been proposed for document sentiment classification, we are not aware of any CL work on ASC. A CL system that incrementally learns a sequence of ASC tasks should address the following two issues: (1) transfer knowledge learned from previous tasks to the new task to help it learn a better model, and (2) maintain the performance of the models for previous tasks so that they are not forgotten. This paper proposes a novel capsule network based model called B-CL to address these issues. B-CL markedly improves the ASC performance on both the new task and the old tasks via forward and backward knowledge transfer. The effectiveness of B-CL is demonstrated through extensive experiments.

# 1 Introduction

Continual learning (CL) aims to incrementally learn a sequence of tasks. Once a task is learned, its training data is often discarded (Chen and Liu, 2018). This is in contrast to multi-task learning, which assumes that the training data of all tasks are available simultaneously. The CL setting is important in many practical scenarios. For example, a sentiment analysis company typically has many clients, and each client often wants to have their private data deleted after use. In the personal assistant or chatbot context, the user does not want his/her chat data, which often contains sentiments or emotions, uploaded to a central server. In such applications, if we want to improve sentiment analysis accuracy for each user/client without breaching confidentiality, CL is a suitable solution.

There are two main types of continual learning: (1) Task Incremental Learning (TIL) and (2) Class Incremental Learning (CIL). This work focuses on TIL, where each task is a separate aspect sentiment classification (ASC) task. An ASC task is defined as follows (Liu, 2015): given an aspect (e.g., picture quality in a camera review) and a sentence containing the aspect in a particular domain (e.g., camera), classify whether the sentence expresses a positive, negative, or neutral (no opinion) sentiment about the aspect. TIL builds a model for each task, and all models reside in one neural network. In testing, the system knows which task each test instance belongs to and uses only that task's model to classify the instance. In CIL, each task contains one or more classes to be learned, and only one model is built for all classes. In testing, a test case from any class may be presented to the model without any task information. This setting is not applicable to ASC.

<table><tr><td>Task ID</td><td>Domain/Task</td><td>One Training Example (in that domain/task)</td></tr><tr><td>1</td><td>Vacuum Cleaner [CF]</td><td>This vacuum cleaner sucks !!!</td></tr><tr><td>2</td><td>Desktop [KT]</td><td>The keyboard is clicky .</td></tr><tr><td>3</td><td>Tablet [KT]</td><td>The soft keyboard is hard to use.</td></tr><tr><td>4 (new task)</td><td>Laptop</td><td>The new keyboard sucks and is hard to click!</td></tr></table>

Table 1: Tasks 2 and 3 have shareable knowledge to transfer (KT) to the new task, whereas Task 1 has specific knowledge that is expected to be isolated from the new task to avoid catastrophic forgetting (CF) (although they use the same word). Note that here we use only one sentence to represent a task, but each task actually represents a domain with all its sentences.

Our goal in this paper is to achieve the following two objectives: (1) transfer the knowledge learned from previous tasks to the new task to help it learn a better model, without accessing the training data of previous tasks (in contrast to multi-task learning), and (2) maintain (or even improve) the performance of the models for previous tasks so that they are not forgotten. The focus of existing CL (TIL or CIL) research has been on solving (2), catastrophic forgetting (CF) (Chen and Liu, 2018; Ke et al., 2020a). CF means that when a network learns a sequence of tasks, the learning of each new task is likely to change the network parameters learned for previous tasks, which degrades the model performance for those tasks (McCloskey and Cohen, 1989). In our case, (1) is also important because ASC tasks are similar, i.e., the words and phrases used to express sentiment about different products/tasks are similar. To achieve both objectives, the system needs to identify the shared knowledge that can be transferred to the new task to help it learn better, and the task-specific knowledge that needs to be protected to avoid forgetting of previous models. Table 1 gives an example.

Fine-tuned BERT (Devlin et al., 2019) is one of the most effective methods for ASC (Xu et al., 2019; Sun et al., 2019). However, our experiments show that it works very poorly for TIL. The main reason is that BERT fine-tuned on a task/domain captures highly task-specific information that is difficult to transfer to a new task.

In this paper, we propose a novel model called B-CL (BERT-based Continual Learning) for ASC continual learning. The key novelty is a building block called the Continual Learning Adapter (CLA), inspired by Adapter-BERT (Houlsby et al., 2019). CLA leverages capsules and dynamic routing (Sabour et al., 2017) to identify previous tasks that are similar to the new task and exploit their shared knowledge to help the new task learn, and it uses task masks to protect task-specific knowledge to avoid forgetting (CF). We conduct extensive experiments over a wide range of baselines to demonstrate the effectiveness of B-CL.

In summary, this paper makes two key contributions. (1) It proposes the problem of task incremental learning for ASC. (2) It proposes a new model B-CL with a novel adapter CLA incorporated in a pre-trained BERT to enable ASC continual learning. CLA employs capsules and dynamic routing to explore and transfer relevant knowledge from old tasks to the new task, and uses task masks to isolate task-specific knowledge to avoid CF. To our knowledge, none of these has been done before.

# 2 Related Work

Continual learning (CL) has been studied extensively (Chen and Liu, 2018; Parisi et al., 2019). To our knowledge, no existing work has been done on CL for a sequence of ASC tasks, although CL of a sequence of document sentiment classification tasks has been done.

Continual Learning. Existing work has mainly focused on dealing with catastrophic forgetting (CF).

Regularization-based methods, such as those in (Kirkpatrick et al., 2016; Lee et al.; Seff et al., 2017), add a regularization term to the loss to consolidate previous knowledge when learning a new task.

Parameter isolation-based methods, such as those in (Serrà et al., 2018; Mallya and Lazebnik, 2018; Fernando et al., 2017), dedicate different subsets of the model parameters to different tasks, and identify and mask them out during the training of the new task.

Gradient projection-based methods, such as the one in (Zeng et al., 2019), ensure that gradient updates occur only in directions orthogonal to the inputs of the old tasks and thus do not affect the old tasks.

Replay-based methods, such as those in (Rebuffi et al., 2017; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019), retain an exemplar set of old task training data to help train the new task. The methods in (Shin et al., 2017; Kamra et al., 2017; Rostami et al., 2019; He and Jaeger, 2018) instead build data generators for previous tasks, so that when learning the new task they can use generated data for previous tasks to help avoid forgetting.

As these methods are mainly for avoiding CF, after learning a sequence of tasks their final models are typically worse than models learned for each task separately. The proposed B-CL not only deals with CF but also performs knowledge transfer to improve the performance of both the new and the old tasks.

Lifelong Learning (LL). LL is now regarded as the same as CL, but early LL mainly aimed at improving new task learning through forward transfer without tackling CF (Silver et al., 2013; Ruvolo and Eaton, 2013; Chen and Liu, 2018).

Several researchers have used LL for document-level sentiment classification. Chen et al. (2015) and Wang et al. (2019) proposed Naive Bayes (NB) approaches to help improve new task learning; a heuristic NB method was also used in (Wang et al., 2019). Xia et al. (2017) presented an LL approach based on voting of individual task classifiers. None of these works uses neural networks, and none is concerned with the CF problem.

Shu et al. (2017) used LL for aspect extraction, which is a different problem. Wang et al. (2018) used LL for ASC, but improved only the new task and did not deal with CF. The existing CL systems SRK (Lv et al., 2019), KAN (Ke et al., 2020b) and L2PG (Qin et al., 2020) are for document sentiment classification, not ASC. Ke et al. (2020a) also performed transfer, but in the image domain.

Recently, capsule networks (Hinton et al., 2011) have been used in sentiment classification and text classification (Chen and Qian, 2019; Zhao et al., 2019), but they have not been used in CL.

# 3 Preliminary

This section introduces BERT, Adapter-BERT and the capsule network, as they are used in our model.

BERT for ASC. Due to its superior performance, this work uses BERT (Devlin et al., 2019) and its transformer (Vaswani et al., 2017) architecture as the base. We also adopt the ASC formulation in (Xu et al., 2019), where the aspect term and the review sentence are concatenated via [SEP], and the sentiment polarity is predicted on top of the [CLS] token. Although BERT can achieve impressive performance on a single ASC task, its architecture and fine-tuning paradigm are not suitable for CL (see Sec. 1). Experiments show that it performs very poorly for CL (Sec. 5.4). We found that Adapter-BERT (Houlsby et al., 2019) is a better fit for CL.

Adapter-BERT. Adapter-BERT inserts a 2-layer fully connected network (adapter) into each transformer layer of BERT (see Figure 1(A)). During training for the end-task, only the adapters and layer normalization layers are trained; no other BERT parameters change, which is good for CL because fine-tuning BERT itself causes serious forgetting. Adapter-BERT achieves performance similar to fine-tuned BERT (Houlsby et al., 2019). We propose to exploit the adapter idea and the capsule network to achieve effective CL for ASC tasks.

Capsule Network. The capsule network (CapsNet) is a relatively new classification architecture (Hinton et al., 2011; Sabour et al., 2017). Unlike CNN, CapsNet replaces scalar feature detectors with vector capsules that can preserve additional information, such as position and thickness in images. A typical CapsNet has two capsule layers. The primary layer stores low-level feature maps, and the class layer produces the probability for classification, with each capsule corresponding to one class. It uses a dynamic routing algorithm to enable each lower-level capsule to send its output to the similar (or "agreed", computed by dot product) higher-level capsule. This is the key property that we exploit to identify and group similar tasks and their shared features or knowledge.

Note that the proposed B-CL does not adopt the whole capsule network, as we are interested only in the capsule layers and dynamic routing, not the max-margin loss and the classifier.

Figure 1: (A) Adapter-BERT (Houlsby et al., 2019) and its adapters in a transformer (Vaswani et al., 2017) layer. An adapter is a 2-layer fully connected network with a skip-connection. It is added twice to each transformer layer. Only the adapters (yellow boxes) and layer norm (green boxes) layers are trainable; the other modules (grey boxes) are frozen. (B) The proposed B-CL, which replaces the adapter with CLA. CLA has two sub-modules: a knowledge sharing module (KSM) and a task specific module (TSM), each with a skip-connection.

# 4 Continual Learning Adapter (CLA)

Recall that B-CL aims to achieve (1) knowledge transfer between related old tasks and the new task through knowledge sharing, and (2) forgetting avoidance by preventing the task-specific knowledge of previous tasks from being overwritten when learning the new task. Inspired by Adapter-BERT, we propose the continual learning adapter (CLA) to replace the adapters in Adapter-BERT, as shown in Figure 1(B), enabling BERT-based continual learning for ASC.

The architecture of CLA is shown in Figure 2(A). It contains two modules: (1) a knowledge sharing module (KSM) for identifying and exploiting knowledge shareable between similar previous tasks and the new task, and (2) a task specific module (TSM) for learning task-specific neurons and protecting them from being updated by the new task.

CLA takes two inputs: (1) hidden states $h^{(t)}$ from the feed-forward layer inside a transformer layer, and (2) the task ID $t$. The outputs are hidden states with features good for the $t$-th task. KSM leverages capsule layers (see below) and dynamic routing to group similar tasks and their shareable knowledge, whereas TSM uses task masks (TMs) to protect the neurons of a particular task and leave the other neurons free; those free neurons are later used by TSM for a new task. Since TMs are differentiable, the whole B-CL system can be trained end-to-end. We detail each module below.

# 4.1 Knowledge Sharing Module (KSM)

KSM groups similar tasks and the shared knowledge (features) among them to enable knowledge transfer across similar tasks. This is achieved through two capsule layers (the task capsule layer and the knowledge sharing capsule layer) and the dynamic routing algorithm of the capsule network.

# 4.1.1 Task Capsule Layer (TCL)

Each capsule in TCL represents a task, and TCL prepares low-level features derived from each task (Figure 2(A)). As such, a capsule is added to TCL for every new task. This incremental growth is efficient and easy because the capsules are discrete and do not share parameters; each capsule is simply a 2-layer fully connected network with a small number of parameters. Let $h^{(t)} \in \mathbb{R}^{d_t \times d_e}$ be the input of CLA, where $d_t$ is the number of tokens and $d_e$ the number of dimensions. Let the set of tasks learned so far be $T_{\text{prev}}$ (before learning the new task $t$), with $|T_{\text{prev}}| = n$. In TCL, we have $n + 1$ capsules representing the $n$ previously learned tasks as well as the new task $t$. The capsule for the $i$-th ($i \leq n + 1$) task is

$$
p_{i}^{(t)} = f_{i}\left(h^{(t)}\right), \tag{1}
$$

where $f_{i}(\cdot) = \mathrm{MLP}_{i}(\cdot)$ denotes a 2-layer fully connected network.

# 4.1.2 Knowledge Sharing Capsule Layer (KCL)

Each knowledge sharing capsule in KCL captures those tasks (i.e., their task capsules $\{p_i^{(t)}\}_{1}^{n + 1}$) with similar features or shared knowledge. This is achieved automatically by the dynamic routing algorithm. Recall that dynamic routing encourages each lower-level capsule (a task capsule in our case) to send its output to the similar (or "agreed") higher-level capsule (a knowledge sharing capsule in our case).

Essentially, similar task capsules (with many shared features) are "clustered" together via high coupling coefficients (which determine how much of a task capsule goes to the next layer), while dissimilar tasks (with few shared features) are blocked via low coefficients. Such clustering identifies the features or knowledge shared by multiple task capsules and also helps backward transfer across similar tasks.

KCL first turns each task capsule $p_i^{(t)}$ into a temporary feature $u_{j|i}^{(t)}$ as:

$$
u_{j|i}^{(t)} = W_{ij}\, p_{i}^{(t)}, \tag{2}
$$

where $W_{ij} \in \mathbb{R}^{d_s \times d_k}$ is a weight matrix, and $d_s$ and $d_k$ are the dimensions of task capsule $i$ and knowledge sharing capsule $j$, respectively. The number of knowledge sharing capsules is a hyperparameter detailed in the experiment section. The temporary features are summed up with weights $c_{ij}^{(t)}$ to obtain the initial knowledge sharing capsule $s_j^{(t)}$:

$$
s_{j}^{(t)} = \sum_{i} c_{ij}^{(t)} u_{j|i}^{(t)}, \tag{3}
$$

where $c_{ij}^{(t)}$ is a coupling coefficient that sums to 1 over $j$; we detail how to compute it below. Note that the task capsule of each task in Eq. 1 is mapped to the knowledge sharing capsules in Eq. 3, and $c_{ij}^{(t)}$ indicates how informative the representation of the $i$-th task is to the $j$-th knowledge sharing capsule. As a result, a knowledge sharing capsule can represent diverse shareable knowledge. Tasks with a very low $c_{ij}^{(t)}$ contribute little to the $j$-th knowledge sharing capsule. This ensures that only the task capsules of tasks that are salient or similar to the new task are used, while the other task capsules are ignored (and thus protected), so that more general shareable knowledge is learned. Recall that ASC tasks are similar, so such learning of task-shared features can be very important.

Note that in backpropagation, dissimilar tasks with low $c_{ij}^{(t)}$ receive small gradient updates, while similar tasks with high $c_{ij}^{(t)}$ receive larger ones. This encourages backward transfer across similar tasks.

Dynamic Routing. The coupling coefficients in Eq. 3 are essential for the quality of the shareable knowledge. They are computed by a "routing softmax":

$$
c_{ij}^{(t)} = \frac{\exp\left(b_{ij}^{(t)}\right)}{\sum_{o} \exp\left(b_{io}^{(t)}\right)}, \tag{4}
$$

where each $b_{ij}^{(t)}$ is a log prior indicating how salient or similar task capsule $i$ is to knowledge sharing capsule $j$. It is initialized to 0, indicating no salient connection at the beginning. We apply the dynamic routing algorithm in (Sabour et al., 2017) to update $b_{ij}^{(t)}$:

$$
b_{ij}^{(t)} \leftarrow b_{ij}^{(t)} + a_{ij}^{(t)}, \tag{5}
$$

Figure 2: (A) Architecture of CLA; the skip-connection is not shown for clarity. (B) Illustration of task masking: a (learnable) task mask is applied after the activation function to selectively activate a neuron (or feature). Notes on (B): the two rows of each task correspond to $k_0^{(t)}$ and $k_1^{(t)}$ in TSM. In the cells before training, those with 0's are the neurons to be protected (masked), and the cells without a number are free (unused) neurons. In the cells after training, those with 1's are neurons important for the current task, which are used as a mask for the future. Cells with more than one color are shared by more than one task; 0 cells without a color are not used by any task.

where $a_{ij}^{(t)}$ is the agreement coefficient (see below). Intuitively, this step aggregates similar (or "agreed") tasks on a knowledge sharing capsule: a higher agreement coefficient $a_{ij}^{(t)}$ yields a higher logit $b_{ij}^{(t)}$ (Eq. 5) and thus a higher coupling coefficient $c_{ij}^{(t)}$ (Eq. 4). The agreement coefficient is computed as

$$
a_{ij}^{(t)} = u_{j|i}^{(t)} \cdot v_{j}^{(t)}, \tag{6}
$$

where $v_{j}^{(t)}$ is a normalized representation obtained by applying the non-linear "squash" function (Sabour et al., 2017) to $s_{j}^{(t)}$ (for the first task, $s_{j}^{(t)} = u_{j|i}^{(t)}$):

$$
v_{j}^{(t)} = \frac{\left\|s_{j}^{(t)}\right\|^{2}}{1 + \left\|s_{j}^{(t)}\right\|^{2}} \frac{s_{j}^{(t)}}{\left\|s_{j}^{(t)}\right\|}, \tag{7}
$$

where the length of $v_{j}^{(t)}$ is normalized to $[0, 1]$ to represent the activation probability of knowledge sharing capsule $j$.

Finally, note that the dynamic routing procedure (Eqs. (3)–(7)) is repeated for $r$ iterations.

# 4.2 Task Specific Module (TSM)

Although knowledge sharing is important for ASC, it is equally important to preserve the task-specific knowledge of previous tasks to prevent forgetting (CF). To achieve this, we use task masks (Figure 2(B)). Specifically, we first detect the neurons used by each old task, and then block off or mask out all the used neurons when learning a new task.

The task specific module consists of differentiable layers (CLA uses a 2-layer fully connected network). A task mask is applied to each layer's output to indicate which neurons should be protected for that task, and gradient updates to those neurons are forbidden during backpropagation for a new task. Tasks with overlapping masks share knowledge: due to KSM, the features flowing through the overlapping neurons allow related old tasks to also improve while the new task is being learned.

# 4.3 Task Masks

Given the knowledge sharing capsules $s_{j}^{(t)}$, TSM maps them into $k_{l}^{(t)}$ via a fully connected network, where $l$ denotes the $l$-th layer in TSM. A task mask (a "soft" binary mask) $m_{l}^{(t)}$ is trained for each task $t$ at each layer $l$ in TSM during the training of task $t$'s classifier, indicating the neurons that are important for the task in that layer. Here we borrow the hard attention idea in (Serrà et al., 2018) and leverage the task ID embedding to train the task mask.

For a task ID $t$, its embedding $e_l^{(t)}$ consists of differentiable deterministic parameters that can be learned together with the other parts of the network, and it is trained for each layer in TSM. To generate the task mask $m_l^{(t)}$ from $e_l^{(t)}$, the Sigmoid is used as a pseudo-gate function, with a positive scaling hyperparameter $s$ applied to aid training. The mask $m_l^{(t)}$ is computed as follows:

$$
m_{l}^{(t)} = \sigma\left(s\, e_{l}^{(t)}\right). \tag{8}
$$

Note that the neurons in $m_{l}^{(t)}$ may overlap with those in the masks $m_{l}^{(i_{\mathrm{prev}})}$ of previous tasks, indicating some shared knowledge. Given the output $k_{l}^{(t)}$ of each layer in TSM, we element-wise multiply $k_{l}^{(t)} \otimes m_{l}^{(t)}$. The masked output of the last layer, $k^{(t)}$, is fed to the next layer of BERT with a skip-connection (see Figure 1). After learning task $t$, the final $m_{l}^{(t)}$ is saved and added to the set $\{m_{l}^{(t)}\}$.

# 4.4 Training

For each past task $i_{\mathrm{prev}} \in \mathcal{T}_{\mathrm{prev}}$, its mask $m_l^{(i_{\mathrm{prev}})}$ indicates which neurons are used by that task and need to be protected. In learning task $t$, $m_l^{(i_{\mathrm{prev}})}$ is used to set the gradient $g_l^{(t)}$ on all neurons used by previous tasks in layer $l$ of TSM to 0. Before modifying the gradient, we first accumulate the neurons used by all previous tasks' masks. Since each $m_l^{(i_{\mathrm{prev}})}$ is binary, we use max-pooling to achieve the accumulation:

$$
m_{l}^{(t_{\mathrm{ac}})} = \mathrm{MaxPool}\left(\left\{m_{l}^{(i_{\mathrm{prev}})}\right\}\right). \tag{9}
$$

The accumulated mask $m_l^{(t_{\mathrm{ac}})}$ is then applied to the gradient:

$$
g_{l}^{\prime(t)} = g_{l}^{(t)} \otimes \left(1 - m_{l}^{(t_{\mathrm{ac}})}\right). \tag{10}
$$

The gradients corresponding to the 1 entries in $m_{l}^{(t_{\mathrm{ac}})}$ are set to 0, while the others remain unchanged. In this way, the neurons of old tasks are protected. Note that we expand (copy) the vector $m_{l}^{(t_{\mathrm{ac}})}$ to match the dimensions of $g_{l}^{(t)}$.

Though the idea is intuitive, $e_l^{(t)}$ is not easy to train. To make its learning easier and more stable, an annealing strategy is applied (Serrà et al., 2018): $s$ is annealed during training, inducing a gradient flow, and $s = s_{\mathrm{max}}$ is used during testing. Eq. 8 then approximates a unit step function as the mask, with $m_l^{(t)} \to \{0, 1\}$ as $s \to \infty$. A training epoch starts with all neurons being equally active, and they are progressively polarized within the epoch. Specifically, $s$ is annealed as follows:

$$
s = \frac{1}{s_{\max}} + \left(s_{\max} - \frac{1}{s_{\max}}\right) \frac{b - 1}{B - 1}, \tag{11}
$$

where $b$ is the batch index and $B$ is the total number of batches in an epoch.

Illustration. In Figure 2(B), after learning the first task (Task 0), we obtain its useful neurons, marked in orange with a 1 in each cell; they serve as a mask in learning future tasks. In learning Task 1, those useful neurons for Task 0 are masked (the orange cells on the left are set to 0). The process also learns the useful neurons for Task 1, marked in green with 1's. When Task 2 arrives, all neurons important to Tasks 0 and 1 are masked, i.e., their mask entries are set to 0 (orange and green before training). After training Task 2, we see that Tasks 2 and 1 have a shared neuron that is important to both; it is marked in both red and green.

# 5 Experiments

We now evaluate B-CL by comparing it with both non-continual learning and continual learning baselines. We follow the standard CL evaluation method in (Lange et al., 2019): we present B-CL with a sequence of aspect sentiment classification (ASC) tasks for it to learn. Once a task is learned, its training data is discarded. After all tasks are learned, we test all task models using their respective test data. In training each task, we use its validation set to decide when to stop training.

# 5.1 Experiment Datasets

Since B-CL works in the CL setting, we employ a set of 19 ASC datasets (reviews of 19 products) to produce sequences of tasks. Each dataset represents a task. The datasets come from 4 sources: (1) HL5Domains (Hu and Liu, 2004), with reviews of 5 products; (2) Liu3Domains (Liu et al., 2015), with reviews of 3 products; (3) Ding9Domains (Ding et al., 2008), with reviews of 9 products; and (4) SemEval14, with reviews of 2 products (SemEval 2014 Task 4 for laptop and restaurant). For (1), (2) and (3), we split about $10\%$ of the original data as validation data and another $10\%$ as test data. For (4), we use 150 examples from the training set for validation. To be consistent with existing research (Tang et al., 2016), examples with the conflict polarity (both positive and negative sentiments expressed about an aspect term) are not used. Statistics of the 19 datasets are given in Table 2.

<table><tr><td>Data source</td><td>Task/domain</td><td>Train</td><td>Validation</td><td>Test</td></tr><tr><td rowspan="3">Liu3domain</td><td>Speaker</td><td>352</td><td>44</td><td>44</td></tr><tr><td>Router</td><td>245</td><td>31</td><td>31</td></tr><tr><td>Computer</td><td>283</td><td>35</td><td>36</td></tr><tr><td rowspan="5">HL5domain</td><td>Nokia6610</td><td>271</td><td>34</td><td>34</td></tr><tr><td>Nikon4300</td><td>162</td><td>20</td><td>21</td></tr><tr><td>Creative</td><td>677</td><td>85</td><td>85</td></tr><tr><td>CanonG3</td><td>228</td><td>29</td><td>29</td></tr><tr><td>ApexAD</td><td>343</td><td>43</td><td>43</td></tr><tr><td rowspan="9">Ding9domain</td><td>CanonD500</td><td>118</td><td>15</td><td>15</td></tr><tr><td>Canon100</td><td>175</td><td>22</td><td>22</td></tr><tr><td>Diaper</td><td>191</td><td>24</td><td>24</td></tr><tr><td>Hitachi</td><td>212</td><td>26</td><td>27</td></tr><tr><td>Ipod</td><td>153</td><td>19</td><td>20</td></tr><tr><td>Linksys</td><td>176</td><td>22</td><td>23</td></tr><tr><td>MicroMP3</td><td>484</td><td>61</td><td>61</td></tr><tr><td>Nokia6600</td><td>362</td><td>45</td><td>46</td></tr><tr><td>Norton</td><td>194</td><td>24</td><td>25</td></tr><tr><td rowspan="2">SemEval14</td><td>Rest.</td><td>3452</td><td>150</td><td>1120</td></tr><tr><td>Laptop</td><td>2163</td><td>150</td><td>638</td></tr></table>

Table 2: Number of examples in each task or dataset. More detailed data statistics are given in the Appendix.

# 5.2 Compared Baselines

We use 18 baselines, including both non-continual learning and continual learning methods.

Non-continual Learning (NL) Baselines: The NL setting builds a model for each task independently using a separate network; it clearly has no knowledge transfer or forgetting. We have 3 baselines under NL: (1) BERT, (2) Adapter-BERT and (3) W2V (word2vec embeddings). For BERT, we use trainable BERT to perform ASC (see Sec. 3). Adapter-BERT adapts BERT as in (Houlsby et al., 2019), where only the adapter blocks are trainable. W2V uses embeddings trained on the Amazon review data in (Xu et al., 2018) using FastText (Grave et al., 2018); for it, we adopt the ASC classification network in (Xue and Li, 2018), which takes both the aspect term and the review sentence as input.

Continual Learning (CL) Baselines: The CL setting includes 3 baselines without dealing with forgetting (WDF) and 12 baselines from 6 state-of-the-art task incremental learning (TIL) methods that deal with forgetting. The WDF baselines greedily learn a sequence of tasks incrementally without explicitly tackling forgetting or knowledge transfer. The 3 baselines under WDF are again (4) BERT, (5) Adapter-BERT and (6) W2V.

The 6 state-of-the-art CL systems are KAN, SRK, HAT, UCL, EWC and OWM. KAN (Ke et al., 2020b) and SRK (Lv et al., 2019) are TIL methods for document sentiment classification. HAT, UCL, EWC and OWM were originally designed for image classification; we replace their original MLP or CNN image classification networks with a CNN for text classification (Kim, 2014). HAT (Serrà et al., 2018) is one of the best TIL methods, with almost no forgetting. UCL (Ahn et al., 2019) is a recent TIL method. EWC (Kirkpatrick et al., 2016) is a popular regularization-based class incremental learning (CIL) method, which we adapt to TIL by training only the head of the given task ID during training and considering only that head's prediction during testing. OWM (Zeng et al., 2019) is a state-of-the-art CIL method, which we also adapt to TIL.

From these 6 systems, we created 6 baselines using W2V embeddings, with the aspect term prepended to the sentence so that the CL methods can take both the aspect and the review sentence, and 6 baselines using BERT (Frozen), which replaces the W2V embeddings; following the BERT formulation in Sec. 3, it naturally takes both the aspect and the review sentence. Adapter-BERT is not applicable to these systems, as their architectures cannot use an adapter.
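
A minimal sketch of the two input styles: prepending the aspect term for the W2V baselines is stated above, while for BERT we assume a standard sentence-pair layout (the exact special-token arrangement is our assumption, not spelled out here).

```python
# Illustrative input construction for ASC (assumed token layouts).

def bert_asc_input(aspect, sentence):
    # Assumed BERT sentence-pair input: aspect as segment A, review as B.
    return f"[CLS] {aspect} [SEP] {sentence} [SEP]"

def w2v_asc_input(aspect, sentence):
    # W2V baselines: the aspect term is simply prepended to the sentence.
    return f"{aspect} {sentence}"

pair = bert_asc_input("battery life", "The battery life is great.")
flat = w2v_asc_input("battery life", "The battery life is great.")
```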
# 5.3 Hyperparameters
Unless otherwise stated, for the task sharing module we employ 2 fully connected layers with dimension 768 in TCL, together with 3 knowledge sharing capsules; dynamic routing is repeated for 3 iterations. For the task-specific module, we use embeddings with 2000 dimensions as the final and hidden layers of the TSM, and the task ID embeddings also have 2000 dimensions. A fully connected layer with softmax output is used as each classification head in the last layer of BERT, together with the categorical cross-entropy loss. We set $s_{\mathrm{max}}$ in Eq. 11 to 140 and use dropout of 0.5 between fully connected layers. The training of BERT, Adapter-BERT and B-CL follows (Xu et al., 2019). We adopt $\mathrm{BERT}_{\mathrm{BASE}}$ (uncased) and set the maximum combined length of sentence and aspect to 128. We use the Adam optimizer with a learning rate of 3e-5. Based on results on the validation data, 10 epochs are used for the SemEval datasets and 30 epochs for all other datasets; all runs use batch size 32. For the CL baselines, we train all models with a learning rate of 0.05 and early-stop training when there is no improvement in the validation loss for 5 epochs.
<table><tr><td>Scenario</td><td>Model</td><td>Category</td><td>Acc.</td><td>MF1</td></tr><tr><td rowspan="3">Non-continual Learning</td><td>BERT</td><td>NL</td><td>0.8584</td><td>0.7635</td></tr><tr><td>Adapter-BERT</td><td>NL</td><td>0.8596</td><td>0.7807</td></tr><tr><td>W2V</td><td>NL</td><td>0.7701</td><td>0.5189</td></tr><tr><td rowspan="17">Continual Learning</td><td>BERT</td><td>WDF</td><td>0.4960</td><td>0.4308</td></tr><tr><td>Adapter-BERT</td><td>WDF</td><td>0.5403</td><td>0.4481</td></tr><tr><td>W2V</td><td>WDF</td><td>0.8269</td><td>0.7356</td></tr><tr><td rowspan="6">BERT (Frozen)</td><td>KAN</td><td>0.8549</td><td>0.7738</td></tr><tr><td>SRK</td><td>0.8476</td><td>0.7852</td></tr><tr><td>EWC</td><td>0.8637</td><td>0.7452</td></tr><tr><td>UCL</td><td>0.8389</td><td>0.7482</td></tr><tr><td>OWM</td><td>0.8702</td><td>0.7931</td></tr><tr><td>HAT</td><td>0.8674</td><td>0.7816</td></tr><tr><td rowspan="6">W2V</td><td>KAN</td><td>0.7206</td><td>0.4001</td></tr><tr><td>SRK</td><td>0.7101</td><td>0.3963</td></tr><tr><td>EWC</td><td>0.8416</td><td>0.7229</td></tr><tr><td>UCL</td><td>0.8441</td><td>0.7599</td></tr><tr><td>OWM</td><td>0.8270</td><td>0.7118</td></tr><tr><td>HAT</td><td>0.8083</td><td>0.6363</td></tr><tr><td colspan="2">B-CL (forward)</td><td>0.8809</td><td>0.7993</td></tr><tr><td colspan="2">B-CL</td><td>0.8829</td><td>0.8140</td></tr></table>
The batch size is set to 64. For all the CL baselines, we use the code provided by their authors and adopt their original parameters (for EWC, we adopt its TIL variant implemented by Serrà et al. (2018)).
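
The dynamic routing mentioned above (3 knowledge sharing capsules, 3 routing iterations) follows the algorithm of Sabour et al. (2017). A minimal numpy sketch, with toy shapes that are illustrative rather than the paper's actual sizes:

```python
import numpy as np

# Dynamic routing between capsules (Sabour et al., 2017), run for 3
# iterations over 3 output capsules. Shapes here are toy assumptions.

def squash(s, axis=-1, eps=1e-9):
    # Shrink vectors so their norm lies in (0, 1) while keeping direction.
    norm2 = np.sum(s * s, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    # u_hat: (n_in, n_out, dim) prediction vectors from lower capsules.
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum
        v = squash(s)                                         # (n_out, dim)
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
    return v

rng = np.random.default_rng(0)
v = dynamic_routing(rng.standard_normal((8, 3, 16)), iterations=3)
```

Routing-by-agreement lets similar tasks' features cluster to the same capsule, which is how the knowledge sharing capsules group transferable knowledge.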
# 5.4 Results and Analysis
Since the order of the 19 tasks may affect the final results, we randomly choose 5 task sequences, run each, and average their results. We compute both accuracy and Macro-F1 over the 3 polarity classes; Macro-F1 is the main metric because the imbalanced classes bias accuracy. Table 3 gives the results of the 19 tasks (or datasets) averaged over the 5 random task sequences.
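
Macro-F1 averages the per-class F1 scores with equal weight, so a majority polarity class cannot dominate it the way it dominates accuracy. A minimal sketch with made-up labels:

```python
# Macro-F1 over the 3 polarity classes: compute F1 per class, then take
# the unweighted mean. Labels below are illustrative only.

def macro_f1(y_true, y_pred, classes=("pos", "neg", "neu")):
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["pos", "pos", "pos", "neg", "neu"]
y_pred = ["pos", "pos", "neg", "neg", "pos"]
score = macro_f1(y_true, y_pred)
```

Note how the completely missed "neu" class contributes a 0 to the average, pulling the score down even though overall accuracy is 3/5.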
Overall Performance. Table 3 shows that B-CL markedly outperforms all baselines. We discuss the detailed observations below:
(1) For the non-continual learning (NL) baselines, BERT and Adapter-BERT perform similarly, while W2V is poorer, which is understandable.
(2) Comparing NL (non-continual learning) and WDF (continual learning without dealing with forgetting), we see that WDF is much better than NL for W2V. This indicates that the ASC tasks are similar and share knowledge, and that catastrophic forgetting (CF) is not a major issue for W2V.
However, WDF is much worse than NL for BERT (with fine-tuning) and Adapter-BERT (with adapter-tuning). This is because BERT with fine-tuning learns highly task-specific knowledge (Merchant et al., 2020). While this is desirable for NL,
Table 3: Accuracy (Acc.) and Macro-F1 (MF1) averaged over 5 random sequences of 19 tasks.
<table><tr><td>Model</td><td>Acc.</td><td>MF1</td></tr><tr><td>B-CL (-KSM; -TSM)</td><td>0.5403</td><td>0.4481</td></tr><tr><td>B-CL (-KSM)</td><td>0.8614</td><td>0.7852</td></tr><tr><td>B-CL (-TSM)</td><td>0.8312</td><td>0.7107</td></tr><tr><td>B-CL</td><td>0.8829</td><td>0.8140</td></tr></table>
Table 4: Ablation experiment results.
it is bad for WDF because task-specific knowledge is hard to share across tasks or transfer. WDF thus causes serious catastrophic forgetting (CF) in CL.
(3) Unlike BERT and Adapter-BERT, our B-CL does very well in both forgetting avoidance and knowledge transfer, outperforming all baselines. Although the state-of-the-art CL baselines EWC, UCL, OWM and HAT perform better than WDF, they are all significantly poorer than B-CL because they have no mechanism to encourage knowledge transfer. KAN and SRK do perform knowledge transfer, but they were designed for document-level sentiment classification; they are weak here, even weaker than the other CL methods.
Effectiveness of Knowledge Transfer. We now examine knowledge transfer in B-CL. For forward transfer (B-CL (forward) in Table 3), we use the test accuracy and MF1 of each task when it was first learned. For backward transfer (B-CL in Table 3), we use the final result after all tasks are learned. Comparing NL with the forward-transfer results shows whether forward transfer is effective; comparing the forward-transfer results with the backward-transfer results shows whether backward transfer improves them further. Table 3 shows that forward transfer in B-CL is highly effective (forward results for the other CL baselines are given in the Appendix; B-CL's forward result outperforms all of them). Backward transfer improves the performance slightly further.
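
The two measurements can be read off an accuracy matrix R, where R[i][j] is the test score on task j after learning tasks 0..i: forward transfer uses the diagonal (each task when first learned), backward transfer uses the last row (after all tasks are learned). A toy sketch with made-up numbers:

```python
# Forward vs. backward transfer from an accuracy matrix R, where R[i][j]
# is the test score on task j after learning tasks 0..i. The 3x3 matrix
# below is fabricated data for illustration only.

def forward_scores(R):
    # Score of each task when it was first learned (diagonal of R).
    return [R[i][i] for i in range(len(R))]

def backward_scores(R):
    # Final score of each task after ALL tasks are learned (last row of R).
    return list(R[-1])

R = [
    [0.84, None, None],
    [0.85, 0.86, None],
    [0.86, 0.87, 0.88],
]
fwd = forward_scores(R)
bwd = backward_scores(R)
```

When the last-row entry for a task exceeds its diagonal entry, later tasks have improved it, i.e., positive backward transfer.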
Ablation Experiments. The results of the ablation experiments are given in Table 4. "-KSM; -TSM" means removing both the knowledge sharing and task-specific modules, i.e., simply deploying Adapter-BERT; "-KSM" removes the knowledge sharing module; "-TSM" removes the task-specific module. Table 4 clearly shows that the full B-CL system gives the best overall results, indicating that every component contributes to the model.
# 6 Conclusion
This paper studies continual learning (CL) of a sequence of ASC tasks and proposes a novel technique called B-CL that adapts pre-trained BERT for CL. B-CL uses continual learning adapters and capsule networks to effectively encourage knowledge transfer among tasks and to protect task-specific knowledge. Experiments show that B-CL markedly improves ASC performance on both the new task and the old tasks via forward and backward knowledge transfer.
# Acknowledgments
This work was supported in part by two grants from National Science Foundation: IIS-1910424 and IIS-1838770, a DARPA Contract HR001120C0023, and a research gift from Northrop Grumman.
# References
Hongjoon Ahn, Sungmin Cha, Donggyu Lee, and Taesup Moon. 2019. Uncertainty-based continual learning with adaptive regularization. In NIPS.

Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. 2019. Efficient lifelong learning with A-GEM. In ICLR.

Zhiyuan Chen and Bing Liu. 2018. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3):1-207.

Zhiyuan Chen, Nianzu Ma, and Bing Liu. 2015. Lifelong learning for sentiment classification. In ACL.

Zhuang Chen and Tieyun Qian. 2019. Transfer capsule network for aspect level sentiment classification. In ACL.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.

Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the 2008 International Conference on Web Search and Data Mining.

Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, and Daan Wierstra. 2017. PathNet: Evolution channels gradient descent in super neural networks. CoRR.

Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In LREC.

Xu He and Herbert Jaeger. 2018. Overcoming catastrophic interference using conceptor-aided backpropagation. In ICLR.

Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. 2011. Transforming auto-encoders. In International Conference on Artificial Neural Networks.

Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In ICML.

Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of ACM SIGKDD.

Nitin Kamra, Umang Gupta, and Yan Liu. 2017. Deep generative dual memory network for continual learning. CoRR.

Zixuan Ke, Bing Liu, and Xingchang Huang. 2020a. Continual learning of a mixed sequence of similar and dissimilar tasks. In NeurIPS.

Zixuan Ke, Bing Liu, Hao Wang, and Lei Shu. 2020b. Continual learning with knowledge transfer for sentiment classification. In ECML-PKDD.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP.

James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. 2016. Overcoming catastrophic forgetting in neural networks. CoRR.

Matthias De Lange, Rahaf Aljundi, Marc Masana, and Tinne Tuytelaars. 2019. Continual learning: A comparative study on how to defy forgetting in classification tasks. CoRR.

Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. 2017. Overcoming catastrophic forgetting by incremental moment matching. In NIPS.

Bing Liu. 2015. Sentiment analysis: Mining opinions, sentiments, and emotions. Cambridge University Press.

Qian Liu, Zhiqiang Gao, Bing Liu, and Yuanlin Zhang. 2015. Automated rule selection for aspect extraction in opinion mining. In IJCAI.

David Lopez-Paz and Marc'Aurelio Ranzato. 2017. Gradient episodic memory for continual learning. In NIPS.

Guangyi Lv, Shuai Wang, Bing Liu, Enhong Chen, and Kun Zhang. 2019. Sentiment classification by leveraging the shared knowledge from a sequence of domains. In DASFAA.

Arun Mallya and Svetlana Lazebnik. 2018. PackNet: Adding multiple tasks to a single network by iterative pruning. In CVPR.

Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation.

Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to BERT embeddings during fine-tuning? CoRR.

German Ignacio Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. 2019. Continual lifelong learning with neural networks: A review. Neural Networks.

Qi Qin, Wenpeng Hu, and Bing Liu. 2020. Using the past knowledge to improve sentiment classification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 1124-1133.

Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. 2017. iCaRL: Incremental classifier and representation learning. In CVPR.

Mohammad Rostami, Soheil Kolouri, and Praveen K. Pilly. 2019. Complementary learning for overcoming catastrophic forgetting using experience replay. In IJCAI.

Paul Ruvolo and Eric Eaton. 2013. ELLA: An efficient lifelong learning algorithm. In ICML, pages 507-515.

Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In NIPS.

Ari Seff, Alex Beatson, Daniel Suo, and Han Liu. 2017. Continual learning in generative adversarial nets. CoRR, abs/1705.08395.

Joan Serrà, Didac Suris, Marius Miron, and Alexandros Karatzoglou. 2018. Overcoming catastrophic forgetting with hard attention to the task. In ICML.

Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. 2017. Continual learning with deep generative replay. In NIPS.

Lei Shu, Hu Xu, and Bing Liu. 2017. Lifelong learning CRF for supervised aspect extraction. In ACL.

Daniel L. Silver, Qiang Yang, and Lianghao Li. 2013. Lifelong machine learning systems: Beyond learning algorithms. In Lifelong Machine Learning, Papers from the 2013 AAAI Spring Symposium, Palo Alto, California, USA, March 25-27, 2013.

Chi Sun, Luyao Huang, and Xipeng Qiu. 2019. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. In NAACL.

Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory network. In EMNLP.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.

Hao Wang, Bing Liu, Shuai Wang, Nianzu Ma, and Yan Yang. 2019. Forward and backward knowledge transfer for sentiment classification. In ACML.

Shuai Wang, Guangyi Lv, Sahisnu Mazumder, Geli Fei, and Bing Liu. 2018. Lifelong learning memory networks for aspect sentiment classification. In IEEE International Conference on Big Data.

Rui Xia, Jie Jiang, and Huihui He. 2017. Distantly supervised lifelong learning for large-scale social media sentiment analysis. IEEE Trans. Affective Computing, 8(4):480-491.

Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In NAACL-HLT.

Hu Xu, Sihong Xie, Lei Shu, and Philip S. Yu. 2018. Dual attention network for product compatibility and function satisfiability analysis. In AAAI.

Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In ACL.

Guanxiong Zeng, Yang Chen, Bo Cui, and Shan Yu. 2019. Continuous learning of context-dependent processing in neural networks. Nature Machine Intelligence.

Wei Zhao, Haiyun Peng, Steffen Eger, Erik Cambria, and Min Yang. 2019. Towards scalable and reliable capsule networks for challenging NLP applications. In ACL.
adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a9a71436f57c2162446ad916b21c845c6ccad0cc1c66f6f5c0fca9bac58add4
size 336043

adaptingbertforcontinuallearningofasequenceofaspectsentimentclassificationtasks/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ac09eca07622f3d305c113cdf790b2ebb42d7fd35e00a9f70bfe3aa56d0e3cb9
size 372746

adaptingcoreferenceresolutionforprocessingviolentdeathnarratives/f03fd56f-b55d-41e6-b8a1-fc58f89bd508_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a1d31d702675c5b9f2f3ef309367a34bdf8dff727b67eace6686dab446d19e25
size 47219

adaptingcoreferenceresolutionforprocessingviolentdeathnarratives/f03fd56f-b55d-41e6-b8a1-fc58f89bd508_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87a1e6a4eb621d5eaf1b015bd9cf3578ff305615d1ca637fb0b41ff59c87e5aa
size 57609