{
"paper_id": "S15-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:37:20.932200Z"
},
"title": "Learning to predict script events from domain-specific text",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": "",
"affiliation": {},
"email": "rudinger@jhu.edu"
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University",
"location": {
"settlement": "Saarbr\u00fccken",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University",
"location": {
"settlement": "Saarbr\u00fccken",
"country": "Germany"
}
},
"email": "amodi@mmci.uni-saarland.de"
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": "",
"affiliation": {},
"email": "vandurme@cs.jhu.edu"
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Saarland University",
"location": {
"settlement": "Saarbr\u00fccken",
"country": "Germany"
}
},
"email": "pinkal@coli.uni-saarland.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The automatic induction of scripts (Schank and Abelson, 1977) has been the focus of many recent works. In this paper, we employ a variety of these methods to learn Schank and Abelson's canonical restaurant script, using a novel dataset of restaurant narratives we have compiled from a website called \"Dinners from Hell.\" Our models learn narrative chains, script-like structures that we evaluate with the \"narrative cloze\" task (Chambers and Jurafsky, 2008).",
"pdf_parse": {
"paper_id": "S15-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "The automatic induction of scripts (Schank and Abelson, 1977) has been the focus of many recent works. In this paper, we employ a variety of these methods to learn Schank and Abelson's canonical restaurant script, using a novel dataset of restaurant narratives we have compiled from a website called \"Dinners from Hell.\" Our models learn narrative chains, script-like structures that we evaluate with the \"narrative cloze\" task (Chambers and Jurafsky, 2008).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A well-known theory from the intersection of psychology and artificial intelligence posits that humans organize certain kinds of general knowledge in the form of scripts, or common sequences of events (Schank and Abelson, 1977) . Though many early AI systems employed hand-encoded scripts, more recent work has attempted to induce scripts with automatic and scalable techniques. In particular, several related techniques approach the problem of script induction as one of learning narrative chains from text corpora (Chambers and Jurafsky, 2008; Chambers and Jurafsky, 2009; Jans et al., 2012; Pichotta and Mooney, 2014) . These statistical approaches have focused on open-domain script acquisition, in which a large number of scripts may be learned, but the acquisition of any particular set of scripts is not guaranteed. For many specialized applications, however, knowledge of a few relevant scripts may be more useful than knowledge of many irrelevant scripts. With this scenario in mind, we attempt to learn the famous \"restaurant script\" (Schank and Abelson, 1977) by applying the aforementioned narrative chain learning methods to a specialized corpus of dinner narratives we compile from the website \"Dinners from Hell.\" Our results suggest that applying these techniques to a domain-specific dataset may be a reasonable way to learn domain-specific scripts.",
"cite_spans": [
{
"start": 201,
"end": 227,
"text": "(Schank and Abelson, 1977)",
"ref_id": "BIBREF12"
},
{
"start": 516,
"end": 545,
"text": "(Chambers and Jurafsky, 2008;",
"ref_id": "BIBREF0"
},
{
"start": 546,
"end": 574,
"text": "Chambers and Jurafsky, 2009;",
"ref_id": "BIBREF1"
},
{
"start": 575,
"end": 593,
"text": "Jans et al., 2012;",
"ref_id": "BIBREF5"
},
{
"start": 594,
"end": 620,
"text": "Pichotta and Mooney, 2014)",
"ref_id": "BIBREF10"
},
{
"start": 1044,
"end": 1070,
"text": "(Schank and Abelson, 1977)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work in the automatic induction of scripts or script-like structures has taken a number of different approaches. Regneri et al. (2010) attempt to learn the structure of specific scripts by eliciting event sequence descriptions (ESDs) from humans to which they apply multiple sequence alignment (MSA) to yield one global structure per script. (Orr et al. (2014) learn similar structures in a probabilistic framework with Hidden Markov Models.) Although Regneri et al. (2010) , like us, are concerned with learning pre-specified scripts, our approach is different in that we apply unsupervised techniques to scenario-specific collections of natural, pre-existing texts.",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "Regneri et al. (2010)",
"ref_id": "BIBREF11"
},
{
"start": 351,
"end": 369,
"text": "(Orr et al. (2014)",
"ref_id": "BIBREF8"
},
{
"start": 461,
"end": 482,
"text": "Regneri et al. (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Note that while the applicability of our approach to script learning may appear limited to domains for which a corpus conveniently already exists, previous work demonstrates the feasibility of assembling such a corpus by automatically retrieving relevant documents from a larger collection. For example, Chambers and Jurafsky (2011) use information retrieval techniques to gather a small number of bombing-related documents from the Gigaword corpus, which they successfully use to learn a MUC-style (Sundheim, 1991) information extraction template for bombing events.",
"cite_spans": [
{
"start": 499,
"end": 515,
"text": "(Sundheim, 1991)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Following the work of Church and Hanks (1990) in learning word associations via mutual information, and the DIRT system introduced by Lin and Pantel (2001) , Chambers and Jurafsky (2008) propose a PMI-based system for learning script-like structures called narrative chains. Several followup papers introduce variations and improvements on this original model for learning narrative chains (Chambers and Jurafsky, 2009; Jans et al., 2012; Pichotta and Mooney, 2014) . It is from this body of work that we borrow techniques to apply to the Dinners from Hell dataset.",
"cite_spans": [
{
"start": 22,
"end": 45,
"text": "Church and Hanks (1990)",
"ref_id": "BIBREF3"
},
{
"start": 134,
"end": 155,
"text": "Lin and Pantel (2001)",
"ref_id": "BIBREF6"
},
{
"start": 158,
"end": 186,
"text": "Chambers and Jurafsky (2008)",
"ref_id": "BIBREF0"
},
{
"start": 390,
"end": 419,
"text": "(Chambers and Jurafsky, 2009;",
"ref_id": "BIBREF1"
},
{
"start": 420,
"end": 438,
"text": "Jans et al., 2012;",
"ref_id": "BIBREF5"
},
{
"start": 439,
"end": 465,
"text": "Pichotta and Mooney, 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "As defined by Chambers and Jurafsky (2008) , a narrative chain is \"a partially ordered set of narrative events that share a common actor,\" where a narrative event is \"a tuple of an event (most simply a verb) and its participants, represented as typed dependencies.\" To learn narrative chains from text, Chambers and Jurafsky extract chains of narrative events linked by a common coreferent within a document. For example, the sentence \"John drove to the store where he bought some ice cream.\" would generate two narrative events corresponding to the protagonist John: (DRIVE, nsubj) followed by (BUY, nsubj). Over these extracted chains of narrative events, pointwise mutual information (PMI) is computed between all pairs of events. These PMI scores are then used to predict missing events from such chains, i.e. the narrative cloze evaluation. Jans et al. (2012) expand on this approach, introducing an ordered PMI model, a bigram probability model, skip n-gram counting methods, coreference chain selection, and an alternative scoring metric (recall at 50). Their bigram probability model outperforms the original PMI model on the narrative cloze task under many conditions. Pichotta and Mooney (2014) introduce an extended notion of narrative event that includes information about subjects and objects. They also introduce a competitive \"unigram model\" as a baseline for the narrative cloze task.",
"cite_spans": [
{
"start": 14,
"end": 42,
"text": "Chambers and Jurafsky (2008)",
"ref_id": "BIBREF0"
},
{
"start": 846,
"end": 864,
"text": "Jans et al. (2012)",
"ref_id": "BIBREF5"
},
{
"start": 1178,
"end": 1204,
"text": "Pichotta and Mooney (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "To learn the restaurant script from our dataset, we implement the models of Chambers and Jurafsky (2008) and Jans et al. (2012) , as well as the unigram baseline of Pichotta and Mooney (2014). To evaluate our success in learning the restaurant script, we perform a modified version of the narrative cloze task, predicting only verbs that we annotate as \"restaurant script-relevant\" and comparing the performance of each model. Note that these annotations are not used for training.",
"cite_spans": [
{
"start": 76,
"end": 104,
"text": "Chambers and Jurafsky (2008)",
"ref_id": "BIBREF0"
},
{
"start": 109,
"end": 127,
"text": "Jans et al. (2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "This section provides an overview of each of the different methods and parameter settings we employ to learn narrative chains from the Dinners from Hell corpus, starting with the original model (Chambers and Jurafsky, 2008) and extending to the modifications of Jans et al. (2012) . As part of this work, we are releasing a program called NaChos, our integrated Python implementation of each of the methods for learning narrative chains described in this section. 1",
"cite_spans": [
{
"start": 194,
"end": 223,
"text": "(Chambers and Jurafsky, 2008)",
"ref_id": "BIBREF0"
},
{
"start": 262,
"end": 280,
"text": "Jans et al. (2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Formally, a narrative event, e := (v, d), is a verb, v, paired with a typed dependency (De Marneffe et al., 2006) , d, defining the role a \"protagonist\" (coreference mention) plays in an event (verb). The main computational component of learning narrative chains in Chambers and Jurafsky's model is to learn the pointwise mutual information for any pair of narrative events:",
"cite_spans": [
{
"start": 91,
"end": 113,
"text": "Marneffe et al., 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Counting methods for PMI",
"sec_num": "3.1"
},
{
"text": "pmi(e_1, e_2) := \\log \\frac{C(e_1, e_2)}{C(e_1, *) C(*, e_2)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counting methods for PMI",
"sec_num": "3.1"
},
{
"text": "where C(e_1, e_2) is the number of times that narrative events e_1 and e_2 \"co-occur\" and C(e, *) := \\sum_{e'} C(e, e')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counting methods for PMI",
"sec_num": "3.1"
},
{
"text": "Chambers and Jurafsky define C(e_1, e_2) as \"the number of times the two events e_1 and e_2 had a coreferring entity filling the values of the dependencies d_1 and d_2.\" This is a symmetric value with respect to e_1 and e_2. We implement the following counting variants:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counting methods for PMI",
"sec_num": "3.1"
},
{
"text": "Skip N-gram By default, C(e_1, e_2) is incremented if e_1 and e_2 occur anywhere within the same chain of events derived from a single coreference chain (skip-all); we also implement an option to restrict the distance between e_1 and e_2 to 0 through 5 intervening events (skip-0 through skip-5). (Jans et al., 2012) Coreference Chain Length The original model counts co-occurrences in all coreference chains; we include Jans et al. (2012)'s option to count over only the longest chains in each document, or to count only over chains of length 5 or greater (long).",
"cite_spans": [
{
"start": 297,
"end": 316,
"text": "(Jans et al., 2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Counting methods for PMI",
"sec_num": "3.1"
},
{
"text": "Count Threshold Because PMI favors low-count events, we add an option to set C(e_1, e_2) to zero for any e_1, e_2 for which C(e_1, e_2) is below some threshold, T, up to 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Counting methods for PMI",
"sec_num": "3.1"
},
{
"text": "In order to perform the narrative cloze task, we need a model for predicting the missing narrative event, e, from a chain of observed narrative events, e_1 . . . e_n , at insertion point k. The original model, proposed by Chambers and Jurafsky (2008) , predicts the event that maximizes unordered pmi,",
"cite_spans": [
{
"start": 222,
"end": 250,
"text": "Chambers and Jurafsky (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Models for Narrative Cloze",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e = \\arg\\max_{e \\in V} \\sum_{i=1}^{n} pmi(e, e_i)",
"eq_num": "(3)"
}
],
"section": "Predictive Models for Narrative Cloze",
"sec_num": "3.2"
},
{
"text": "where V is the set of all observed events (the vocabulary) and C(e_1, e_2) is symmetric. Two additional models are introduced by Jans et al. (2012), and we use them here as well. First, the ordered pmi model,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Models for Narrative Cloze",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e = \\arg\\max_{e \\in V} \\left( \\sum_{i=1}^{k} pmi(e_i, e) + \\sum_{i=k+1}^{n} pmi(e, e_i) \\right)",
"eq_num": "(4)"
}
],
"section": "Predictive Models for Narrative Cloze",
"sec_num": "3.2"
},
{
"text": "where C(e_1, e_2) is asymmetric, i.e., C(e_1, e_2) counts only cases in which e_1 occurs before e_2. Second, the bigram probability model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Models for Narrative Cloze",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e = \\arg\\max_{e \\in V} \\prod_{i=1}^{k} p(e|e_i) \\prod_{i=k+1}^{n} p(e_i|e)",
"eq_num": "(5)"
}
],
"section": "Predictive Models for Narrative Cloze",
"sec_num": "3.2"
},
{
"text": "where p(e_2|e_1) = \\frac{C(e_1, e_2)}{C(e_1, *)} and C(e_1, e_2) is asymmetric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Models for Narrative Cloze",
"sec_num": "3.2"
},
{
"text": "Discounting For each model, we add an option for discounting the computed scores. In the case of the two PMI-based models, we use the discount score described in Pantel and Ravichandran (2004) and used by Chambers and Jurafsky (2008) . For the bigram probability model, this PMI discount score would be inappropriate, so we instead use absolute discounting. Document Threshold We include a document threshold parameter, D, that ensures that, in any narrative cloze test, any event e that was observed during training in fewer than D distinct documents will receive a worse score than (i.e., be ranked behind) any event e' whose count meets the document threshold.",
"cite_spans": [
{
"start": 162,
"end": 192,
"text": "Pantel and Ravichandran (2004)",
"ref_id": "BIBREF9"
},
{
"start": 205,
"end": 233,
"text": "Chambers and Jurafsky (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Models for Narrative Cloze",
"sec_num": "3.2"
},
{
"text": "The source of our data for this experiment is a blog called \"Dinners From Hell\" 2 where readers submit stories about their terrible restaurant experiences. For an example story, see Figure 1 . To process the raw data, we stripped all HTML and other non-story content from each file and processed the remaining text with the Stanford CoreNLP pipeline version 3.3.1 (Manning et al., 2014) . Of the 237 stories obtained, we manually filtered out 94 stories that were \"off-topic\" (e.g., letters to the webmaster, dinners not at restaurants), leaving a total of 143 stories. The average story length is 352 words.",
"cite_spans": [
{
"start": 364,
"end": 386,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 182,
"end": 190,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset: Dinners From Hell",
"sec_num": "4"
},
{
"text": "For the purposes of evaluation only, we hired four undergraduates to annotate every non-copular verb in each story as either corresponding to an event \"related to the experience of eating in a restaurant\" (e.g., ordered a steak), \"unrelated to the experience of eating in a restaurant\" (e.g., answered the phone), or uncertain. We used the WebAnno platform for annotation (Yimam et al., 2013) .",
"cite_spans": [
{
"start": 372,
"end": 392,
"text": "(Yimam et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "4.1"
},
{
"text": "A total of 8,202 verb tokens were annotated, each by three annotators. 70.3% of verbs annotated achieved 3-way agreement; 99.4% had at least 2-way agreement. After merging the annotations (simple majority vote), 30.7% of verbs were labeled as restaurant-script-related, 68.6% were labeled as restaurant-script-unrelated, and the remaining 0.7% as uncertain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "4.1"
},
{
"text": "Corresponding to the 8,202 annotated verb tokens, there are 1,481 narrative events at the type level. 580 of these narrative event types were annotated as script-relevant in at least one token instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation",
"sec_num": "4.1"
},
{
"text": "\"A long time ago when I was still in college, my family decided to take me out for pizza on my birthday. We decided to try the new location for a favorite pizza chain of ours. It was all adults and there were about 8 of us, so we ordered 3 large pizzas. We got to chatting and soon realized that the pizzas should've been ready quite a bit ago, so we called the waitress over and she went to check on our pizzas. She did not come back. We waited about another 10 minutes, then called over another waitress, who went to check on our pizzas and waitress. It now been over an hour. About 10 minutes later, my Dad goes up to the check-out and asks the girl there to send the manager to our table. A few minutes later the manager comes out. He explains to us that our pizzas got stuck in the oven and burned. They were out of large pizza dough bread, so they were making us 6 medium pizzas for the price of 3 large pizzas. We had so many [pizzas] on our table we barely had [room] to eat! Luckily my family is pretty easy going so we just laughed about the whole thing. We did tell the manager that it would have been nice if someone, anyone, had said something earlier to us, instead of just disappearing, and he agreed. He even said it was his responsibility, but that he had been busy trying to fix what caused the pizzas to jam up in the oven. He went so far as to give us 1/2 off our bill, which was really nice. It was definitely a memorable birthday!\" Figure 1 : Example story from Dinners from Hell corpus. Bold words indicate events in the \"we\" coreference chain (the longest chain). Boxed words (blue) indicate best narrative chain of length three (see Section 5.2); underlined words (orange) are corresponding subjects and bracketed words (green) are corresponding objects.",
"cite_spans": [],
"ref_spans": [
{
"start": 1454,
"end": 1462,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation",
"sec_num": "4.1"
},
{
"text": "We evaluate the various models on the narrative cloze task. What is different about our version of the narrative cloze task here is that we limit the cloze tests to only \"interesting\" events, i.e., those that have been identified as relevant to the restaurant script by our annotators (see Section 4.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Narrative Cloze",
"sec_num": "5.1"
},
{
"text": "Because our dataset is small (143 documents), we perform leave-one-out testing at the document level, training on 133 folds total. (Ten documents are excluded for a development set.) For each fold of training, we extract all of the narrative chains (mapped directly from coreference chains) in the held out test document. For each test chain, we generate one narrative cloze test per \"script-relevant\" event in that chain. For example, if a chain contains ten events, three of which are \"script-relevant,\" then three cloze tests will be generated, each containing nine \"observed\" events. Chains with fewer than two events are excluded. In this way, we generate a total of 2,273 cloze tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Narrative Cloze",
"sec_num": "5.1"
},
{
"text": "We employ three different scoring metrics: average rank (Chambers and Jurafsky, 2008) , mean reciprocal rank, and recall at 50 (Jans et al., 2012) .",
"cite_spans": [
{
"start": 56,
"end": 85,
"text": "(Chambers and Jurafsky, 2008)",
"ref_id": "BIBREF0"
},
{
"start": 127,
"end": 146,
"text": "(Jans et al., 2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": null
},
{
"text": "Baseline The baseline we use for the narrative cloze task is to rank events by frequency. This is the \"unigram model\" employed by Pichotta and Mooney (2014) , a competitive baseline on this task.",
"cite_spans": [
{
"start": 130,
"end": 156,
"text": "Pichotta and Mooney (2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": null
},
{
"text": "For each model and scoring metric, we perform a complete grid search over all possible parameter settings to find the best-scoring combination on cloze tests from a set-aside development set of ten documents. The parameter space is defined as the Cartesian product of each of the following possible parameter values: skip-n (all,0-5), coreference chain length (all, long, longest), count threshold (T=1-5), document threshold (D=1-5), and discounting (yes/no). Bigram probability with and without discounting are treated as two separate models. Figure 2 reports the results of the narrative cloze evaluation. Each of the four models (unordered pmi, ordered pmi, bigram, and bigram with discounting) outperforms the baseline on the average rank metric when the parameters are optimized for that metric. Both bigram models beat the baseline on mean reciprocal rank not only for MRR-optimized parameter settings, but for the average-rank- and recall-at-50-optimized settings. None of the parameter settings are able to outperform the baseline on recall at 50, though both PMI models tie the baseline. Overall, the model that performs the best is the bigram probability model with discounting (row 12 of Figure 2 ), which has the following parameter settings: skip-all, coref-all, T=1, and D=5.",
"cite_spans": [],
"ref_spans": [
{
"start": 545,
"end": 553,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1200,
"end": 1208,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Scoring",
"sec_num": null
},
{
"text": "The fact that several model settings outperform an informed baseline on average rank and mean reciprocal rank indicates that these methods may in general be applicable to smaller, domain-specific corpora. Furthermore, it is apparent from the results that the bigram probability models perform better overall than PMI-based models, a finding also reported in Jans et al. (2012) . This replication is further evidence that these methods do in fact transfer.",
"cite_spans": [
{
"start": 358,
"end": 376,
"text": "Jans et al. (2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": null
},
{
"text": "To get a qualitative sense of the narrative events these models are learning to associate from this data, we use the conditional probabilities learned in the bigram model (Fig 2, row 12) to select the highest probability narrative chain of length three out of the 12 possible events in the \"we\" coreference chain in Figure 1 (bolded) . The three events selected are boxed and highlighted in blue. The bigram model selects the \"deciding\" event (selecting restaurant) and the \"having\" event (having pizza), both reasonable components of the restaurant script. The third event selected is \"having room,\" which is not part of the restaurant script. This mistake illustrates a weakness of the narrative chains model; without considering the verb's object, the model is unable to distinguish \"have pizza\" from \"have room.\" Incorporating object information in future experiments, as in Pichotta and Mooney (2014), might resolve this issue, although it could introduce sparsity problems.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 186,
"text": "(Fig 2, row 12)",
"ref_id": null
},
{
"start": 316,
"end": 333,
"text": "Figure 1 (bolded)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Example",
"sec_num": "5.2"
},
{
"text": "In this work, we describe the collection and annotation of a corpus of natural descriptions of restaurant visits from the website \"Dinners from Hell.\" We use this dataset in an attempt to learn the restaurant script, using a variety of related methods for learning narrative chains and evaluating on the narrative cloze task. Our results suggest that it may be possible in general to use these methods on domain-specific corpora in order to learn particular scripts from a pre-specified domain, although further experiments in other domains would help bolster this conclusion. In principle, a domain-specific corpus need not come from a website like Dinners from Hell; it could instead be sub-sampled from a larger corpus, retrieved from the web, or directly elicited. Our domain-specific approach to script learning is potentially useful for specialized NLP applications that require knowledge of only a particular set of scripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "One feature of the Dinners from Hell corpus that bears further inspection in future work is the fact that its stories contain many violations of the restaurant script. A question to investigate is whether these violations impact how the restaurant script is learned. Other avenues for future work include incorporating object information into event representations and applying domain adaptation techniques in order to leverage larger general-domain corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "www.clsp.jhu.edu/people/rachel-rudinger",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.dinnersfromhell.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was sponsored in part by the NSF under grant 0530118 (PIRE) and with additional support from the Allen Institute for Artificial Intelligence (AI2). The authors would also like to thank Michaela Regneri, Annemarie Friedrich, Stefan Thater, Alexis Palmer, Andrea Horbach, Diana Steffen, Asad Sayeed, and Frank Ferraro for their insightful contributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised learning of narrative event chains",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "789--797",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2008. Unsu- pervised learning of narrative event chains. In Pro- ceedings of ACL-08: HLT, pages 789-797, Columbus, Ohio. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised learning of narrative schemas and their participants",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2009. Unsuper- vised learning of narrative schemas and their partici- pants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th Interna- tional Joint Conference on Natural Language Process- ing of the AFNLP, pages 602-610, Suntec, Singapore. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Templatebased information extraction without the templates",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "976--986",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers and Dan Jurafsky. 2011. Template- based information extraction without the templates. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 976-986, Portland, Ore- gon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicogra- phy. Computational linguistics, 16(1):22-29.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine De",
"middle": [],
"last": "Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of LREC",
"volume": "6",
"issue": "",
"pages": "449--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine De Marneffe, Bill MacCartney, Christo- pher D Manning, et al. 2006. Generating typed de- pendency parses from phrase structure parses. In Pro- ceedings of LREC, volume 6, pages 449-454.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Skip n-grams and ranking functions for predicting script events",
"authors": [
{
"first": "Bram",
"middle": [],
"last": "Jans",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "336--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bram Jans, Steven Bethard, Ivan Vuli\u0107, and Marie- Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 336-344, Avignon, France. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dirt -discovery of inference rules from text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. Dirt -discov- ery of inference rules from text. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 323- 328. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meet- ing of the Association for Computational Linguistics: System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning scripts as hidden markov models",
"authors": [
{
"first": "J Walker",
"middle": [],
"last": "Orr",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Tadepalli",
"suffix": ""
},
{
"first": "Janardhan",
"middle": [],
"last": "Rao Doppa",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [],
"last": "Fern",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
}
],
"year": 2014,
"venue": "Twenty-Eighth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Walker Orr, Prasad Tadepalli, Janardhan Rao Doppa, Xiaoli Fern, and Thomas G Dietterich. 2014. Learn- ing scripts as hidden markov models. In Twenty- Eighth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatically labeling semantic classes",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Deepak",
"middle": [],
"last": "Ravichandran",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004: Main Proceedings",
"volume": "",
"issue": "",
"pages": "321--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel and Deepak Ravichandran. 2004. Auto- matically labeling semantic classes. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT- NAACL 2004: Main Proceedings, pages 321-328, Boston, Massachusetts, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical script learning with multi-argument events",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Pichotta",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "220--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Pichotta and Raymond Mooney. 2014. Statisti- cal script learning with multi-argument events. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguis- tics, pages 220-229, Gothenburg, Sweden. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning script knowledge with web experiments",
"authors": [
{
"first": "Michaela",
"middle": [],
"last": "Regneri",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "979--988",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguistics, pages 979-988, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Scripts, plans, goals and understanding: An inquiry into human knowledge structures",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Schank",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Abelson",
"suffix": ""
}
],
"year": 1977,
"venue": "Lawrence Erlbaum Associates",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Schank and Robert Abelson. 1977. Scripts, plans, goals and understanding: An inquiry into hu- man knowledge structures. Lawrence Erlbaum Asso- ciates, Hillsdale, NJ.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Third message understanding evaluation and conference (muc-3): Phase 1 status report",
"authors": [
{
"first": "Beth",
"middle": [
"M"
],
"last": "Sundheim",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Message Understanding Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beth M. Sundheim. 1991. Third message understand- ing evaluation and conference (muc-3): Phase 1 status report. In Proceedings of the Message Understanding Conference.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Webanno: A flexible, web-based and visually supported system for distributed annotations",
"authors": [
{
"first": "Seid",
"middle": [
"Muhie"
],
"last": "Yimam",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Eckart De Castilho",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seid Muhie Yimam, Iryna Gurevych, Richard Eckart de Castilho, and Chris Biemann. 2013. Webanno: A flexible, web-based and visually supported system for distributed annotations. In Proceedings of the 51st",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 1-6, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Parameter settings corresponding to each model inFig 2.",
"num": null
}
}
}
}