{
"paper_id": "2015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:59:52.483937Z"
},
"title": "Linguistic Issues in Language Technology -LiLT",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Ohio State University",
"location": {}
},
"email": "melsner@ling.ohio-state.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Since the 18th century, the novel has been one of the defining forms of English writing, a mainstay of popular entertainment and academic criticism. Despite its importance, however, there are few computational studies of the large-scale structure of novels-and many popular representations for discourse modeling do not work very well for novelistic texts. This paper describes a high-level representation of plot structure which tracks the frequency of mentions of different characters, topics and emotional words over time. The representation can distinguish with high accuracy between real novels and artificially permuted surrogates; characters are important for eliminating random permutations, while topics are effective at distinguishing beginnings from ends.",
"pdf_parse": {
"paper_id": "2015",
"_pdf_hash": "",
"abstract": [
{
"text": "Since the 18th century, the novel has been one of the defining forms of English writing, a mainstay of popular entertainment and academic criticism. Despite its importance, however, there are few computational studies of the large-scale structure of novels-and many popular representations for discourse modeling do not work very well for novelistic texts. This paper describes a high-level representation of plot structure which tracks the frequency of mentions of different characters, topics and emotional words over time. The representation can distinguish with high accuracy between real novels and artificially permuted surrogates; characters are important for eliminating random permutations, while topics are effective at distinguishing beginnings from ends.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The novel, one of the characteristic forms of modern English literature, poses several interesting challenges from the point of view of computational analysis. Some of these have to do with the sheer size of a novel. Unlike a newspaper or encyclopedia article, novels routinely stretch to tens or hundreds of thousands of words, covering complex sequences of interlocking characters and events. Other challenges are representational. It is clear that not all descriptions of sequences of events make for acceptable novelistic plots, but literary theorists have taken a variety of perspectives on what the defining characteristics of plot structure actually are.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper attempts to represent \"plot structure\" at a high level, abstracting away from any particular events. The aim is to capture basic concepts such as \"protagonist\" or \"happy ending\" in ways that apply across a broad range of texts. Whenever possible, the representation is constructed from lexical distributions rather than from (possibly error-prone) analyses by complex NLP tools. This representation is used to create models capable of distinguishing real novels from artificially disordered texts for which plot structure is missing or incomprehensible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This approach is motivated by the inadequate performance of current general-purpose NLP systems on novelistic text. For instance, general-purpose extractive summarizers fail to find suitable sentences (Kazantseva and Szpakowicz, 2010) . The output of Microsoft Word 2008's built-in summarizer on Pride and Prejudice is shown in Figure 1 . \"Bingley.\" Elizabeth felt Jane's pleasure. \"Miss Elizabeth Bennet.\" Elizabeth looked surprised. \"FITZWILLIAM DARCY\" Elizabeth was delighted. Elizabeth read on: Elizabeth smiled. \"If! \"Dearest Jane! FIGURE 1 Output of the built-in summarizer in Microsoft Word 2008, run on the full text of Pride and Prejudice; quoted by Huff (2010) .",
"cite_spans": [
{
"start": 200,
"end": 233,
"text": "(Kazantseva and Szpakowicz, 2010)",
"ref_id": "BIBREF31"
},
{
"start": 654,
"end": 665,
"text": "Huff (2010)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Part of the problem is that counting unigrams at the document level identifies character names, rather than themes or plot elements, as the most important content of Pride and Prejudice. Another issue is the extractive assumption. Not only has this summary selected the wrong sentences, but the novel may not contain a good set of \"topic sentences\" to begin with. Thus, rather than asking whether it is possible to summarize Pride and Prejudice by drawing specific sentences from within the text that say \"what the story is about\", summarizers may be better off representing its structure in terms of similarities to other works. For instance, one can say Pride and Prejudice is a romance. This ties it to a variety of other books (including Jane Austen's other works) and sets up some basic expectations in the mind of a potential reader: the story will focus on a protagonist (probably female) who falls in love and eventually gets married. The goal of this paper is to construct an automatic function for measuring similarity between novels, as progress toward eventual summarization, search and recommendation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While this study focuses on public-domain works from the 19th and early 20th centuries, these representational strategies will hopefully continue to be useful on modern texts from the Western novelistic tradition. Modern writers continue to produce vast amounts of fictional text. That includes amateur authors whose output is never formally published or curated; in 2010, over 200,000 people participated in NaNoWriMo (National Novel Writing Month). A system able to provide summaries or other representations of these texts could aid in making them more accessible to prospective readers as well as to academics such as sociolinguists or narratologists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The system represents a novel as a set of trajectories which describe the frequencies of various kinds of language over time. Two novels are defined as similar if their trajectories for various features tend to look alike. Several options are available for constructing such trajectories. Either the entire novel can have a single representation, or there can be features for the language associated with individual characters within it. And there are multiple possibilities for linguistic features to track. This paper describes representations based on sentiment words, the frequencies of mentions of the characters themselves, and word clusters created by Latent Dirichlet Allocation (henceforth, LDA; Blei et al., 2001) . LDA groups related words, such as \"song\" and \"melody\", because they frequently occur together in the same works.",
"cite_spans": [
{
"start": 705,
"end": 723,
"text": "Blei et al., 2001)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
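The trajectory construction described above can be made concrete with a short sketch. This is a minimal illustration, not the paper's actual preprocessing: the windowing scheme, the fixed name list, and the toy token stream are all assumptions for demonstration.

```python
from collections import Counter

def mention_trajectory(tokens, names, n_windows=10):
    """Split a token list into equal-width windows and record, per window,
    the relative frequency of mentions of each tracked name."""
    size = max(1, len(tokens) // n_windows)
    traj = {name: [] for name in names}
    for i in range(n_windows):
        window = tokens[i * size:(i + 1) * size]
        counts = Counter(window)
        for name in names:
            traj[name].append(counts[name] / max(1, len(window)))
    return traj

# Toy text: "elizabeth" dominates the first half, "darcy" the second.
tokens = ["elizabeth"] * 6 + ["walk"] * 4 + ["darcy"] * 6 + ["ball"] * 4
traj = mention_trajectory(tokens, ["elizabeth", "darcy"], n_windows=2)
```

Two novels can then be compared by comparing these per-feature curves, which is the basic object the kernels below operate on.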
{
"text": "Some literary scholars claim that sentiment is central to the highlevel construction of plots. Crane (2002) writes that effective plots cause us to \"wish good [for some of the characters], for others ill, and depending on our inferences as to the events, we feel hope or fear... or similar emotions\". Phelan and Rabinowitz (2012) extend this approach to a rhetorical theory of narrative which analyzes stories by examining the kinds of emotional and moral judgements which the author intends to elicit from the audience. On the other hand, LDA induces word clusters directly from the corpus and can therefore learn features which a general-purpose lexicon could not. The experiments discussed in this paper evaluate the strengths and weaknesses of each approach.",
"cite_spans": [
{
"start": 95,
"end": 107,
"text": "Crane (2002)",
"ref_id": "BIBREF17"
},
{
"start": 301,
"end": 329,
"text": "Phelan and Rabinowitz (2012)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These representational choices are evaluated through the construction of a similarity function, or kernel, which measures how alike two novels are in a given feature representation. Representation quality can then be compared by drawing on an existing tradition of discourse coherence evaluation using artificial reordering experiments (Karamanis et al., 2004) . In these experiments, the kernel is used to distinguish novels that have been randomly permuted, or reversed, chapter-by-chapter, from the originals. The proposed representation and experimental setup implicitly assume that novels share a common sequential structure-an emotional/rhetorical structure for systems using sentiment features or a chronological sequence for LDA. Deviations from these structural principles are a common way to create suspense or draw attention to the narrative as an artifact. In a 19th-century example, the long flashback in The Tenant of Wildfell Hall breaks the chronological sequence to reveal the details of a character's mysterious past. Post-modern novels often involve even more complex structures, such as intertwined or nested narratives (for instance, David Foster Wallace's Infinite Jest or Milorad Pavi\u0107's Dictionary of the Khazars). Computational linguists have devoted some effort to representing such texts (Ryan, 1991) and disentangling their narrative threads (Wallace, 2012) . The representation proposed here is unlikely to work well with texts which violate its core assumption of a sequential structure, but might still be helpful in representing single threads extracted by such a technique.",
"cite_spans": [
{
"start": 337,
"end": 361,
"text": "(Karamanis et al., 2004)",
"ref_id": "BIBREF30"
},
{
"start": 1316,
"end": 1328,
"text": "(Ryan, 1991)",
"ref_id": "BIBREF51"
},
{
"start": 1371,
"end": 1386,
"text": "(Wallace, 2012)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The experimental results indicate that character-based systems and single-trajectory systems which represent the whole novel at once have complementary strengths. Character frequency is most effective for distinguishing randomly permuted novels from originals, while a single trajectory of LDA features is most effective for reversals. This implies that tracking the course of events (as in many previous models) is indeed important in making sure the plot sequence points in the correct direction, from beginning to end. For instance, the rise in suspense or exciting sentiment at the end of a mystery or adventure story is a property of the story as a whole, not an emotion felt only by the protagonist. It is also important, however, to track the kinds of characters who appear in a work, and so help distinguish a coherent plot structure from a randomized one; minor characters marry throughout Jane Austen's novels, but the protagonist only gets married at the end.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper builds on previous work (Elsner, 2012) , which proposed the initial building blocks of the representation used here, but expands on it in several ways. First, this work tracks a wider range of lexical features, using a more sophisticated sentiment lexicon as well as LDA topics. Second, when comparing characters from one novel to characters from another, creating an explicit one-to-one map of corresponding characters (symmetrization) improves upon the previous technique of comparing all pairs. The results, especially the use of symmetrization, are substantially better than those reported in (Elsner, 2012) ; the best (one-neighbor) classification of randomly permuted novels increases from 62% to 82% and of reversals from 52% to 89%. Further comparisons are given in subsection 6.2.",
"cite_spans": [
{
"start": 35,
"end": 49,
"text": "(Elsner, 2012)",
"ref_id": "BIBREF18"
},
{
"start": 609,
"end": 623,
"text": "(Elsner, 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous attempts to represent plot structure automatically have largely focused on understanding stories sentence-by-sentence. A few have attempted to construct high-level representations as in this paper; many of these have settled on characters as an important representational primitive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 High-level representations",
"sec_num": "2"
},
{
"text": "The work by Coll Ardanuy and Sporleder (2014) is the closest to what is presented here. They too use both static and time-varying character similarity features to represent texts as graphs with characters as nodes. They use these graphs to define a text similarity metric, and cluster texts by author and genre. Unlike this work, however, they gather features over the network as a whole (for instance, the proportion of male characters) rather than trying to define text similarity by building up from character similarities. Thus, the feature sets considered in their work are quite different from those used here. Other researchers also build social networks for characters in novels, which they use to evaluate several questions in literary theory. As in this work, they begin by identifying characters using coreference resolution on mentions. They construct the social network based on conversation structure (O'Keefe et al., 2012) ; the experiments here use the simpler, but less precise, heuristic of co-frequency counts. Their work, however, does not use time-varying features, but collapses the novel over time, producing a static picture of a dynamic system. Bamman et al. (2014) use a mixed-effects model to infer latent character types from the text of a large set of novels. Like the social-network studies above, they have a feature set that is not time-varying. They build upon previous work by Bamman et al. (2013) who use a Bayesian model to learn a set of character roles such as protagonist, love interest and best friend for movie characters, but using metadata rather than scripts directly. They evaluate against a set of preregistered hypotheses. Alm and Sproat (2005) , on the other hand, produce an explicitly temporal structure, a time-varying curve of emotional language over time, which they call an emotional trajectory. Alm and Sproat (2005) rely on hand-annotation of the trajectory. They produce a single trajectory per story (or several stories). 
Subsequent work on sentiment analysis of fiction and literary texts has retained the single-trajectory assumption, while focusing on enriching the set of sentiments used by the system and improving techniques for detecting them. Volkova et al. (2010) , for instance, describe a protocol for hand-annotation of sentiment in fairy tales which allows non-experts to achieve high agreement.",
"cite_spans": [
{
"start": 17,
"end": 45,
"text": "Ardanuy and Sporleder (2014)",
"ref_id": "BIBREF16"
},
{
"start": 897,
"end": 918,
"text": "(O'Keefe et al., 2012",
"ref_id": "BIBREF46"
},
{
"start": 1151,
"end": 1171,
"text": "Bamman et al. (2014)",
"ref_id": "BIBREF6"
},
{
"start": 1360,
"end": 1380,
"text": "Bamman et al. (2013)",
"ref_id": "BIBREF5"
},
{
"start": 1619,
"end": 1640,
"text": "Alm and Sproat (2005)",
"ref_id": "BIBREF3"
},
{
"start": 1799,
"end": 1820,
"text": "Alm and Sproat (2005)",
"ref_id": "BIBREF3"
},
{
"start": 2158,
"end": 2179,
"text": "Volkova et al. (2010)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 High-level representations",
"sec_num": "2"
},
{
"text": "Mohammad and Turney (2010) construct a large emotional lexicon using Mechanical Turk crowd-sourcing, which is used in the systems presented here. In Mohammad (2011, 2012), it is used to construct emotional trajectories for a few literary works such as Hamlet, but, apart from a broad corpus-level comparison between novels and fairy tales, no effort is made to evaluate these systematically. Ang (2012) creates trajectory plots using Mohammad and Turney's (2010) emotional categories, and evaluates them as part of a toolkit for writers. In a survey, writers find the tool interesting, although they point out that its inferred sentiments can be inaccurate due to negation and other effects of context. This paper evaluates both traditional trajectory systems which produce a single trajectory per work, and multiple trajectories, one per character. It finds that both can be effective at capturing different aspects of plot structure.",
"cite_spans": [
{
"start": 149,
"end": 163,
"text": "Mohammad (2011",
"ref_id": "BIBREF42"
},
{
"start": 164,
"end": 181,
"text": "Mohammad ( , 2012",
"ref_id": "BIBREF45"
},
{
"start": 446,
"end": 474,
"text": "Mohammad and Turney's (2010)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 High-level representations",
"sec_num": "2"
},
{
"text": "An alternative to tracking sentiment distributions is tracking topics: words which occur together in context. LDA (Blei et al., 2001) , the now-standard topic model, has been used in a variety of analyses in the digital humanities (Salway and Herman, 2011) . LDA groups the words of a corpus into semantic \"topics\" by considering the set of documents in which they co-occur. The vector of topic frequency counts within a document can then be used as a coarse-grained approximation of its content.",
"cite_spans": [
{
"start": 112,
"end": 131,
"text": "(Blei et al., 2001)",
"ref_id": "BIBREF10"
},
{
"start": 229,
"end": 254,
"text": "(Salway and Herman, 2011)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 High-level representations",
"sec_num": "2"
},
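Given any assignment of words to topics (from LDA or otherwise), the coarse topic-frequency representation extends naturally to a per-window trajectory. The sketch below uses a hand-built two-topic lexicon as a stand-in for learned LDA topics; real LDA assigns topics per token in context rather than per word type.

```python
def topic_trajectory(tokens, topic_of, n_topics, n_windows=4):
    """Per-window relative frequency of each topic, given a word->topic map.
    Words outside the map are ignored (a simplification of per-token LDA)."""
    size = max(1, len(tokens) // n_windows)
    traj = [[0.0] * n_topics for _ in range(n_windows)]
    for i in range(n_windows):
        window = tokens[i * size:(i + 1) * size]
        hits = [topic_of[w] for w in window if w in topic_of]
        for t in hits:
            traj[i][t] += 1.0 / max(1, len(hits))
    return traj

# Toy lexicon: topic 0 = music, topic 1 = combat (invented for illustration).
topic_of = {"song": 0, "melody": 0, "sword": 1, "battle": 1}
tokens = ["song", "melody", "the", "sword", "battle", "the"]
traj = topic_trajectory(tokens, topic_of, n_topics=2, n_windows=2)
```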
{
"text": "Although the frequencies of LDA topics naturally vary throughout the course of a long text, LDA does not directly model temporal variation. Subsequent work proposes a variety of Bayesian topic models which make more sophisticated use of document metadata, including sequence ordering (Blei and Lafferty, 2006, Kim and Sudderth, 2011) . These papers tend to evaluate using a combination of held-out likelihood and eyeballing, often on corpora of scientific journal articles-an approach criticized by Chang et al. (2009) and Mimno and Blei (2011) .",
"cite_spans": [
{
"start": 284,
"end": 307,
"text": "Lafferty, 2006, Kim and",
"ref_id": null
},
{
"start": 308,
"end": 323,
"text": "Sudderth, 2011)",
"ref_id": "BIBREF32"
},
{
"start": 486,
"end": 505,
"text": "Chang et al. (2009)",
"ref_id": "BIBREF14"
},
{
"start": 510,
"end": 531,
"text": "Mimno and Blei (2011)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 High-level representations",
"sec_num": "2"
},
{
"text": "FIGURE 2 Sample output: \"The emperor rules the kingdom. The kingdom holds on to the emperor. The emperor rides out of the kingdom. The kingdom speaks out against the emperor. The emperor lies.\" How these representations compare to sentiment or other features on novelistic text is an open question. This paper uses the simpler method of running standard LDA and measuring temporal patterns on the resulting topics. While more sophisticated methods do have published implementations, they tend to be slower, less stable and less scalable than LDA. Since the results in this paper show that topic model features are useful in capturing a global beginning-to-end temporal structure, evaluating these more complex topic models is a promising direction for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 High-level representations",
"sec_num": "2"
},
{
"text": "In contrast to these abstract representations are models that stay closer to the level of specific events extracted from sentences in the text. Such models have usually been applied to shorter fictional narratives such as fables. Many of these follow from narrative schema extraction (Chambers and Jurafsky, 2009), which attempts to learn representations for events in news stories. These representations are similar to Schankian scripts (Schank and Abelson, 1977) ; they are networks of events in temporal sequence, with slots for specific actors. For instance, \"terrorist attacks target\" can be followed by \"terrorist being arrested\".",
"cite_spans": [
{
"start": 438,
"end": 464,
"text": "(Schank and Abelson, 1977)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event-based representations",
"sec_num": "2.2"
},
{
"text": "Similar representations for fiction were investigated by McIntyre and Lapata (2009) , who use them to generate short fables. In later work (McIntyre and Lapata, 2010), they add a coherence component to ensure smooth sentence-to-sentence transitions. Figure 2 shows a sample output. The lack of global \"plot\" structure is an important shortcoming of this work; while the generated stories describe reasonable sequences of events, the stories do not seem to raise or resolve any central conflict.",
"cite_spans": [
{
"start": 57,
"end": 83,
"text": "McIntyre and Lapata (2009)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 250,
"end": 258,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Event-based representations",
"sec_num": "2.2"
},
{
"text": "Similar narrative models are considered by Li et al. (2012) , who learn event networks from a corpus of short texts elicited through crowdsourcing. These texts focus on specific events such as bank robberies or dates. Importantly, although the goal of this work is to learn knowledge that may be useful in constructing or understanding narratives, the elicited texts themselves are not fictional narratives intended to entertain but simple descriptions; thus the models capture sequences of events without necessarily producing a satisfying plot structure. Finlayson (2009) also produces event network models, following a Proppian structuralist analysis of story structure (Propp, 1968) . It has been evaluated against hand-annotated Russian folktales and a small corpus of Shakespeare's plays.",
"cite_spans": [
{
"start": 43,
"end": 59,
"text": "Li et al. (2012)",
"ref_id": "BIBREF34"
},
{
"start": 557,
"end": 573,
"text": "Finlayson (2009)",
"ref_id": "BIBREF23"
},
{
"start": 673,
"end": 686,
"text": "(Propp, 1968)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event-based representations",
"sec_num": "2.2"
},
{
"text": "Early work by Lehnert (1981) proposes a model which represents emotion and goal information alongside events. This plot-unit model treats positive and negative sentiment as primitives, and builds up a plot as a set of actions which result from, and cause, characters to feel good or bad. For instance, a retaliation has the form: \"Because Y's [action] caused a [negative state] for X, X [acted] to cause a [negative state] for Y\". Early implementations relied heavily on hand-engineered domain knowledge, and thus did not generalize well. AESOP (Goyal et al., 2010) attempts to modernize this representation by learning which verbs cause positive or negative states automatically. A similar goal-based representation is learned in O'Neill and Riedl's (2011) work, which performs story generation by abductive plan inference. These representations are sophisticated, but brittle; AESOP's accuracy is poor even for short fables, and the authors conclude that the approach is unlikely to be scalable. present Scheherezade, a system for human annotation of AESOP-like goal structures; a small annotated corpus is available . However, extending this expensive hands-on annotation task to novelistic texts is likely to be prohibitively time-consuming. Kazantseva and Szpakowicz (2010) , while relying on sentence-based information, is somewhat different in its objectives. Rather than modeling plot structure, it attempts to produce \"spoiler-free\" summaries which exclude plot detail while incorporating character and setting description, using features such as stative verbs (\"stand, know, be located\") to find appropriate sentences. They analyze the performance of summarization systems on fictional text and find that conventional systems are deeply inadequate; this serves as further motivation for our own work.",
"cite_spans": [
{
"start": 14,
"end": 28,
"text": "Lehnert (1981)",
"ref_id": "BIBREF33"
},
{
"start": 545,
"end": 565,
"text": "(Goyal et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 1246,
"end": 1278,
"text": "Kazantseva and Szpakowicz (2010)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Event-based representations",
"sec_num": "2.2"
},
{
"text": "The systems presented in this paper are kernel functions, a standard tool in machine learning for feature-based classification and regression (Bishop, 2006, chapter 6) . A kernel is a similarity function k(X, Y ) \u2265 0, with 0 representing minimal similarity. A valid kernel represents an inner product in some feature space \u03c6, so that k(X, Y ) = \u03c6(X) \u2022 \u03c6(Y ). The linear kernel takes \u03c6 as the identity and is simply a dot product.",
"cite_spans": [
{
"start": 142,
"end": 167,
"text": "(Bishop, 2006, chapter 6)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
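For concreteness, the linear kernel mentioned above is simply a dot product of explicit feature vectors. This is a generic sketch of the definition, not code from the paper:

```python
def linear_kernel(x, y):
    """k(x, y) = x . y: the inner product with phi taken as the identity."""
    return sum(a * b for a, b in zip(x, y))

# Two small feature vectors; higher score = more similar in this feature space.
score = linear_kernel([1.0, 2.0], [3.0, 0.5])
```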
{
"text": "More complex kernels have been proposed for the case where X and Y are structured objects such as graphs (Vishwanathan et al., 2010) . A common method of extending a simple kernel to an object with many substructures is the convolution kernel (Haussler, 1999) , which applies the simple kernel to all pairs of substructures and sums the result.",
"cite_spans": [
{
"start": 105,
"end": 132,
"text": "(Vishwanathan et al., 2010)",
"ref_id": "BIBREF55"
},
{
"start": 264,
"end": 280,
"text": "(Haussler, 1999)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K(X, Y) = \\sum_{u \\in X} \\sum_{v \\in Y} k(u, v)",
"eq_num": "(1)"
}
],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
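Equation (1) translates directly into code: sum the base kernel over every pair of substructures, one drawn from each object. A minimal sketch, with a dot product standing in for the base kernel k:

```python
def convolution_kernel(X, Y, k):
    """K(X, Y) = sum over u in X, v in Y of k(u, v): the convolution
    kernel of Equation (1), applied to lists of substructures."""
    return sum(k(u, v) for u in X for v in Y)

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
X = [(1.0, 0.0), (0.0, 1.0)]   # substructures of one object
Y = [(1.0, 1.0)]               # substructures of another
score = convolution_kernel(X, Y, dot)
```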
{
"text": "Other methods have been proposed, although not all of them result in theoretically valid kernels. The approach here follows the work of Boughorbel et al. (2004) in computer vision, who compute a mapping between X and Y and only evaluate k(u, v) for matched pairs:",
"cite_spans": [
{
"start": 136,
"end": 160,
"text": "Boughorbel et al. (2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k_{\\mathrm{match}}(X, Y) = \\max_{\\mathrm{matching}\\ F: u \\leftrightarrow v} \\sum_{u \\in X} k(u, F(u))",
"eq_num": "(2)"
}
],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
{
"text": "Such a function can no longer be represented as a linear product, since it includes a maximization operator. However, it can still be used to provide positive similarities. Because these functions remain similar to kernels in practice, the paper will continue to describe them as such.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
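Equation (2) can be computed exactly for small sets by brute force over one-to-one matchings. This is an illustrative stand-in (practical implementations would use an assignment algorithm rather than enumerating permutations), and the character labels are invented:

```python
from itertools import permutations

def matching_kernel(X, Y, k):
    """k_match: score of the best one-to-one matching of X's elements
    onto Y's, summing k over matched pairs. Exact only for small inputs."""
    if len(X) > len(Y):                       # match the smaller set into the larger
        X, Y = Y, X
        k = (lambda f: (lambda u, v: f(v, u)))(k)
    best = 0.0
    for perm in permutations(range(len(Y)), len(X)):
        best = max(best, sum(k(X[i], Y[j]) for i, j in enumerate(perm)))
    return best

sim = lambda u, v: 1.0 if u == v else 0.0     # toy base kernel on labels
score = matching_kernel(["hero", "villain"], ["villain", "hero", "friend"], sim)
```

Unlike the convolution kernel, each element contributes through at most one partner, which is what makes the maximization (and hence the loss of strict kernel validity) necessary.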
{
"text": "Additional intuition for this approach comes from machine translation (MT), where the use of a one-to-one mapping in word alignment is known as symmetrization. Symmetrization is a common method for improving word alignments. Suppose that an alignment between two sentences indicates that word u translates as v; when aligning in reverse, v should become u again. In this work, the same holds true for characters-character u from work X (Elizabeth Bennet from Pride and Prejudice) should correspond only to character v from work Y (Jane from Jane Eyre). If Elizabeth took on the roles of a whole set of characters, this would be a point of dissimilarity between the two novels. The simplest technique for computing a symmetric alignment is to run independent one-to-many models in each direction and take the intersection. MT researchers have improved upon this by computing a matching explicitly (Matusov et al., 2004) or training the component models directly to encourage agreement (Liang et al., 2006) .",
"cite_spans": [
{
"start": 896,
"end": 918,
"text": "(Matusov et al., 2004)",
"ref_id": "BIBREF36"
},
{
"start": 984,
"end": 1004,
"text": "(Liang et al., 2006)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
{
"text": "When the problem of interest involves multiple sets of features, it may be useful to set kernel parameters weighting the different features to optimize performance. This task is sometimes framed as multiple kernel learning (G\u00f6nen and Alpayd\u0131n, 2011); a variety of methods have been used. In this paper, kernel parameters are optimized using rank learning: attempting to make pairs of training instances which should be similar score higher under k than less similar ones. The ranking protocol was designed following Feng and Hirst (2012) , who used a similar procedure to optimize parameters in a model of document coherence.",
"cite_spans": [
{
"start": 518,
"end": 539,
"text": "Feng and Hirst (2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
{
"text": "The specific rank learner used is SVM-rank (Joachims, 2006) , which solves an optimization problem based on ordinal regression to approximately minimize the number of pairs ranked in the wrong order.",
"cite_spans": [
{
"start": 43,
"end": 59,
"text": "(Joachims, 2006)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels, symmetrization and parameter optimization",
"sec_num": "2.3"
},
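The quantity SVM-rank approximately minimizes, the number of pairs ranked in the wrong order, can be computed directly for a toy example. This illustrates the objective only; it is not the SVM-rank algorithm, and the scores and relevance labels are invented:

```python
def misordered_pairs(scores, relevance):
    """Count pairs where a less-relevant item scores at least as high as a
    more-relevant one: the error a rank learner tries to minimize."""
    n = len(scores)
    return sum(1 for i in range(n) for j in range(n)
               if relevance[i] > relevance[j] and scores[i] <= scores[j])

scores = [0.9, 0.5, 0.1]    # kernel scores assigned by some parameter setting
relevance = [2, 0, 1]       # gold ordering: item 0 > item 2 > item 1
errors = misordered_pairs(scores, relevance)
```

Here item 2 should outrank item 1 but does not, giving one mis-ordered pair; adjusting the kernel parameters to drive this count down is the training signal.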
{
"text": "The experiments presented here test the kernel similarity function by challenging it to distinguish novels from artificial \"negative examples\". These are created from real texts by permuting the order of the chapters. This procedure originated in the discourse coherence literature (Karamanis et al., 2004, Barzilay and Lapata, 2005) , in which it is assumed that most reorderings of a text cause a loss of coherence and are therefore suitable as negative examples. Post (2011) uses a similar idea to evaluate a model of grammaticality.",
"cite_spans": [
{
"start": 282,
"end": 319,
"text": "(Karamanis et al., 2004, Barzilay and",
"ref_id": null
},
{
"start": 320,
"end": 333,
"text": "Lapata, 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation by reordering",
"sec_num": "2.4"
},
{
"text": "The use of artificial negative examples has both strengths and weaknesses. On one hand, it is highly replicable and objective in domains where annotation would be expensive and unreliable. In this case, human annotators would have to decide which real novels are more or less similar to one another. On an artificial task, systems can be evaluated for at least basic effectiveness without this kind of resource.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation by reordering",
"sec_num": "2.4"
},
{
"text": "On the other hand, performance on artificial reordering tasks can fail to correlate with performance on more realistic tasks (Elsner et al., 2007) . Random permutations can cause particular problems, since they often create local structures (for instance, a rarely mentioned minor character who appears in two widely separated chapters) which are uncommon in real documents. The use of novels in reverse order is intended to guard against this to some degree, because reversals destroy the global plot structure of the novel while preserving its local consistency. In any case, failure to correlate is primarily a problem with well-performing systems; systems which fail badly on tests with artificial documents rarely succeed on more complex ones.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Elsner et al., 2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation by reordering",
"sec_num": "2.4"
},
{
"text": "Another potential testing strategy is that of Bamman et al. (2014) , who make a series of pre-registered hypotheses on the similarity of characters by the same author or of the same type. For example, Elizabeth Bennet is more similar to Elinor Dashwood than Allan Quatermain (\"Austenian protagonists should resemble each other more than they resemble a grizzled hunter\"). Such tests are more realistic, since they are directly based on human intuition. However, they require input from a literary scholar and do not transfer easily between corpora. They are also unsuitable for training a machine learning algorithm, since that requires a large set of training instances to optimize against, while the set of hypotheses created by a literary scholar is typically small. Thus, this evaluation strategy seems complementary to the use of artificial examples. Here, an evaluation in terms of literary hypotheses is left for future work.",
"cite_spans": [
{
"start": 46,
"end": 66,
"text": "Bamman et al. (2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation by reordering",
"sec_num": "2.4"
},
{
"text": "As stated, the overall goal of this paper is to design systems for measuring the similarity of plot structures by comparing the frequency of linguistic features over time. This section discusses in detail the basic operations used to compute frequency-over-time trajectories for sentiment words, LDA topics and characters in novels. Two kinds of representations are constructed. Single-trajectory representations track the frequency of different lexical features across the narrative as a whole. In character-trajectory representations, on the other hand, each character in the novel has their own set of associated trajectories, indicating the frequency of appearance and the frequency of different lexical features associated with them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating representations",
"sec_num": "3"
},
{
"text": "The corpus used in all experiments consists of 50 novels, listed in the appendix. All novels are downloaded from the Project Gutenberg Website (www.gutenberg.org) in raw text form; the Gutenberg header and footer are stripped, as are introductory and concluding material by editors, critics or publishers. Since machine learning is used to optimize system parameters, the data are divided into a training set of 20 novels, used to establish the parameters, and a test set of 30 novels, reserved for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating representations",
"sec_num": "3"
},
{
"text": "Most of the representations described here measure lexical frequency over time. Such systems require a particular lexicon of relevant words. This study uses two such lexicons: Mohammad and Turney's (2010) crowdsourced sentiment lexicon, and topics from LDA (Blei et al., 2001 ). These word frequency systems represent a text as the count, for each chapter, of how often words from a particular lexical category appear, normalized by the total number of word tokens from the lexicon. For instance, a system using the category \"anger\" would represent a 10-chapter novel as a 10-element list, with the first element being the percentage of emotion words in chapter 1 in the \"angry\" category. Mohammad and Turney's (2010) lexicon is a list of words, each annotated for potential associations with various emotions by Amazon Mechanical Turk workers. The version used here contains 14273 words; the lexicon recognizes 8 different basic emotions (anger, anticipation, disgust, fear, joy, sadness, surprise and trust) and two umbrella cat- egories, positive and negative, for a total of 10. 2 As with any purely lexical sentiment resource, it will mis-classify texts which are contextspecific, specific to some sense of the word, or negated by their semantic context (\"not happy\", \"wish to be happy\"). The LDA topics are computed with Mallet (McCallum, 2002) , stripping stop words and specifying 10 topics (to keep parity with the number of emotions). Topic counts are computed from the output list of topic indicator variables z. Table 1 shows the most common words grouped in each topical category. In many cases the topics are quite interpretable; topic 1 (\"love, life, heart, soul\") seems to involve romantic feeling, while topic 8 (\"room, house, door\") covers setting details. But the clustering is very coarse; less frequent words associated with topic 1 include non-romantic feeling words like \"fear, hope, death\".",
"cite_spans": [
{
"start": 257,
"end": 275,
"text": "(Blei et al., 2001",
"ref_id": "BIBREF10"
},
{
"start": 689,
"end": 717,
"text": "Mohammad and Turney's (2010)",
"ref_id": "BIBREF44"
},
{
"start": 1334,
"end": 1350,
"text": "(McCallum, 2002)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 1524,
"end": 1531,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Single trajectories",
"sec_num": "3.1"
},
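The per-chapter category counts described above can be sketched in a few lines. The tiny `LEXICON` here is a hypothetical stand-in for the Mohammad and Turney lexicon (or for LDA topic assignments), and the chapter texts are toy strings.

```python
from collections import Counter

# Hypothetical toy lexicon mapping words to categories; the paper uses
# Mohammad and Turney's emotion lexicon and 10 LDA topics instead.
LEXICON = {"furious": "anger", "rage": "anger", "happy": "joy", "delight": "joy"}

def category_trajectory(chapters, category):
    """Per-chapter frequency of one category, normalized by the total
    number of lexicon tokens in that chapter, as described in the text."""
    traj = []
    for chapter in chapters:
        counts = Counter(LEXICON[w] for w in chapter.split() if w in LEXICON)
        total = sum(counts.values())
        traj.append(counts[category] / total if total else 0.0)
    return traj

chs = ["a furious rage gripped him", "happy delight and happy rage"]
```

A 10-chapter novel would thus become one 10-element list per category, e.g. `category_trajectory(chs, "anger")` for the \"anger\" trajectory.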
{
"text": "A character-trajectory representation has a trajectory for each character in the narrative. For instance, a system using character frequency as its only feature represents a 10-chapter novel with three characters as a bundle of three 10-element lists. Each list represents the normalized frequency with which the associated character occurs in each chapter. A system using both character frequency and anger represents the novel as three bundles of two 10-element lists. Each bundle contains the character frequencies in one list, and the frequency with which the character is associated with \"anger\" words in the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character trajectories",
"sec_num": "3.2"
},
{
"text": "To count how often a character appears over time, the system must first compute a canonical list of the characters who appear in the text, bearing in mind that one character may be called by many names. This computation follows Bhattacharya and Getoor (2005) in extracting a list of proper names and performing cross-document coreference resolution using a series of filters which cluster them together. A related framework is described in (Coll Ardanuy and Sporleder, 2014).",
"cite_spans": [
{
"start": 228,
"end": 258,
"text": "Bhattacharya and Getoor (2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Character trajectories",
"sec_num": "3.2"
},
{
"text": "The system begins by detecting proper nouns and discarding those which occur fewer than 5 times. Identical mentions longer than two words are merged into a single entity, so that for example all mentions of \"George Osborne\" are taken to refer to the same person. Next, each name is assigned a gender-masculine, feminine or neuter-using a list of gendered titles, then a list of male and female first names from the 1990 US census. Mentions are then merged when each is longer than one word, the genders do not clash, and first and last names are consistent (Charniak, 2001 ). This step would cluster \"George Osborne\", \"Mr. Osborne\" and \"Mr. George Osborne\". Single-word mentions are then merged, either with matching multiword mentions if they appear in the same paragraph, or else with the multi-word mention that occurs in the most paragraphs-so when \"George\" appears close to \"George Osborne\", the two refer to the same person. Finally, mentions are discared if they still have not been assigned a non-neuter gender, or if they match synset location.n.01 in WordNet (Miller et al., 1993) ; these are likely to be place names like \"London\", which are proper noun phrases but not characters.",
"cite_spans": [
{
"start": 557,
"end": 572,
"text": "(Charniak, 2001",
"ref_id": "BIBREF15"
},
{
"start": 1069,
"end": 1090,
"text": "(Miller et al., 1993)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Character trajectories",
"sec_num": "3.2"
},
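A greatly simplified sketch of these clustering heuristics appears below. `cluster_mentions` is a hypothetical helper: it keeps only the frequency threshold, the multi-word merge, and the attachment of single-word mentions to their most frequent containing name, omitting the title, gender and WordNet filters the paper actually applies.

```python
from collections import Counter

def cluster_mentions(mentions, min_count=5):
    """Simplified name clustering: frequent multi-word names become
    canonical entities; a single-word mention attaches to the multi-word
    name containing it that occurs most often. (The full system also
    checks titles, gender lists and WordNet.)"""
    counts = Counter(mentions)
    multi = [m for m, c in counts.items()
             if c >= min_count and len(m.split()) > 1]
    clusters = {m: m for m in multi}
    for m in counts:
        if len(m.split()) == 1:
            hosts = [h for h in multi if m in h.split()]
            if hosts:  # attach to the most frequent containing name
                clusters[m] = max(hosts, key=lambda h: counts[h])
    return clusters

mentions = ["George Osborne"] * 6 + ["George"] * 3 + ["Mr. Osborne"] * 5
clusters = cluster_mentions(mentions)
```

Here \"George\" ends up clustered with \"George Osborne\" rather than \"Mr. Osborne\", mirroring the containment test described in the text.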
{
"text": "These coreference heuristics still have some difficulty in coping with the variety of names used by characters in 19th-century novels. These stories often contain characters who are related to one another (and thus share a last name); characters refer to one another by nickname; titles-and even names-can change over time (due to marriage, military promotion and so on). For instance, in Thackeray's Vanity Fair, Mr. (John) Osborne has a son, George Osborne, initially with the title Master, then Mr., but eventually rising to the rank of army captain. George, in turn, marries (making his wife Mrs. George Osborne) and has a son, who is inconveniently named George Osborne. By the end of the book, this son is himself known as Mr. Osborne. Table 2 shows how our system tries to resolve this confusion; since it insists that titles be consistent, it produces a somewhat excessive list of characters. This is one of the harder cases in the corpus, however, and in general the results look sensible, although there is no annotated novelistic coreference corpus with which to validate them.",
"cite_spans": [],
"ref_spans": [
{
"start": 742,
"end": 749,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Character trajectories",
"sec_num": "3.2"
},
{
"text": "To represent a novel using character frequency, the system preprocesses the text by splitting it into paragraphs. The coreference heuristics are used to decide which characters appear in each paragraph. The frequency of a character in a chapter is defined as the number of para- TABLE 2 Names (over frequency cap) of characters named \"George\" or \"Osborne\" detected in Vanity Fair. \"Mr. Osborne\" can refer to multiple characters. \"Capt. Osborne\" sometimes refers to the same person as \"Mr. Osborne\". \"Georgy\" is missing from the name list and fails to be assigned a gender, so it is discarded from consideration in character-based systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character trajectories",
"sec_num": "3.2"
},
{
"text": "graphs in which they appear, divided by the total number of character appearances. Character-specific lexical trajectories are computed by counting lexicon items-sentiment or LDA-in paragraphs featuring a specific character. Such trajectories naturally reflect overall character frequency, since there can be more instances of particular words if there are more paragraphs overall. To remove this correlation so that the characterfrequency features are not redundant, they are normalized per character rather than with reference to the whole text. To compute this normalized score, the count is first set to 0 if the character frequency for the chapter is less than .05 (so that dividing by a small number does not inflate very uncertain statistics). Then the count is divided by the total number of lexicon items appearing in paragraphs of the chapter which mention the specific character (rather than dividing by the number of lexicon items in the chapter overall).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character trajectories",
"sec_num": "3.2"
},
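The per-character normalization just described can be sketched as follows; the function name, the toy lexicon, and the paragraph encoding as (text, character-set) pairs are all illustrative assumptions, not the paper's implementation.

```python
LEXICON = {"furious": "anger", "happy": "joy"}  # toy stand-in lexicon

def character_lexical_trajectory(chapter_paras, char, category, char_freqs,
                                 lexicon=LEXICON, threshold=0.05):
    """Counts of `category` words in paragraphs mentioning `char`, divided
    by all lexicon tokens in those same paragraphs; zeroed when the
    character's frequency for the chapter falls below `threshold`."""
    traj = []
    for paras, freq in zip(chapter_paras, char_freqs):
        if freq < threshold:  # too rare: statistics would be unreliable
            traj.append(0.0)
            continue
        cat = tot = 0
        for text, chars in paras:  # (paragraph text, set of characters in it)
            if char in chars:
                for w in text.split():
                    if w in lexicon:
                        tot += 1
                        cat += lexicon[w] == category
        traj.append(cat / tot if tot else 0.0)
    return traj

chapters = [[("furious happy", {"Amelia"}), ("happy", {"Becky"})]]
```

Note that the denominator counts lexicon tokens only in the character's own paragraphs, so the trajectory is decoupled from how often the character appears.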
{
"text": "All trajectories are smoothed using a moving average with a window size of 10, forcing them to vary smoothly over time:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character trajectories",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x (t) = 1 t+10 i=t\u221210 |t \u2212 i| t+10 i=t\u221210 |t \u2212 i|x(i)",
"eq_num": "(3)"
}
],
"section": "Character trajectories",
"sec_num": "3.2"
},
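A direct sketch of Equation 3's smoother is below. Note one caveat: taken literally, the printed weights |t − i| give zero weight to the point itself; the code implements the formula as printed, under the added assumption that out-of-range indices are simply clipped at the series boundaries.

```python
def smooth(x, w=10):
    """Weighted moving average over a +/-w window with weight |t - i| on
    neighbor i (Equation 3, implemented literally); window positions
    outside the series are clipped."""
    n = len(x)
    out = []
    for t in range(n):
        num = den = 0.0
        for i in range(max(0, t - w), min(n, t + w + 1)):
            weight = abs(t - i)
            num += weight * x[i]
            den += weight
        out.append(num / den if den else x[t])
    return out
```

Smoothing preserves the length of the trajectory, so the result can still be aligned with chapter indices.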
{
"text": "After smoothing, each trajectory is projected onto a fixed basis of 50 points using linear interpolation. This enables fast comparison of trajectories, because it yields a fixed-length discrete representation of each one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character trajectories",
"sec_num": "3.2"
},
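The projection onto a fixed 50-point basis is plain linear interpolation; this stdlib-only sketch (the helper name is illustrative) shows one way to do it.

```python
def project(traj, n_points=50):
    """Linearly interpolate a trajectory of any length onto a fixed basis
    of n_points evenly spaced positions, so that any two novels yield
    comparable fixed-length vectors."""
    m = len(traj)
    if m == 1:
        return [float(traj[0])] * n_points
    out = []
    for j in range(n_points):
        pos = j * (m - 1) / (n_points - 1)  # position in original index space
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append((1 - frac) * traj[lo] + frac * traj[hi])
    return out
```

After projection, every trajectory is a length-50 vector regardless of how many chapters the novel has, which is what makes the dot-product comparisons below well-defined.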
{
"text": "After preprocessing, a novel is represented as a set of time-varying trajectories, each representing the proportional frequency of some feature (e.g., \"anger\" words) in each chapter. In single-trajectory systems, there is one set of trajectories for the entire novel, with a trajectory for each lexical feature. In character-based or hybrid character/lexicon systems, characters have their own associated trajectories. A similarity function for representations of this type can be defined as a kernel function k(X, Y ) which will be large when novels X and Y are similar and small when they are not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel-based measurements",
"sec_num": "4"
},
{
"text": "The function k for this representation relies on a function c which compares trajectories for a single feature. The system uses the simple dot product (the linear kernel function), 3 defining the similarity of a pair of trajectories u and v (e.g. \"anger\" over time in two novels) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel-based measurements",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c(u, v) = u \u2022 v",
"eq_num": "(4)"
}
],
"section": "Kernel-based measurements",
"sec_num": "4"
},
{
"text": "To combine trajectories for multiple features (various emotions or topics), the total similarity k is defined as a weighted sum controlled by a parameter vector \u03b8 which represents the relative weight assigned to each one in the overall similarity function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel-based measurements",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k(u, v; \u03b8) = coord i \u03b8 i c(u i , v i )",
"eq_num": "(5)"
}
],
"section": "Kernel-based measurements",
"sec_num": "4"
},
{
"text": "This function k will be called the \"single-trajectory plot kernel\" since it can be used to evaluate similarity for representations which do not include per-character trajectories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel-based measurements",
"sec_num": "4"
},
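Equations 4 and 5 translate almost directly into code. In this sketch, a novel is assumed (hypothetically) to be a dict mapping feature names to fixed-length trajectories, and theta holds the per-feature weights.

```python
def c(u, v):
    """Linear kernel on a pair of equal-length trajectories (Equation 4)."""
    return sum(a * b for a, b in zip(u, v))

def k(X, Y, theta):
    """Single-trajectory plot kernel (Equation 5): a weighted sum of
    per-feature trajectory similarities. X and Y map feature names to
    trajectories; theta maps feature names to weights."""
    return sum(theta[f] * c(X[f], Y[f]) for f in theta)

X = {"anger": [1.0, 0.0], "joy": [0.0, 1.0]}
Y = {"anger": [0.5, 1.0], "joy": [1.0, 1.0]}
theta = {"anger": 2.0, "joy": 1.0}
```

Rank learning then amounts to choosing theta so that kernel scores order novel pairs correctly.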
{
"text": "We cannot use k for representations with character features, because there is no obvious way to determine which pairs of characters should be compared with function c. For instance, should the system use k to compare Elizabeth from Pride and Prejudice to Jane from Jane Eyre, or to Rochester, or to someone else? One standard approach is to define k char (X, Y ) using the convolution theorem (Haussler, 1999 , Equation 1), which compares each character u from X to all the characters v from Y :",
"cite_spans": [
{
"start": 393,
"end": 408,
"text": "(Haussler, 1999",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel-based measurements",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k char (X, Y ) = u\u2208X v\u2208Y k(u, v; \u03b8)",
"eq_num": "(6)"
}
],
"section": "Kernel-based measurements",
"sec_num": "4"
},
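Equation 6's double sum is a one-liner once a base kernel is fixed. In this sketch, the plain dot product `dot` stands in for the weighted kernel k of Equation 5, and characters are represented as toy trajectory vectors.

```python
def dot(u, v):
    """Toy base kernel: dot product of two trajectory vectors."""
    return sum(a * b for a, b in zip(u, v))

def k_char(X, Y, base_kernel=dot):
    """Convolution character kernel (Equation 6): sum the base kernel
    over every pairing of a character u in X with a character v in Y."""
    return sum(base_kernel(u, v) for u in X for v in Y)

X = [[1.0, 0.0], [0.0, 1.0]]  # two characters' frequency trajectories
Y = [[1.0, 0.0]]              # a one-character cast
```

Because every pair contributes, many weak cross-cast matches can add up, which is precisely the \"gang up\" problem discussed in the next section.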
{
"text": "Intuitively, the construction of k char from c is intended to find two novels more similar if characters from one correspond to the characters from the other in a way that makes sense. For instance, Pride and Prejudice is like Jane Eyre in that they both have a female protagonist, a male love interest, and so forth. The convolution theorem does not restrict itself to sensible alignments, however. It counts many-to-many alignments (as if to allow a single character from Pride and Prejudice to stand in for all the characters from Jane Eyre at once), and to make things worse, it sums over all the alignments, so that many fairly tenuous comparisons can \"gang up\" to render a pair of texts more similar than a single good comparison. As discussed in section 2.3, the use of a matching can improve performance in such a system by forcing each character to map to only one other character. The mapping gives each Pride and Prejudice character a \"best equivalent\" character in Jane Eyre (ignoring \"left-over\" characters), e.g., Elizabeth mapping to Jane, Darcy to Rochester, and so forth. Following Matusov et al. (2004) , the system computes an optimal bipartite matching between characters with the Hungarian algorithm, which works in polynomial time.",
"cite_spans": [
{
"start": 1099,
"end": 1120,
"text": "Matusov et al. (2004)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetrization",
"sec_num": "4.1"
},
{
"text": "k char symm (X, Y ) = max",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetrization",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "matching F :u\u2194v u\u2208X k(u, F (u))",
"eq_num": "(7)"
}
],
"section": "Symmetrization",
"sec_num": "4.1"
},
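The matching of Equation 7 can be sketched with a brute-force search over one-to-one assignments; this is for illustration only (factorial time), whereas the paper uses the polynomial-time Hungarian algorithm. The dot-product base kernel again stands in for the weighted kernel k.

```python
from itertools import permutations

def dot(u, v):
    """Toy base kernel: dot product of two trajectory vectors."""
    return sum(a * b for a, b in zip(u, v))

def k_char_symm(X, Y, base_kernel=dot):
    """Symmetrized character kernel (Equation 7): score of the best
    one-to-one matching between the two casts, leaving surplus
    characters of the larger cast unmatched. Brute force over
    permutations; the paper uses the Hungarian algorithm instead."""
    small, big = (X, Y) if len(X) <= len(Y) else (Y, X)
    return max(sum(base_kernel(u, v) for u, v in zip(small, assign))
               for assign in permutations(big, len(small)))

X = [[1.0, 0.0], [0.0, 1.0]]
Y = [[0.0, 1.0], [1.0, 0.0]]
```

Unlike the convolution sum, each character contributes through exactly one counterpart, so a single good alignment cannot be outvoted by many weak ones.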
{
"text": "4.2 Parameter estimation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetrization",
"sec_num": "4.1"
},
{
"text": "The function k described in Equation 5 is controlled by a parameter vector \u03b8 which controls the relative weight assigned to similarity with respect to each feature. Since some of the feature sets used here are relatively large (10 emotions and 10 LDA dimensions), some method for automatic parameter tuning is desirable. This section describes a training procedure, although its results show that it displays certain ambivalence. The procedure, following Feng and Hirst (2012) , is based on rankings. Since the system will be evaluated by using it to detect aberrant novels, the training objective should not depend on the absolute similarity values assigned to each novel pair, but on the difference between the scores k(X, Y ) and k(X, Y ) where Y is some kind of aberrant text. As in any learning task, a training set of novels X 1...n is used to fix the parameters. For each X i , the training program constructs aberrant texts X i,1...k , and assigns a rank to each aberrant ordering based on its dissimilarity to the original text. The two choices used for X correspond to the two settings for synthetic-data experiments reported below: random permutations and reversals. When training to optimize discrimination of random permutations, the system computes k = 10 random orderings of each training novel and ranks them by edit distance from the identity permutation. 4 When training to optimize reversals, it uses a single X 1 , the reversed novel, for each training novel.",
"cite_spans": [
{
"start": 455,
"end": 476,
"text": "Feng and Hirst (2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetrization",
"sec_num": "4.1"
},
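Generating and ranking the aberrant orderings can be sketched as follows. The paper specifies \"edit distance from the identity permutation\" without detail, so this sketch substitutes the pairwise inversion count (Kendall-tau distance) as an explicitly assumed stand-in.

```python
import random

def inversions(perm):
    """Pairwise inversions relative to the identity ordering; used here
    as an assumed stand-in for the paper's edit distance between
    permutations."""
    n = len(perm)
    return sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))

def ranked_negatives(n_chapters, k=10, seed=0):
    """Draw k random chapter orderings and rank them from least to most
    aberrant by their distance from the identity permutation."""
    rng = random.Random(seed)
    perms = []
    for _ in range(k):
        p = list(range(n_chapters))
        rng.shuffle(p)
        perms.append(p)
    return sorted(perms, key=inversions)

ranks = ranked_negatives(10)
```

The sorted list supplies the graded ranking targets that SVM-rank is trained to reproduce.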
{
"text": "Training instances are generated for each ordered pair X i , X j . The SVM-rank learner (Joachims, 2006) attempts to find parameters such that the original novel X i is more similar to the real novel X j than it is to the least aberrant permutation X j,1 , more similar to X j,1 than to X j,2 , and so forth. 5",
"cite_spans": [
{
"start": 88,
"end": 104,
"text": "(Joachims, 2006)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetrization",
"sec_num": "4.1"
},
{
"text": "k(X i , X j ) > k(X i , X j,1 ) > k(X i , X j,2 ) > . . . > k(X i , X j,k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetrization",
"sec_num": "4.1"
},
{
"text": "As an example, consider the ordered pair Pride and Prejudice and Jane Eyre as X i , X j . 6 The system produces chapter-by-chapter permutations, ranked by edit distance from the original (schematically, these might be called Jaen yrEe and naeE yJr ) and then attempts to achieve the ranking: k(P P, Jane Eyre) > k(P P, Jaen yrEe) > k(P P, naeE yJr)",
"cite_spans": [
{
"start": 90,
"end": 91,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetrization",
"sec_num": "4.1"
},
{
"text": "Optimization is somewhat more complicated a symmetrized system, because the matching acts as a latent variable, rendering the optimization non-convex (Yu and Joachims, 2009) . The system employs an EM-like coordinate ascent procedure; it begins with an initial classifier, solves for the matching, and then reoptimizes the classifier. The character frequency features yield good results in the experiments discussed below, so the initial weights are set to (character-frequencygender-matched: 1, character-frequency-gender-mismatched: 1). In development experiments, running only a single iteration leads to the best results, but this may be a function of the small size of the training set.",
"cite_spans": [
{
"start": 150,
"end": 173,
"text": "(Yu and Joachims, 2009)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Symmetrization",
"sec_num": "4.1"
},
{
"text": "In the experiments in this section, the system is used to distinguish real novels in the test set (see the appendix) from artificial surrogates produced by permuting them. Two conditions are reported: random permutations and reversals. In each case, the permutations are performed chapter-by-chapter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "The experiments consider pairwise classifications in which the system is given access to a single training novel x along with a test pair (y, y perm ), and asked to decide whether y or y perm is the original. y is only selected as the original if k(x, y) > k(x, y perm ). Since this is a binary forced choice, a random baseline would score 50%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
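The forced-choice protocol itself is simple to express; in this sketch, toy trajectory vectors stand in for full novel representations, and a dot product stands in for the learned kernels.

```python
def dot(u, v):
    """Toy kernel: dot product of two trajectory vectors."""
    return sum(a * b for a, b in zip(u, v))

def forced_choice_accuracy(trials, k):
    """Fraction of (x, y, y_perm) trials in which the original y scores
    strictly higher against the reference novel x than its permuted
    surrogate y_perm does; chance on this binary choice is 50%."""
    return sum(k(x, y) > k(x, yp) for x, y, yp in trials) / len(trials)

# toy trajectory vectors standing in for full novel representations
trials = [([1.0, 0.0], [1.0, 0.0], [0.0, 1.0]),
          ([0.0, 1.0], [0.0, 1.0], [1.0, 0.0])]
```

With 30 test novels, every ordered pair serves as a trial, giving the 870 comparisons mentioned below.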
{
"text": "The systems to be tested vary in three dimensions. They may compute a single trajectory of values for the entire novel or character trajectories for every character. They may use sentiment features or LDA features (or, in the case of character trajectories, they may use only the frequency with which each character appears). Finally, for character trajectory systems only, they may compare representations using a symmetrized or unsymmetrized kernel. The details of single and character-based representations appear in sections 3.1 and 3.2. The feature sets are described in section 3.1. Single-trajectory representations are always compared with the function k (Equation 5); the symmetrized kernel is k char symm (Equation 7) and the unsymmetrized one is k char (Equation 6), both described in section 2.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
{
"text": "There are 30 test novels, and thus 30 \u00d7 29 = 870 pairwise comparisons. To avoid over-analyzing miniscule differences in results due only to luck, a statistical test must be used to indicate when the gap between two systems is statistically significant. A variety of common tests cannot be used since they assume different test trials to be independent and identically distributed. That is not the case here since each test novel y participates in multiple comparisons, with different novels standing in as x. Instead, significance is assessed using a Monte Carlo permutation test with 100000 random permutations. 7 Differences between systems are reported as significant if they reach the p < .05 level-that is, if the probability that they are due only to chance is assessed as less than 5%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5"
},
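A Monte Carlo permutation test of this kind can be sketched as a paired sign-flip test on per-trial score differences; this is an assumed simplification (the paper does not specify the exact permutation scheme it uses over the dependent trials).

```python
import random

def permutation_test(scores_a, scores_b, n=100000, seed=0):
    """Paired Monte Carlo permutation (sign-flip) test: p-value for the
    absolute total difference between two systems' per-trial scores,
    estimated from n random sign assignments of the paired differences."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs))
    hits = 0
    for _ in range(n):
        total = sum(d if rng.random() < 0.5 else -d for d in diffs)
        hits += abs(total) >= observed
    return hits / n
```

A gap between systems counts as significant when the returned p-value falls below .05.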
{
"text": "The accuracy scores of various single-trajectory systems appear in Table 3, and those of character-trajectory systems in Table 4 . For instance, the first row of Table 3 shows that a representation based on \"anger\" as a proportion of all sentiment words in a chapter makes it possible to distinguish from the originals 74% of randomly permuted novels in the development set. Only 64% of reversed novels can be distinguished. Results in the test data are somewhat different, with 64% of permuted novels distinguishable and 62% of reversals. The last row of the table shows that a system combining all sentiment and LDA features (with learned weights) distinguishes 71% of randomly permuted novels in the test set and 88% of reversed ones from the original, and that it is significantly better than a system using only sentiment. Comparing the results in Table 3 for single features (above the line) with those for learned combinations (below) shows that while training is generally effective, its results are not particularly impressive. This seems to reflect several issues. One is that novels are quite heterogenous, and parameters from the training set do not always work well in testing. For instance, on the development set, trust is a good indicator for only 43% of the random permutations but is correct for 73% of the reversals. On the test set, it is 60% accurate for random permutations and 53% for reversals.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 4",
"ref_id": "TABREF2"
},
{
"start": 162,
"end": 169,
"text": "Table 3",
"ref_id": null
},
{
"start": 853,
"end": 860,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy scores",
"sec_num": "6.1"
},
{
"text": "Another issue is that, for mathematical reasons, the learning system aims only approximately to maximize the number of correctly ranked pairs in the training set. 8 This means that trained systems are not guaranteed to outperform their components, even on the training data. For instance, the learned sentiment trajectory system scores 65% on reversals, no better than positive or fear on their own.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy scores",
"sec_num": "6.1"
},
{
"text": "Nonetheless, learning-based systems are capable of effective parameter tuning, especially using the LDA features. The LDA singletrajectory system, optimized for reverse permutations, scores 89%. Using uniform weights instead of optimization (not shown in Table 3 ), the result is significantly worse at 82%. Where the learning methods fail to improve over single-feature systems, it seems likely that the critical problem is insufficient training data. Now, Table 4 . Systems with character-specific trajectories perform worse on reversals than those using single trajectories. While the LDAbased single-trajectory system can distinguish these at the 89% level, the corresponding character trajectory system with LDA features scores only 59%. Higher performance, however, is possible for random orderings, on which simply comparing character frequencies scores 61%.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 3",
"ref_id": null
},
{
"start": 458,
"end": 465,
"text": "Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Accuracy scores",
"sec_num": "6.1"
},
{
"text": "Using the symmetrization-by-matching technique (Equation 7) improves this ordering result substantially. All matching-based systems are significantly better at random orders than their un-symmetrized counterparts. Frequency alone is effective in 81%; sentiment and the combined model perform comparably, although they do not improve, possibly because the development set is too small to get good parameter estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy scores",
"sec_num": "6.1"
},
{
"text": "The results across different representations are reported in Table 5 . It is clear that a single trajectory is better for reversals, with scores of 89% using LDA versus 59% for a character-trajectory system. Characterbased systems are numerically better for orderings (82% using all features versus 71% for a single-trajectory system), although the difference does not reach significance. LDA features are more effective than sentiment overall, and combining the two feature sets seems to add relatively little-compare row 1 to 2 and row 3 to 4.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy scores",
"sec_num": "6.1"
},
{
"text": "The earlier system presented in Elsner (2012) differed from this one in several ways. A few differences are minor: the name resolution algorithm presented here uses a better gender heuristic, and the definition of the basic trajectory comparison c (Equation 4) has been simplified by dropping a bag-of-unigrams feature set which appears to have little effect on performance. The basic character frequency system with these changes in place (Table 4 ) scores 61% on random orderings. This is comparable to the 60% reported by Elsner (2012) for a character-based kernel, suggesting that these simplifications have little effect.",
"cite_spans": [
{
"start": 32,
"end": 45,
"text": "Elsner (2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 440,
"end": 448,
"text": "(Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparison to (Elsner, 2012)",
"sec_num": "6.2"
},
{
"text": "The major additions are the use of Mohammad and Turney's (2010) sentiment lexicon rather than that in (Wilson et al., 2005) , the use of LDA features, and symmetrization. Table 5 shows the results of the best earlier system described in (Elsner, 2012) . That system is outperformed by the best systems described here, scoring 62% on orderings versus 82% with symmetrized character features and 52% on reversals versus 89% with LDA.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Wilson et al., 2005)",
"ref_id": "BIBREF58"
},
{
"start": 237,
"end": 251,
"text": "(Elsner, 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to (Elsner, 2012)",
"sec_num": "6.2"
},
{
"text": "There is also a major subtraction. Elsner (2012) presented a secondorder system; it takes character relationships into account, and outperforms the character-to-character systems. Once the new features have been incorporated, the second-order system performs worse than the basic systems. In development experiments (not shown in Table 5 ), adding the second-order relationship features to a system using character frequencies and symmetrization decreases the best ordering result from 81% to 78%. This suggests that although relationship information gives some accurate cues to structure, it can also be misleading and will need to be incorporated into realistic systems with care.",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 337,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to (Elsner, 2012)",
"sec_num": "6.2"
},
{
"text": "Finally, Elsner (2012) described a highly ineffective single-trajectory system (not shown in Table 5 ) which performed essentially at chance. He concluded that such systems were inferior to those with character information. This appears to be untrue in general. That single-trajectory system used as its only feature the proportion of \"subjective\" words from Wilson et al. (2005) . The results in Table 3 show single-trajectory that systems can be quite effective when given the right feature set. Better sentiment features score 57% on ordering and 65% on reversals, and LDA scores 67% on orderings and 89% on reversals.",
"cite_spans": [
{
"start": 359,
"end": 379,
"text": "Wilson et al. (2005)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 5",
"ref_id": null
},
{
"start": 397,
"end": 404,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to (Elsner, 2012)",
"sec_num": "6.2"
},
{
"text": "Analysis of the synthetic permutation-discrimination task reveals several interesting facts. First, while using a larger sentiment lexicon definitely improves over a smaller one, sentiment words on their own are still a cue to plot structure less effective than LDA topics, especially for reversals. This suggests that the plot of a 19th-century novel is in fact tied closely to domain events such as, e.g., marriages-more than to sentiment patterns like happiness-and that the domain-based cues are particularly useful in detecting beginnings and endings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "On the other hand, symmetrized character frequency is, on its own, overwhelmingly the best indicator for distinguishing random permutations from real texts. That LDA models do not do especially well on this case indicates that the middle sections of a novel vary widely in terms of domain events (middle sections vary in their inclusion of events like travel, marriages of minor characters, or illness and death). What is more likely to stay constant is the way in which the narrative directs its focus toward main characters, while introducing minor characters who can attain importance for shorter periods before fading back into the background.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Constructing an explicit matching between characters is helpful for character-based systems, regardless of the feature set. Matchings allow the system to check that the two texts not only incorporate a similar set of emotions or actions, but also partition them among different characters in similar ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Character-specific emotions or LDA topics, however, do not seem to improve results by sharpening the model's ability to distinguish between character roles, at least as far as random permutations are concerned; on reversals, they improve significantly over character frequency alone, but the effect is not large. This suggests that, even in a character-based system, such features capture whole-story trends involving beginnings and ends. In other words, character frequency alone does a good job in distinguishing \"choppy\" from \"fluent\" patterns. All the same, specialized lexical frequencies (whether measured over the whole text or per-character) are better representations of the overall plot arc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "It seems possible that similarity measurement systems of this type may eventually be useful in search and recommendation systems for fiction. This, however, would require additional resources beyond those presented here. The disappointing performance from parameter tuning suggests that larger datasets for artificial reordering might be useful, since they could allow the system to use its existing feature set more productively, or to work effectively with more features. Even so, while artificial tasks are likely to remain helpful for cheaply ruling out bad representational choices during development, they are probably not sufficient to fine-tune a system that must measure similarity between pairs of real novels rather than between real and reordered ones. Training data for this scenario are likely to require effort from human annotators. Future work with a practical recommendation system may help to clarify how well systems like the current work can benefit the production of a truly effective system for novelistic texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "/ LiLT volume 12, issue 5 October 2015",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentiment analysis systems which work on whole clauses or sentences are an active research area(Socher et al., 2011, Yessenalina and Cardie, 2011, among others); to my knowledge, such systems have not been used to construct emotional trajectory models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The paper subsequently refers to all 10 of these as \"emotion\" or \"sentiment\" categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Another common choice would be the normalized cosine, a standard measurement in information theory. In development, however, the normalized cosine caused problems, because it assigns high similarity to pairs of curves which have relatively small values throughout. Such small values are generally uninteresting-every novel has rare characters, but this does not represent any deep similarity in plot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Edit distance was one of the better-performing dissimilarity metrics in(Feng and Hirst, 2012).5 The SVM-rank learner uses a hinge loss function and a linear kernel.6 The pair Jane Eyre as X i and Pride and Prejudice as X j is a separate instance of the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For kernels which can be represented as inner products, the Maximum Mean Discrepancy test(Gretton et al., 2007) is more suitable but, as discussed in section 2.3, the max operator in the symmetric kernel means that this condition does not hold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The approximation used is described fully byJoachims (2006). In development experiments using a maximum entropy system, which uses a different approximation to the training error(Bishop, 2006, chapter 4), the trained system actually performed worse on the training set than a system with uniform weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "I am grateful to the editors of this volume, especially Stan Szpakowicz, and to three anonymous reviewers for their comments and suggestions. The discussion of related work owes much to suggestions by Diane Litman, Janice Wiebe and Robyn Warhol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "/sentiment (symm.) 81 \u2020 sentiment,lda\u2212symm 54",
"authors": [
{
"first": "",
"middle": [],
"last": "Char",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Freq",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Char. freq./sentiment (symm.) 81 \u2020 sentiment,lda\u2212symm 54",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "/LDA 57 59 \u2020 f req,sent Char. freq./LDA (symm.) 72 \u2020 lda 56",
"authors": [
{
"first": "",
"middle": [],
"last": "Char",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Freq",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Char. freq./LDA 57 59 \u2020 f req,sent Char. freq./LDA (symm.) 72 \u2020 lda 56",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "/sentiment/LDA (symm.) 82 \u2020 combo,lda\u2212symm 57",
"authors": [
{
"first": "",
"middle": [],
"last": "Freq",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freq./sentiment/LDA (symm.) 82 \u2020 combo,lda\u2212symm 57",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Emotional Sequencing and Development in Fairy Tales",
"authors": [
{
"first": "Cecilia",
"middle": [
"Ovesdotter"
],
"last": "Alm",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 2005,
"venue": "ACII",
"volume": "",
"issue": "",
"pages": "668--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References Alm, Cecilia Ovesdotter and Richard Sproat. 2005. Emotional Sequencing and Development in Fairy Tales. In ACII , pages 668-674.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Writer's Toolkit. Master's thesis",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Ang",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ang, Robert. 2012. The Writer's Toolkit. Master's thesis, University of Edinburgh.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning Latent Personas of Film Characters",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "352--361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bamman, David, Brendan O'Connor, and Noah A. Smith. 2013. Learning Latent Personas of Film Characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 352-361. Sofia, Bulgaria: Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Bayesian Mixed Effects Model of Literary Character",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Underwood",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "370--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bamman, David, Ted Underwood, and Noah A. Smith. 2014. A Bayesian Mixed Effects Model of Literary Character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 370-379. Baltimore, Maryland: Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modeling Local Coherence: an Entity-Based Approach",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barzilay, Regina and Mirella Lapata. 2005. Modeling Local Coherence: an Entity-Based Approach . In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Relational clustering for multitype entity resolution",
"authors": [
{
"first": "Indrajit",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 4th international workshop on Multi-relational mining, MRDM '05",
"volume": "",
"issue": "",
"pages": "3--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhattacharya, Indrajit and Lise Getoor. 2005. Relational clustering for multi- type entity resolution. In Proceedings of the 4th international workshop on Multi-relational mining, MRDM '05, pages 3-12. New York, NY, USA: ACM. ISBN 1-59593-212-7.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pattern Recognition and Machine Learning (Information Science and Statistics)",
"authors": [
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Bishop",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Secaucus, NJ, USA: Springer-Verlag New York, Inc. ISBN 0387310738.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Latent Dirichlet Allocation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, David, Andrew Y. Ng, and Michael I. Jordan. 2001. Latent Dirichlet Allocation. Journal of Machine Learning Research 3:2003.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dynamic Topic Models",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2006,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, David M. and John D. Lafferty. 2006. Dynamic Topic Models. In ICML.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Non-Mercer Kernels for SVM Object Recognition",
"authors": [
{
"first": "Sabri",
"middle": [],
"last": "Boughorbel",
"suffix": ""
},
{
"first": "Jean-Philippe",
"middle": [],
"last": "Tarel",
"suffix": ""
},
{
"first": "Francois",
"middle": [],
"last": "Fleuret",
"suffix": ""
}
],
"year": 2004,
"venue": "BMVC",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boughorbel, Sabri, Jean-Philippe Tarel, and Francois Fleuret. 2004. Non- Mercer Kernels for SVM Object Recognition. In BMVC , pages 1-10.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised Learning of Narrative Schemas and their Participants",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "602--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chambers, Nathanael and Dan Jurafsky. 2009. Unsupervised Learning of Narrative Schemas and their Participants. In Proceedings of the Joint Con- ference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP , pages 602-610. Suntec, Singapore: Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reading Tea Leaves: How Humans Interpret Topic Models",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Gerrish",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, Jonathan, Jordan Boyd-Graber, Chong Wang, Sean Gerrish, and David M. Blei. 2009. Reading Tea Leaves: How Humans Interpret Topic Models. In Neural Information Processing Systems.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised learning of name structure from coreference data",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2001,
"venue": "Second Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, Eugene. 2001. Unsupervised learning of name structure from coref- erence data. In Second Meeting of the North American Chapter of the Association for Computational Linguistics (NACL-01).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Structure-based Clustering of Novels",
"authors": [
{
"first": "Mariona",
"middle": [],
"last": "Coll Ardanuy",
"suffix": ""
},
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Computational Linguistics for Literature (CLFL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coll Ardanuy, Mariona and Caroline Sporleder. 2014. Structure-based Clus- tering of Novels. In Proceedings of Computational Linguistics for Literature (CLFL). Gothenburg, Sweden. References / 25",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Concept of Plot and the Plot of Tom Jones",
"authors": [
{
"first": "R",
"middle": [
"S"
],
"last": "Crane",
"suffix": ""
}
],
"year": 2002,
"venue": "Narrative dynamics : essays on time, plot, closure, and frames",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crane, R.S. 2002. The Concept of Plot and the Plot of Tom Jones. In B. Richardson, ed., Narrative dynamics : essays on time, plot, closure, and frames. The Ohio State University Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Character-based kernels for novelistic plot structure",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "634--644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elsner, Micha. 2012. Character-based kernels for novelistic plot structure. In Proceedings of the 13th Conference of the European Chapter of the As- sociation for Computational Linguistics, pages 634-644. Avignon, France: Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A unified local and global model for discourse coherence",
"authors": [
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Austerweil",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of HLT-NAACL '07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elsner, Micha, Joseph Austerweil, and Eugene Charniak. 2007. A unified local and global model for discourse coherence. In Proceedings of HLT- NAACL '07 .",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Extracting Social Networks from Literary Fiction",
"authors": [
{
"first": "David",
"middle": [],
"last": "Elson",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Dames",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "138--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elson, David, Nicholas Dames, and Kathleen McKeown. 2010. Extracting Social Networks from Literary Fiction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 138-147. Uppsala, Sweden: Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Building a Bank of Semantically Encoded Narratives",
"authors": [
{
"first": "David",
"middle": [
"K"
],
"last": "Elson",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "McKeown",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elson, David K. and Kathleen R. McKeown. 2010. Building a Bank of Seman- tically Encoded Narratives. In N. C. C. Chair), K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, M. Rosner, and D. Tapias, eds., Proceed- ings of the Seventh conference on International Language Resources and Evaluation (LREC'10). Valletta, Malta: European Language Resources Association (ELRA). ISBN 2-9517408-6-7.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Extending the Entity-based Coherence Model with Multiple Ranks",
"authors": [
{
"first": "Vanessa",
"middle": [
"Wei"
],
"last": "Feng",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "315--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, Vanessa Wei and Graeme Hirst. 2012. Extending the Entity-based Co- herence Model with Multiple Ranks. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 315-324. Avignon, France: Association for Computational Linguis- tics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Deriving narrative morphologies via analogical story merging",
"authors": [
{
"first": "Mark",
"middle": [
"A"
],
"last": "Finlayson",
"suffix": ""
}
],
"year": 2009,
"venue": "New Frontiers in Analogy Research: Proceedings of the Second International Conference on Analogy",
"volume": "",
"issue": "",
"pages": "127--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finlayson, Mark A. 2009. Deriving narrative morphologies via analogical story merging. In New Frontiers in Analogy Research: Proceedings of the Second International Conference on Analogy, pages 127-136. Sofia, Bul- garia: New Bulgarian University Press.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatically Producing Plot Unit Representations for Narrative Text",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": "III"
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "77--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goyal, Amit, Ellen Riloff, and Hal Daume III. 2010. Automatically Producing Plot Unit Representations for Narrative Text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 77-86. Cambridge, MA: Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A Kernel Method for the Two-Sample-Problem",
"authors": [
{
"first": "Arthur",
"middle": [],
"last": "Gretton",
"suffix": ""
},
{
"first": "Karsten",
"middle": [
"M"
],
"last": "Borgwardt",
"suffix": ""
},
{
"first": "Malte",
"middle": [],
"last": "Rasch",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Schlkopf",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
}
],
"year": 2007,
"venue": "Advances in Neural Information Processing Systems 19",
"volume": "",
"issue": "",
"pages": "513--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gretton, Arthur, Karsten M. Borgwardt, Malte Rasch, Bernhard Schlkopf, and Alexander J. Smola. 2007. A Kernel Method for the Two-Sample- Problem. In B. Schlkopf, J. Platt, and T. Hoffman, eds., Advances in Neural Information Processing Systems 19 , pages 513-520. Cambridge, MA: MIT Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Multiple kernel learning algorithms",
"authors": [
{
"first": "Mehmet",
"middle": [],
"last": "Gnen",
"suffix": ""
},
{
"first": "Ethem",
"middle": [],
"last": "Alpayd\u0131n",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2211--2268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gnen, Mehmet and Ethem Alpayd\u0131n. 2011. Multiple kernel learning algo- rithms. The Journal of Machine Learning Research 12:2211-2268.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Convolution Kernels on Discrete Structures",
"authors": [
{
"first": "David",
"middle": [],
"last": "Haussler",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haussler, David. 1999. Convolution Kernels on Discrete Structures. Tech. Rep. UCSC-CRL-99-10, Computer Science Department, UC Santa Cruz.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Autosummarize. McNally Jackson Books",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Huff",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huff, Jason. 2010. Autosummarize. McNally Jackson Books. http://jason- huff.com/projects/autosummarize/.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Training linear SVMs in linear time",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "217--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, Thorsten. 2006. Training linear SVMs in linear time. In Proceed- ings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 217-226. ACM.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Evaluating Centering-Based Metrics of Coherence",
"authors": [
{
"first": "Nikiforos",
"middle": [],
"last": "Karamanis",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Mellish",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Oberlander",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "391--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karamanis, Nikiforos, Massimo Poesio, Chris Mellish, and Jon Oberlander. 2004. Evaluating Centering-Based Metrics of Coherence. In ACL, pages 391-398.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Summarizing short stories. Computational Linguistics pages",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Kazantseva",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "71--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazantseva, Anna and Stan Szpakowicz. 2010. Summarizing short stories. Computational Linguistics pages 71-109.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The Doubly Correlated Nonparametric Topic Model",
"authors": [
{
"first": "Dae",
"middle": [
"Il"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Erik",
"middle": [
"B"
],
"last": "Sudderth",
"suffix": ""
}
],
"year": 2011,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "1980--1988",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, Dae Il and Erik B. Sudderth. 2011. The Doubly Correlated Nonpara- metric Topic Model. In NIPS , pages 1980-1988.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Plot Units and Narrative Summarization",
"authors": [
{
"first": "Wendy",
"middle": [],
"last": "Lehnert",
"suffix": ""
}
],
"year": 1981,
"venue": "Cognitive Science",
"volume": "4",
"issue": "",
"pages": "293--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lehnert, Wendy. 1981. Plot Units and Narrative Summarization. Cognitive Science 4:293-331.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Crowdsourcing narrative intelligence",
"authors": [
{
"first": "Boyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Lee-Urban",
"suffix": ""
},
{
"first": "Darren",
"middle": [
"Scott"
],
"last": "Appling",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"O"
],
"last": "Riedl",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Cognitive Systems",
"volume": "2",
"issue": "",
"pages": "25--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Boyang, Stephen Lee-Urban, Darren Scott Appling, and Mark O Riedl. 2012. Crowdsourcing narrative intelligence. Advances in Cognitive Systems 2:25-42.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Alignment by agreement",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang, Percy, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Compu- tational Linguistics, pages 104-111. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Symmetric word alignments for statistical machine translation",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics, page 219. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matusov, Evgeny, Richard Zens, and Hermann Ney. 2004. Symmetric word alignments for statistical machine translation. In Proceedings of the 20th international conference on Computational Linguistics, page 219. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "MALLET: A Machine Learning for Language Toolkit",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCallum, Andrew. 2002. MALLET: A Machine Learning for Language Toolkit.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Learning to tell tales: A data-driven approach to story generation",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Mcintyre",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "217--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McIntyre, Neil and Mirella Lapata. 2009. Learning to tell tales: A data-driven approach to story generation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 217-225. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Plot Induction and Evolutionary Search for Story Generation",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Mcintyre",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1562--1572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McIntyre, Neil and Mirella Lapata. 2010. Plot Induction and Evolutionary Search for Story Generation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1562-1572. Uppsala, Sweden: Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Introduction to WordNet: an on-line lexical database",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1993,
"venue": "Tech. rep",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. A., R. Beckwith, C. Fellbaum, D. Gross, and K. Miller. 1993. Introduction to WordNet: an on-line lexical database. Tech. rep., Princeton University.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Bayesian Checking for Topic Models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "227--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mimno, David and David Blei. 2011. Bayesian Checking for Topic Models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 227-237. Edinburgh, Scotland, UK.: Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "From Once Upon a Time to Happily Ever After: Tracking Emotions in Novels and Fairy Tales",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad, Saif. 2011. From Once Upon a Time to Happily Ever After: Tracking Emotions in Novels and Fairy Tales. In Proceedings of the 5th",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "105--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ACL-HLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 105-114. Portland, OR, USA: Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad, Saif and Peter Turney. 2010. Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26-34. Los Angeles, CA: Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "From once upon a time to happily ever after: Tracking emotions in mail and books",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2012,
"venue": "Decision Support Systems",
"volume": "53",
"issue": "4",
"pages": "730--741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad, Saif M. 2012. From once upon a time to happily ever after: Tracking emotions in mail and books. Decision Support Systems 53(4):730-741.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A Sequence Labelling Approach to Quote Attribution",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "O'Keefe",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Pareti",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Irena",
"middle": [],
"last": "Koprinska",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "790--799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O'Keefe, Timothy, Silvia Pareti, James R. Curran, Irena Koprinska, and Matthew Honnibal. 2012. A Sequence Labelling Approach to Quote Attribution. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 790-799. Jeju Island, Korea: Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Toward a computational framework of suspense and dramatic arc",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "O'neill",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Riedl",
"suffix": ""
}
],
"year": 2011,
"venue": "Affective Computing and Intelligent Interaction",
"volume": "",
"issue": "",
"pages": "246--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O'Neill, Brian and Mark Riedl. 2011. Toward a computational framework of suspense and dramatic arc. In Affective Computing and Intelligent Interaction, pages 246-255. Springer.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Narrative as Rhetoric",
"authors": [
{
"first": "James",
"middle": [],
"last": "Phelan",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Rabinowitz",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phelan, James and Peter J. Rabinowitz. 2012. Narrative as Rhetoric. In D. Herman, J. Phelan, P. J. Rabinowitz, B. Richardson, and R. Warhol, eds., Narrative Theory. The Ohio State University Press.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Judging Grammaticality with Tree Substitution Grammar Derivations",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "217--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Post, Matt. 2011. Judging Grammaticality with Tree Substitution Grammar Derivations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 217-222. Portland, Oregon, USA: Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Morphology of the Folktale",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Propp",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Propp, Vladimir. 1968. Morphology of the Folktale. University of Texas Press, 2nd edn.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Possible worlds, artificial intelligence and narrative theory",
"authors": [
{
"first": "Marie-Laure",
"middle": [],
"last": "Ryan",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan, Marie-Laure. 1991. Possible worlds, artificial intelligence and narrative theory. Bloomington: Indiana University Press.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Digital Corpora as Theorybuilding Resource",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Salway",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Herman",
"suffix": ""
}
],
"year": 2011,
"venue": "New Narratives: Stories and Storytelling in the Digital Age",
"volume": "",
"issue": "",
"pages": "120--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salway, Andrew and David Herman. 2011. Digital Corpora as Theory-building Resource. In R. Page and B. Thomas, eds., New Narratives: Stories and Storytelling in the Digital Age, pages 120-137. University of Nebraska.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Scripts, plans, goals and understanding: An inquiry into human knowledge structures",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Schank",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Abelson",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schank, Roger and Robert Abelson. 1977. Scripts, plans, goals and understanding: An inquiry into human knowledge structures. Hillsdale, NJ.: Lawrence Erlbaum Associates.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "151--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Socher, Richard, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 151-161. Edinburgh, Scotland, UK.: Association for Computational Linguistics.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Graph kernels",
"authors": [
{
"first": "S",
"middle": [
"V N"
],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "Nicol",
"middle": [
"N"
],
"last": "Schraudolph",
"suffix": ""
},
{
"first": "Risi",
"middle": [],
"last": "Kondor",
"suffix": ""
},
{
"first": "Karsten",
"middle": [
"M"
],
"last": "Borgwardt",
"suffix": ""
}
],
"year": 2010,
"venue": "The Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "1201--1242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vishwanathan, S. V. N., Nicol N. Schraudolph, Risi Kondor, and Karsten M. Borgwardt. 2010. Graph kernels. The Journal of Machine Learning Research 11:1201-1242.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Emotional Perception of Fairy Tales: Achieving Agreement in Emotion Annotation of Text",
"authors": [
{
"first": "Ekaterina",
"middle": [
"P"
],
"last": "Volkova",
"suffix": ""
},
{
"first": "Betty",
"middle": [],
"last": "Mohler",
"suffix": ""
},
{
"first": "Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Gerdemann",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [
"H"
],
"last": "B\u00fclthoff",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text",
"volume": "",
"issue": "",
"pages": "98--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volkova, Ekaterina P., Betty Mohler, Detmar Meurers, Dale Gerdemann, and Heinrich H. B\u00fclthoff. 2010. Emotional Perception of Fairy Tales: Achieving Agreement in Emotion Annotation of Text. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 98-106. Los Angeles, CA: Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Multiple Narrative Disentanglement: Unraveling Infinite Jest",
"authors": [
{
"first": "Byron",
"middle": [],
"last": "Wallace",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wallace, Byron. 2012. Multiple Narrative Disentanglement: Unraveling Infinite Jest. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1-10. Montr\u00e9al, Canada: Association for Computational Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson, Theresa, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 347-354. Vancouver, British Columbia, Canada: Association for Computational Linguistics.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Compositional Matrix-Space Models for Sentiment Analysis",
"authors": [
{
"first": "Ainur",
"middle": [],
"last": "Yessenalina",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "172--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yessenalina, Ainur and Claire Cardie. 2011. Compositional Matrix-Space Models for Sentiment Analysis. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 172-182.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Learning structural SVMs with latent variables",
"authors": [
{
"first": "Chun-Nam",
"middle": [
"John"
],
"last": "Yu",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09",
"volume": "",
"issue": "",
"pages": "1169--1176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, Chun-Nam John and Thorsten Joachims. 2009. Learning structural SVMs with latent variables. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 1169-1176. New York, NY, USA: ACM. ISBN 978-1-60558-516-1.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Story generated by an event-based model with coherence reranking (McIntyre and Lapata 2010).",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Order Rev.</td></tr><tr><td>Best (Elsner, 2012)</td><td>62</td><td>52</td></tr><tr><td>1 Single-traj. combined</td><td>71 \u2020 4</td><td>88 \u2020 3,4</td></tr><tr><td>2 Single-traj. LDA</td><td>67</td><td>89</td></tr><tr><td colspan=\"2\">3 Char.-traj. combined (symm.) 82 \u2020 4</td><td>57</td></tr><tr><td>4 Char.-traj. LDA</td><td>57</td><td>59</td></tr></table>",
"html": null,
"num": null,
"text": "Test accuracy (%) for character-trajectory systems. Non-symmetrized systems use Equation 6, symmetrized ones use Equation 7. \u2020 indicates a significant difference (p < .05)."
}
}
}
}