{
"paper_id": "E17-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:52:48.639978Z"
},
"title": "A Two-stage Sieve Approach for Quote Attribution",
"authors": [
{
"first": "Grace",
"middle": [],
"last": "Muzny",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"settlement": "Stanford",
"region": "CA"
}
},
"email": "muzny@cs.stanford.edu"
},
{
"first": "Michael",
"middle": [],
"last": "Fang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"settlement": "Stanford",
"region": "CA"
}
},
"email": "mjfang@cs.stanford.edu"
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"settlement": "Stanford",
"region": "CA"
}
},
"email": "angelx@cs.stanford.edu"
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {
"postCode": "94305",
"settlement": "Stanford",
"region": "CA"
}
},
"email": "jurafsky@cs.stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a deterministic sieve-based system for attributing quotations in literary text and a new dataset: QuoteLi3 1. Quote attribution, determining who said what in a given text, is important for tasks like creating dialogue systems, and in newer areas like computational literary studies, where it creates opportunities to analyze novels at scale rather than only a few at a time. We release QuoteLi3, which contains more than 6,000 annotations linking quotes to speaker mentions and quotes to speaker entities, and introduce a new algorithm for quote attribution. Our twostage algorithm first links quotes to mentions, then mentions to entities. Using two stages encapsulates difficult sub-problems and improves system performance. The modular design allows us to tune either for overall performance or for the high precision appropriate for many use cases. Our system achieves an average F-score of 87.5% across three novels, outperforming previous systems, and can be tuned for precision of 90.4% at a recall of 65.1%.",
"pdf_parse": {
"paper_id": "E17-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a deterministic sieve-based system for attributing quotations in literary text and a new dataset: QuoteLi3 1. Quote attribution, determining who said what in a given text, is important for tasks like creating dialogue systems, and in newer areas like computational literary studies, where it creates opportunities to analyze novels at scale rather than only a few at a time. We release QuoteLi3, which contains more than 6,000 annotations linking quotes to speaker mentions and quotes to speaker entities, and introduce a new algorithm for quote attribution. Our twostage algorithm first links quotes to mentions, then mentions to entities. Using two stages encapsulates difficult sub-problems and improves system performance. The modular design allows us to tune either for overall performance or for the high precision appropriate for many use cases. Our system achieves an average F-score of 87.5% across three novels, outperforming previous systems, and can be tuned for precision of 90.4% at a recall of 65.1%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dialogue, representing linguistic and social relationships between characters, is an important component of literature. In this paper, we consider the task of quote attribution for literary text: identifying the speaker for each quote. This task is important for developing realistic character-specific conversational models (Vinyals and Le, 2015; Li et al., 2016) , analyzing discourse structure, and literary studies (Muzny et al., 2016) . But identifying speakers can be difficult; authors often refer to the 1 Quotes in Literary text from 3 novels. speaker only indirectly via anaphora, or even omit mention of the speaker entirely (Table 1) .",
"cite_spans": [
{
"start": 325,
"end": 347,
"text": "(Vinyals and Le, 2015;",
"ref_id": "BIBREF25"
},
{
"start": 348,
"end": 364,
"text": "Li et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 419,
"end": 439,
"text": "(Muzny et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 512,
"end": 513,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 636,
"end": 645,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Prior work has produced important datasets labeling quotes in novels, providing training data for supervised methods. But some of these model the quote-attribution task at the mention-level (Elson and McKeown, 2010; O'Keefe et al., 2012) , and others at the entity-level (He et al., 2013) , leading to labels that are inconsistent across datasets.",
"cite_spans": [
{
"start": 190,
"end": 215,
"text": "(Elson and McKeown, 2010;",
"ref_id": "BIBREF7"
},
{
"start": 216,
"end": 237,
"text": "O'Keefe et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 271,
"end": 288,
"text": "(He et al., 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose entity-level quote attribution as the end goal but with mention-level quote attribution as an important intermediary step. Our first contribution is the QuoteLi3 dataset, a unified combination of data from Elson and McKeown (2010) and He et al. (2013) with the addition of more than 3,000 new labels from expert annotators. This dataset provides both mention and entity labels for Pride and Prejudice, Emma, and The Steppe.",
"cite_spans": [
{
"start": 217,
"end": 241,
"text": "Elson and McKeown (2010)",
"ref_id": "BIBREF7"
},
{
"start": 246,
"end": 262,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Next, we describe a new deterministic system that models quote attribution as a two-step process that i) uses textual cues to identify the mention that corresponds to the speaker of a quote, and ii) resolves the mention to an entity. This system improves over previous work by 0.8-2.1 F1 points and its modular design makes it easy to add sieves and incorporate new knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A unified dataset with both quote-mention and quote-speaker links labeled by expert annotators. \u2022 A new quote attribution strategy that improves on all previous algorithms and allows the incorporation of both rich linguistic features and machine learning components. \u2022 A new annotation tool designed with the specifics of this task in mind. We freely release the data, system, and annotation tool to the community. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Table 1: Quotes where speakers are mentioned explicitly, by anaphor, or implicitly (conversationally). Explicit: \"Do you really think so?\" cried Elizabeth, brightening up ... (Elizabeth Bennet). Anaphoric (pronoun): \"You are uniformly charming!\" cried he, with an air of awkward gallantry; (Mr. Collins). Anaphoric (other): \"I see your design, Bingley,\" said his friend. (Mr. Darcy). Implicit: \"Then, my dear, you may have the advantage of your friend, and introduce Mr. Bingley to her.\" (Mr. Bennet). Implicit: \"Impossible, Mr. Bennet, impossible, when I am not acquainted with him myself; how can you be so teazing?\" (Mrs. Bennet). Implicit: \"I honour your circumspection. [...] I will take it on myself.\" (Mr. Bennet). Implicit: The girls stared at their father. Mrs. Bennet said only, \"Nonsense, nonsense!\" (Mrs. Bennet).",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Example",
"sec_num": null
},
{
"text": "Early work in quote attribution focused on identifying spans associated with content (quotes), sources (mentions), and cues (speech verbs) in newswire data. This is the approach taken by . More recent work by Almeida et al. (2014) performed entity-level quote attribution and showed that a joint model of coreference and quote attribution can help both tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the literary domain, Glass and Bangay (2007) did early work modeling both the mention-level and entity-level tasks using a rule-based system. However, their system relied on identifying a main speech verb to then identify the actor (i.e. the mention) and link to the speaker (i.e. the entity) from a character list. This system worked very well but was limited to explicitly cued speakers and did not address implicit speakers at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Elson and McKeown (2010) took important first steps towards automatic quote attribution. They formulated the task as one of mention identification in which the goal was to link a quote to the mention of its speaker. Their method achieved 83.0% accuracy overall, but used gold-label information at test time. Their corpus, the Columbia Quoted Speech Corpus (CQSC), is the most wellknown corpus and was used by follow-up work. However, a result of their Mechanical Turk-based labeling strategy was that this corpus contains many unannotated quotes (see Table 4 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 551,
"end": 558,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "O'Keefe et al. (2012) also treated quote attribution as mention identification, using a sequence labeling approach. Their approach was successful in the news domain but it failed to beat their baseline in the literary domain (53.5% vs 49.8%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Emma The Steppe with mention 546 (74.4%) 371 (59.6%) with speaker 491 (66.9%) 258 (41.5%) Table 4 : Coverage of the CQSC labels accuracy). This work quantitatively showed that quote attribution in literature was fundamentally different from the task in newswire.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quote Types",
"sec_num": null
},
{
"text": "We compare against He et al. (2013) , the previous state-of-the-art system for quote attribution. They re-formulated quote attribution as quotespeaker labeling rather than quote-mention labeling. They used a supervised learner and a generative actor topic model (Celikyilmaz et al., 2010) to achieve accuracies ranging from 82.5% on Pride & Prejudice to 74.8% on Emma.",
"cite_spans": [
{
"start": 19,
"end": 35,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 262,
"end": 288,
"text": "(Celikyilmaz et al., 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quote Types",
"sec_num": null
},
{
"text": "We build upon the datasets of He et al. (2013) and Elson and McKeown (2010) to create a comprehensive new dataset of quoted speech in literature: QuoteLi3. This dataset covers 3 novels and 3103 individual quotes, each linked to speaker and mention for a total of 6206 labels, more than 3000 of which are newly annotated. It is composed of expert-annotated dialogue from Jane Austen's Pride and Prejudice, Emma, and Anton Chekhov's The Steppe.",
"cite_spans": [
{
"start": 30,
"end": 46,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 51,
"end": 75,
"text": "Elson and McKeown (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data: The QuoteLiCorpus",
"sec_num": "3"
},
{
"text": "The datasets described in section 2 are valuable but incomplete and hard to integrate with one another given their different designs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Datasets",
"sec_num": "3.1"
},
{
"text": "The Columbia Quoted Speech Corpus is a large dataset that includes both quote-mention and 2010report that 65% of the quotes in CQSC had unanimous agreement and that 17.6% of the quotes in this corpus were unlabeled. To generate quote-speaker labels, an offthe-shelf coreference tool 3 was used to link mentions and form coreference chains. We find that 57.8% of the quotes in this corpus either i) have no speaker label (48.1%) or ii) the speaker cannot be linked to a known character entity (9.7%). O'Keefe et al. (2012) find that 8% of quotes with speaker labels are incorrectly labeled. Our analysis of the relevant part of CQSC for this work is shown in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 658,
"end": 665,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Previous Datasets",
"sec_num": "3.1"
},
{
"text": "The data from He et al. (2013) includes highquality speaker labels but lacks quote-mention labels. There is no overlap in the data provided by He et al. (2013) and CQSC, but this work did evaluate their system on a subset of CQSC. This dataset assumes that all quoted text within a paragraph should be attributed to the same speaker. 4 While this assumption is correct for Pride and Prejudice, it is incorrect for novels like The Steppe, which use more complex conversa-tional structures 5 . This assumption leads to a problematic method of system evaluation in which all quotes within a paragraph are considered in the gold labels to be one quote, even if they were in fact uttered by different characters. We refer to this strategy as having \"collapsed\" quotes in our evaluations and present it for the purpose of providing a faithful comparison to previous work.",
"cite_spans": [
{
"start": 14,
"end": 30,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 143,
"end": 159,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 334,
"end": 335,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Datasets",
"sec_num": "3.1"
},
{
"text": "In QuoteLi3 we add the annotations that are missing from both datasets and correct the existing ones where necessary. A summary of the annotations included in this dataset and comparison to the previous data that we draw from is described in Table 2 . Our final dataset is described in Table 3 . It features a complete set of annotations for both quote-mention and quote-speaker labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 286,
"end": 294,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Previous Datasets",
"sec_num": "3.1"
},
{
"text": "Two of the authors of the paper were the annotators of our dataset. They used annotation guidelines consisting of an example excerpt and a description, which is included in the supplementary materials \u00a7A.5. The annotators were instructed to identify the speaker (from a character list) for each quote and to identify the mention that most directly helped them determine the speaker. Unlike Elson and McKeown (2010) , mentions can be pronouns and vocatives, not just explicit name referents. Mentions that were closer to the quote and speech verbs were favored over indirect mentions (such as those in conversational chains). Figure 1 shows an example from Pride and Prejudice.",
"cite_spans": [
{
"start": 390,
"end": 414,
"text": "Elson and McKeown (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 625,
"end": 633,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.2"
},
{
"text": "Annotation was done using a browser-based an- notation tool developed by the authors. Previously developed tools were either not designed for the task (BRAT (Stenetorp et al., 2012) , WebAnno (Yimam et al., 2013), CHARLES (Vala et al., 2016) ) or unavailable (He et al., 2013) . One problem with the CQSC annotations was that the annotators were shown short snippets that lacked the context to determine the speaker and no character list. We designed our tool to provide context and a character list including name, aliases, gender, and description of the character. Similar to CHARLES, the character list is not static and the annotator can add to the list of characters. Our tool also features automatic data consistency checks such as ensuring that all quotes are linked to a mention. Our expert annotators achieved high interannotator agreement with a Cohen's \u03ba of .97 for quote-speaker labels and a \u03ba of .95 for quotemention labels. 6 To preseve the QuoteLi3 data for train, dev, and testing sets, we calculated this inter-annotator agreement on excerpts from Alice in Wonderland and The Adventures of Huckleberry Finn containing 176 quotes spoken by 10 characters, chosen to be similar to the data found in QuoteLi3. Table 3 shows the statistics of our annotated corpus. Unlike He et al. 2013, we do not assume that all quotes in the same paragraph are spoken by the same speaker. To compare with the dataset used by He et al. (2013) , we provide the collapsed statistics as well. As Table 3 shows, we have roughly the same number of annotated quotes for Pride and Prejudice as He et al. (2013) . For Emma and The Steppe, which were taken from the CQSC corpus, we have considerably more quotes because of our added annotations (see Table 4 ).",
"cite_spans": [
{
"start": 157,
"end": 181,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF22"
},
{
"start": 222,
"end": 241,
"text": "(Vala et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 259,
"end": 276,
"text": "(He et al., 2013)",
"ref_id": "BIBREF10"
},
{
"start": 938,
"end": 939,
"text": "6",
"ref_id": null
},
{
"start": 1423,
"end": 1439,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 1584,
"end": 1600,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1223,
"end": 1230,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1490,
"end": 1497,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1738,
"end": 1745,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation",
"sec_num": "3.2"
},
{
"text": "The task of quote attribution can be summarized as \"who said that?\" Given a text as input, the final output is a speaker for each uttered quote in the text. We assume that all quotes have been previously identified. O'Keefe et al. (2012) find that regular-expression approaches to quote detection yield over 99% accuracy for clean Englishlanguage data. A number of other approaches to quote detection have been studied in recent years for more complex data (Pouliquen et al., 2007; Pareti et al., 2013; Muzny et al., 2016; Scheible et al., 2016) . Following He et al. 2013, we assume that there is a predefined list of characters available, with the name, aliases, and gender of each character. 7 Some key challenges in quote attribution are resolving anaphora (i.e., coreference) and following conversational threads. Literature often follows specific patterns that make some quotes easier to attribute than others. Therefore, an approach that anchors conversations on easily identifiable quotes can outperform approaches that do not. Figure 1 shows an example of a complex conversation at the beginning of Pride and Prejudice. This example illustrates the spectrum of easy to difficult cases found in the task: simple explicit named mention (lines 9, 13, 21), nominal mentions (lines 7, 19, 27) , and pronoun mentions (line 5). Sometimes explicitly named mentions embedded in more complex sentences can still be challenging as they require good dependency parses. This example also illustrates a conversational chain with alternating speakers between Mrs. Bennet and Elizabeth Bennet (lines 7 to 11), and between Mr. Bennet and Mrs. Bennet (lines 27 to 34). In this case, vocatives (expressions that indicate the party being addressed) are cues for who the other speaker is (lines 9, 23, 31). When the simple alternation pattern is broken, explicit speech verbs with the speaking character are specified. 
To summarize, there are several explicit cues and some easy cases in a conversation that can be leveraged to make the hard cases easier to address.",
"cite_spans": [
{
"start": 457,
"end": 481,
"text": "(Pouliquen et al., 2007;",
"ref_id": "BIBREF18"
},
{
"start": 482,
"end": 502,
"text": "Pareti et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 503,
"end": 522,
"text": "Muzny et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 523,
"end": 545,
"text": "Scheible et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 695,
"end": 696,
"text": "7",
"ref_id": null
},
{
"start": 1279,
"end": 1296,
"text": "(lines 7, 19, 27)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1036,
"end": 1044,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Quote Attribution Task",
"sec_num": "4"
},
{
"text": "First, consider the quote\u2192mention linking subtask. This is an inherently ambiguous task (i.e. any mention from the same coreference chain is valid,) but we know that if the target quote is linked to the annotated mention that this is one correct option. This means that the evaluation of the quote\u2192mention stage is a lower-bound. In other words, since a given quote may have multiple mentions that could be considered correct, our system may choose a \"wrong\" mention for a quote but link it to the correct speaker in the end. Thus, if our mention\u2192speaker system could perfectly resolve every mention to its correct speaker, our overall quote attribution system would be guaranteed to get at minimum the same results as the quote\u2192mention stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Quote Attribution Task",
"sec_num": "4"
},
{
"text": "The quote\u2192speaker task can be tackled directly without addressing quote\u2192mention, but identifying a mention associated with the speaker allows us to incorporate key outside information. An-other advantage of this approach is that we can then separately analyze and improve the performance of the two stages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Quote Attribution Task",
"sec_num": "4"
},
{
"text": "Therefore we evaluate both subtasks to give a more complete picture of when the system fails and succeeds. We use precision, recall, and F1 so that we can tune the system for different needs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Quote Attribution Task",
"sec_num": "4"
},
{
"text": "Our model is a two-stage deterministic pipeline. The first stage links quotes to specific mentions in the text and the second stage matches mentions to the entity that they refer to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "5"
},
{
"text": "By doing both quote\u2192mention and mention\u2192entity linking, our system is able to leverage additional contextual information, resulting in a richer, labeled output. Its modular design means that it can be easily updated to account for improvements in various sub-areas such as coreference resolution. We use a sievebased architecture because having accurate labels for the easy cases allows us to first find anchors that help resolve harder, often conversational, cases. Sieve-based systems have been shown to work well for tasks like coreference resolution (Raghunathan et al., 2010; Lee et al., 2013) , entity linking (Hajishirzi et al., 2013) , and event temporal ordering (Chambers et al., 2014) .",
"cite_spans": [
{
"start": 554,
"end": 580,
"text": "(Raghunathan et al., 2010;",
"ref_id": "BIBREF19"
},
{
"start": 581,
"end": 598,
"text": "Lee et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 616,
"end": 641,
"text": "(Hajishirzi et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 672,
"end": 695,
"text": "(Chambers et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "5"
},
{
"text": "The quote\u2192mention stage is a series of deterministic sieves. We describe each in detail in the following sections and show examples in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quote\u2192Mention",
"sec_num": "5.1"
},
{
"text": "Trigram Matching This sieve is similar to patterns used in Elson and McKeown (2010) . It uses patterns like Quote-Mention-Verb (e.g ''...'' she said) where the mention is either a character name or pronoun to isolate the mention. Other patterns include Quote-Verb-Mention, Mention-Verb-Quote, and Verb-Mention-Quote.",
"cite_spans": [
{
"start": 59,
"end": 83,
"text": "Elson and McKeown (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quote\u2192Mention",
"sec_num": "5.1"
},
{
"text": "Dependency Parses The next sieve in our pipeline inspects the dependency parses of the sentences surrounding the target quote. We use the enhanced dependency parses (Schuster and Manning, 2016) produced by Stanford CoreNLP (Chen and Manning, 2014) to extract all verbs and their dependent nsubj nodes. If the verb is a common speech verb 8 and its nsubj relation points to a Sieve Example Trigram Matching \"They have none of them much to recommend them,\" replied he.",
"cite_spans": [
{
"start": 165,
"end": 193,
"text": "(Schuster and Manning, 2016)",
"ref_id": "BIBREF21"
},
{
"start": 223,
"end": 247,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quote\u2192Mention",
"sec_num": "5.1"
},
{
"text": "Mrs. Bennet said only, \"Nonsense, nonsense!\" Single Mention Detection ...Elizabeth impatiently. \"There has been many a one, I fancy, overcome in the same way. I wonder who first discovered the efficacy of poetry in driving away love!\" Vocative Detection \"My dear Mr. Bennet,...\" \"Is that his design in settling here?\" Paragraph Final Mention Linking After a silence of several minutes, he came towards her in an agitated manner, and thus began, \"In vain have I struggled...\" Supervised Sieve -Conversation Detection \"Aye, so it is,\" cried her mother ... \"Then, my dear, you may have the advantage of your friend, and introduce Mr. Bingley to her.\" \"Impossible, Mr. Bennet, impossible, when I am not acquainted with him myself; how can you be so teazing?\" Loose Conversation Detection \"I will not trust myself on the subject,\" replied Wickham; \"I can hardly be just to him.\" Elizabeth was again deep in thought, and after a time exclaimed, \"To treat in ... the favourite of his father!\" She could have added, \"A young man, too,... being amiable\"but she contented herself with, \"and one, too, ... in the closest manner!\" \"We were born in the same parish, within the same park; the greatest part of our youth was passed together;...\" Table 5 : Quote\u2192Mention sieves and example quotes that they apply to.",
"cite_spans": [],
"ref_spans": [
{
"start": 1231,
"end": 1238,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dependency Parses",
"sec_num": null
},
{
"text": "\"Do you really think so?\" cried Elizabeth, brightening up for a moment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sieve Example Exact Name Match",
"sec_num": null
},
{
"text": "\"You are uniformly charming!\" cried he, with an air of awkward gallantry; Conversational Pattern \"Impossible, Mr. Bennet, impossible ...\" (Mrs. Bennet) \"I honour your circumspection...I will take it on myself.\" (Mr. Bennet) The girls stared at their father. Mrs. Bennet said only, \"Nonsense, nonsense!\" (Mrs. Bennet) Family Noun Vocative Disambiguation \"...You know, sister, we agreed long ago never to mention a word about it. And so, is it quite certain he is coming?\" \"You may depend on it,\" replied the other ... Majority Speaker - Table 6 : Mention\u2192Speaker Sieves and example quotes that they apply to. Bold text indicates where the speaker information comes from while italic text indicates the target quote being labeled.",
"cite_spans": [],
"ref_spans": [
{
"start": 536,
"end": 543,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Coreference Disambiguation",
"sec_num": null
},
{
"text": "character name, a pronoun, or an animate noun, 9 we assign the quote to the target mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Disambiguation",
"sec_num": null
},
{
"text": "If there is only a single mention in the non-quote text in the paragraph of the target quote, link the quote to that mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Mention Detection",
"sec_num": null
},
{
"text": "Vocative Detection If the preceding quote contains a vocative pattern (see supplemental section A.2), link the target quote to that mention. Vocative detection only matches character names and animate nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Mention Detection",
"sec_num": null
},
{
"text": "Paragraph Final Mention Linking If the target quote occurs at the end of a paragraph, link it to the final mention occurring in the preceding sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single Mention Detection",
"sec_num": null
},
{
"text": "Conversational Pattern If a quote in paragraph n has been linked to mention m n , then this sieve links an unattributed quote two paragraphs ahead, n + 2, to mention m n if they appear to be in conversation. We consider two quotes \"in conversation\" if the paragraph between is also a quote, and 9 The list of animate nouns is from Ji and Lin (2009) . the quote in paragraph n + 2 appears without additional (non-quote) text.",
"cite_spans": [
{
"start": 295,
"end": 296,
"text": "9",
"ref_id": null
},
{
"start": 331,
"end": 348,
"text": "Ji and Lin (2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Single Mention Detection",
"sec_num": null
},
{
"text": "We include a looser form of the previous sieve as a final, highrecall, step. If a quote in paragraph n has been linked to mention m n , then this sieve links quotes in paragraph n + 2 to m n without restriction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loose Conversational Pattern",
"sec_num": null
},
{
"text": "The second stage of our system involves linking the mentions identified in the first stage to a speaker entity. We again use several simple, deterministic sieves to determine the entity that each mention and quote should be linked to. A description of these sieves and example mentions and quotes that they are applied to appears in Table 6 . For the following sieves, we construct an ordered list of top speakers by counting proper name and pronoun mentions around the target quote. If gender for the target quote's speaker can be determined either by the gender of a pronoun mention or the gender of an animate noun (Bergsma and Lin, 2006) , this information is used to filter the candidate speakers in the top speakers list.",
"cite_spans": [
{
"start": 618,
"end": 641,
"text": "(Bergsma and Lin, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention\u2192Speaker",
"sec_num": "5.2"
},
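A simplified sketch of how a gender-filtered top-speakers list might be assembled. The `GENDER` lookup and the restriction to name mentions are assumptions for illustration; the paper counts both proper-name and pronoun mentions and derives gender from pronouns and animate nouns.

```python
from collections import Counter

# Hypothetical gender lookup for the sketch; the paper infers gender from
# pronoun mentions and animate nouns (Bergsma and Lin, 2006).
GENDER = {"Elizabeth": "F", "Darcy": "M", "Jane": "F"}

def top_speakers(nearby_mentions, target_gender=None):
    """Rank candidate speakers by mention frequency near the target quote,
    optionally filtering by the gender inferred for the quote's speaker."""
    counts = Counter(m for m in nearby_mentions if m not in ("he", "she"))
    ranked = [name for name, _ in counts.most_common()]
    if target_gender is not None:
        ranked = [n for n in ranked if GENDER.get(n) == target_gender]
    return ranked
```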
{
"text": "We use a window size from 2000 tokens before the target quote to 500 tokens after the target quote. If no speakers matching in gender can be found in this window, it is expanded by 2000 tokens on both sides.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention\u2192Speaker",
"sec_num": "5.2"
},
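The expanding window can be sketched in a few lines, assuming token-offset quote spans; the function and parameter names are illustrative, not the paper's implementation.

```python
# Sketch of the expanding search window described above. The initial window
# runs from 2000 tokens before the quote to 500 tokens after; each failed
# attempt widens it by 2000 tokens on both sides.

def search_window(quote_start, quote_end, attempt=0):
    """Return (window_start, window_end) in token offsets for a given retry."""
    before = 2000 + 2000 * attempt
    after = 500 + 2000 * attempt
    return max(0, quote_start - before), quote_end + after
```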
{
"text": "Exact Name Match If the mention that a quote is linked to matches a character name or alias in our character list, label the quote with that speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention\u2192Speaker",
"sec_num": "5.2"
},
{
"text": "Coreference Disambiguation If the mention is a pronoun, we attempt to disambiguate it to a specific character using the coreference labels provided by BookNLP (Bamman et al., 2014) .",
"cite_spans": [
{
"start": 159,
"end": 180,
"text": "(Bamman et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mention\u2192Speaker",
"sec_num": "5.2"
},
{
"text": "Conversational Pattern As in the quote\u2192mention stage, we match a target quote to the same speaker as the quote in paragraph n + 2, if the two are in the same conversation and that quote is labeled. Failing that, we match it to the quote in paragraph n \u2212 2 under the same conditions. This sieve receives gender information from the mention that the target quote is linked to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention\u2192Speaker",
"sec_num": "5.2"
},
{
"text": "Family Noun Vocative Disambiguation If the target quote is linked to a vocative in the list of family relations (e.g. \"papa\"), pick the first speaker in top speakers that matches the last name of the speaker of the quote containing the vocative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention\u2192Speaker",
"sec_num": "5.2"
},
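The vocative rule above can be sketched as follows. The abbreviated family-noun set and the whitespace-based last-name matching are simplifying assumptions; the full family-relation list appears in supplemental section A.4.

```python
# Sketch of the Family Noun Vocative Disambiguation sieve (illustrative).

FAMILY_NOUNS = {"papa", "mama", "father", "mother", "aunt", "uncle"}

def family_vocative_speaker(vocative, containing_quote_speaker, top_speakers):
    """If the linked mention is a family noun, pick the first top speaker who
    shares a last name with the speaker of the quote containing the vocative."""
    if vocative.lower() not in FAMILY_NOUNS:
        return None  # not a family-noun vocative; sieve abstains
    last_name = containing_quote_speaker.split()[-1]
    for candidate in top_speakers:
        if candidate.split()[-1] == last_name:
            return candidate
    return None
```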
{
"text": "Majority Speaker If none of the previous sieves identified a speaker for the quote, label the quote with the first speaker in the top speakers list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention\u2192Speaker",
"sec_num": "5.2"
},
{
"text": "In all experiments, we divide the data as follows: Pride and Prejudice is split as in He et al. (2013) with chapters 19-26 as the test set, 27-33 as the development set, and all others as training. Emma and The Steppe are not used for training.",
"cite_spans": [
{
"start": 86,
"end": 102,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "As a baseline, for the quote\u2192mention stage we choose the mention that is closest to the quote in terms of token distance. This is similar to the approach taken in BookNLP (Bamman et al., 2014) , in which quotes are attributed to a mention by first looking for the closest mention in the same sentence to the left and right of the quote, then before a hard stop or another quote to the left and right of the target quote. [Table 9 : Breakdown of the accuracy of our system per type of quote (see Table 3 ) in each test set.] For the mention\u2192speaker stage,",
"cite_spans": [
{
"start": 171,
"end": 192,
"text": "(Bamman et al., 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 467,
"end": 474,
"text": "Table 9",
"ref_id": null
},
{
"start": 540,
"end": 547,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.1"
},
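The token-distance baseline can be sketched as follows. The `(start, end)` span representation and the helper are illustrative assumptions, not BookNLP's actual code.

```python
# Sketch of the token-distance baseline: attribute each quote to the mention
# whose span is closest to the quote in token offsets.

def closest_mention(quote_span, mention_spans):
    """Return the index of the mention with minimal token distance to the quote."""
    q_start, q_end = quote_span

    def distance(span):
        m_start, m_end = span
        if m_end <= q_start:      # mention before the quote
            return q_start - m_end
        if m_start >= q_end:      # mention after the quote
            return m_start - q_end
        return 0                  # mention inside the quote

    return min(range(len(mention_spans)), key=lambda i: distance(mention_spans[i]))
```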
{
"text": "we use the Exact Name Match and Coreference Disambiguation sieves. Table 7 shows a direct comparison of our work versus the previous systems. We replicate the test conditions used by He et al. (2013) as closely as possible in this comparison. Here, the evaluations based on CQSC are of non-contiguous subsets of the quotes that are also not necessarily the same between our work and the previous work. As discussed in section 3, CQSC provides an incomplete set of quote-speaker labels. In this work we follow the same methodology as He et al. (2013) , using a list of character names to extract a test set of unambiguously labeled quotes. In section 7, we analyze The Steppe and Emma more thoroughly, showing that this method results in an easier subset of the quotes in these novels.",
"cite_spans": [
{
"start": 183,
"end": 199,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 546,
"end": 562,
"text": "He et al. (2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "6.1"
},
{
"text": "Our preferred evaluation, shown in Table 8 , differs from previous evaluations in four important ways. We hope that this work can establish consistent guidelines for attributing quotes and evaluating system performance to encourage future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to Previous Work",
"sec_num": "6.2"
},
{
"text": "\u2022 Each quote is attributed separately. 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Previous Work",
"sec_num": "6.2"
},
{
"text": "\u2022 The test sets are composed of every quote from the test portion of each novel; no subsets are used because of incomplete annotations. 11 \u2022 No gold data is used at test time. 12 \u2022 Precision and recall are reported in preference to accuracy for a more fine-grained understanding of the underlying system. [Table 8 : Precision, recall, and F-Score of our systems on un-collapsed quotations and the fully annotated test sets from the QuoteLi3 dataset.]",
"cite_spans": [
{
"start": 176,
"end": 178,
"text": "12",
"ref_id": null
}
],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to Previous Work",
"sec_num": "6.2"
},
{
"text": "To test how orthogonal our two-stage approach is to previous systems, we experiment with adding a supervised sieve to the quote\u2192mention stage. We train a binary classifier, using a maxent model, to distinguish between correct and incorrect candidate mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding a Supervised Component",
"sec_num": "6.3"
},
{
"text": "We take as candidate mentions all token spans corresponding to names, pronouns, and animate nouns in a one-paragraph range on either side of the quote. Names are determined by scanning for matches to the character list. We restrict pronouns to singular gendered pronouns, i.e. 'he' or 'she'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Mentions",
"sec_num": null
},
{
"text": "Features We featurize each (quote, mention) pair based on attributes of the quote, mention, and how far apart they are from one another. These features largely align with previous work and can be found in supplemental section A.3 (Elson and McKeown, 2010; He et al., 2013) .",
"cite_spans": [
{
"start": 226,
"end": 255,
"text": "A.3 (Elson and McKeown, 2010;",
"ref_id": null
},
{
"start": 256,
"end": 272,
"text": "He et al., 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Mentions",
"sec_num": null
},
{
"text": "Prediction At test time our model predicts for each quote whether each candidate mention is or is not the correct mention to pair with that quote. If the model predicts more than one mention to be correct, we take the most confident result. This sieve goes just before the conversation pattern detection sieves in the quote\u2192mention stage (see Table 5 ). This forms our +supervised system.",
"cite_spans": [],
"ref_spans": [
{
"start": 344,
"end": 351,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Candidate Mentions",
"sec_num": null
},
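The decision rule above can be sketched as follows; the scoring function stands in for the trained maxent model, and the threshold value is an assumption for illustration.

```python
# Sketch of the supervised sieve's decision rule: score every candidate
# mention and, if several are classified positive, keep the most confident.

def predict_mention(candidates, score_fn, threshold=0.5):
    """Return the candidate with the highest positive probability, or None
    if no candidate clears the classification threshold (sieve abstains)."""
    scored = [(score_fn(c), c) for c in candidates]
    positive = [(s, c) for s, c in scored if s >= threshold]
    if not positive:
        return None
    return max(positive, key=lambda sc: sc[0])[1]
```

Abstaining when no candidate is classified positive keeps the sieve high-precision: unresolved quotes fall through to the conversation pattern sieves that follow it.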
{
"text": "One advantage of our sieve design is that we can easily add and remove sieves from our pipeline. This means that we can determine the combination of sieves that results in the system with the highest precision with respect to the final speaker label. We use an ablation test to find the combination of sieves with the highest precision (95.6%) for speaker labels on the development set from Pride and Prejudice. These results are achieved by removing the Loose Conversational Pattern sieve from the quote\u2192mention stage and keeping only the Exact Name Match and Coreference Disambiguation sieves for the mention\u2192speaker stage. Together, these sieves create a system that we call +precision, which emphasizes overall precision rather than F-score or accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creating a High-Precision System",
"sec_num": "6.4"
},
{
"text": "We show that a simple deterministic system can achieve state-of-the-art results. Adding a lightweight supervised component improves the system across all test sets. The sieve design allows us to create a high precision system that might be more appropriate for real-world applications that value precision over recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "The results in Table 8 confirm that the subsets of test quotes from Emma and The Steppe used in previous work were easier than the full sets of quotations. When evaluating on the full set of quotations, we lose 0.2 and 11.1 points of accuracy for Emma and The Steppe, respectively. As we show in Table 4 , The Steppe is missing a significant portion (50.9%) of the annotations, whereas Emma is missing 28.6%. Our error analysis shows that The Steppe features more complicated conversation patterns than the novels of Jane Austen, which makes the task of quote attribution more difficult.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 8",
"ref_id": null
},
{
"start": 314,
"end": 321,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "One type of error analysis we performed was inspecting the accuracy of our system by quote type. As seen in Table 9 , the main challenge lies in identifying anaphoric and implicit speakers. We find that resolving non-pronoun anaphora is much more challenging for our system than resolving pronouns. This is because the only mechanism for dealing with these mentions is the Family Noun Vocative Disambiguation sieve; otherwise, the only information we gather from them is gender information. This indicates that adding information about the social network of a novel and attributes of each character (such as job and relationships to other characters) would further increase system performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "In this paper, we provided an improved, consistently annotated dataset for quote attribution with both quote-mention and quote-speaker annotations. We described a two-stage quote attribution system that first links quotes to mentions and then mentions to speakers, and showed that it outperforms the existing state-of-the-art. We established a thorough evaluation and showed how our system can be tweaked for higher precision or refined with a supervised sieve for better overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "A clear direction for future work is to expand the dataset to a more diverse set of novels by leveraging our annotation tool on Mechanical Turk or other crowdsourcing platforms. This work has also provided the background to see the pitfalls that a dataset produced in such a way might encounter. For example, annotators could label mentions and speakers separately, and examples with high uncertainty could be transferred to expert annotators. An expanded dataset would allow us to evaluate how well our system generalizes to other novels and also allow us to train better models. Another interesting direction is to eliminate the use of predefined character lists by automatically extracting the list of characters (Vala et al., 2015) .",
"cite_spans": [
{
"start": 716,
"end": 735,
"text": "(Vala et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "A.1 Nested Conversation Example [Figure 2 : An example paragraph that contains multiple speakers from The Steppe] Figure 2 shows a screenshot of our annotation tool displaying a paragraph with a complex conversational structure from The Steppe.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 2",
"ref_id": null
},
{
"start": 112,
"end": 120,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Supplemental Material",
"sec_num": null
},
{
"text": "[Table 10 : Vocative patterns for extracting mentions.] Pattern \u2192 example: between , and ! (, Nastasya!); between , and ? (, Mr. Bennet?); between , and . (, Yegorushka.); between , and ; (, papa;); between , and , (, Emma,); between \" and , (\"Father Christopher,); between , and \" (, mother\"); after the word \"dear\" (Dear Lydia); between \"oh\" and ! (Oh Henry!).",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Table 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pattern",
"sec_num": null
},
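The punctuation-bounded patterns in Table 10 can be approximated with a regular expression. This sketch is an assumption about one possible implementation, not the paper's code: it covers only capitalized names bounded by the listed punctuation, and lowercase family nouns (e.g. ", papa;") or the "oh" pattern would need extra alternations.

```python
import re

# Rough regex version of the comma/quote-bounded vocative patterns in
# Table 10: a capitalized name (optionally multi-word, with abbreviations
# like "Mr.") preceded by a comma or quote and followed by closing
# punctuation.
VOCATIVE = re.compile(r'[,"]\s*((?:[A-Z][a-z]+\.?\s?)+)\s*[!?.;,"]')

def extract_vocatives(text):
    """Return the candidate vocative mentions found in a span of text."""
    return [m.strip() for m in VOCATIVE.findall(text)]
```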
{
"text": "We used the following features in our supervised classifier:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Supervised Classifier Features",
"sec_num": null
},
{
"text": "\u2022 Distance: token distance, ranked distance (relative to mentions), paragraph distance (left paragraph and right paragraph separate) \u2022 Mention: Number of quotes in the mention paragraph, number of words in mention paragraph, the order of the mention within the paragraph (compared to other mentions), whether the mention is within conversation (i.e. no non-quote text in the same paragraph), whether the mention is within a quote, POS of the previous and next words. \u2022 Quote: the length of the quote, the order of the quote (i.e. whether it is the first or second quote in a paragraph), the number of words in the paragraph, number of names in the paragraph, whether the quote contains text in it, whether the present quote contains the name of the mention (if mention is a name).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Supervised Classifier Features",
"sec_num": null
},
{
"text": "Common Speech Verbs Similar to He et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.4 Words Lists",
"sec_num": null
},
{
"text": "(2013), we use say, cry, reply, add, think, observe, call, and answer, present in the training data from Pride and Prejudice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.4 Words Lists",
"sec_num": null
},
{
"text": "Family Relation Nouns ancestor aunt bride bridegroom brother brother-in-law child children dad daddy daughter daughter-in-law father father-in-law fiancee grampa gramps grandchild grandchildren granddaughter grandfather grandma grandmother grandpa grandparent grandson granny great-granddaughter great-grandfather great-grandmother great-grandparent great-grandson great-aunt great-uncle groom half-brother half-sister heir heiress husband ma mama mom mommy mother mother-in-law nana nephew niece pa papa parent pop second cousin sister sister-in-law son son-in-law stepbrother stepchild stepchildren stepdad stepdaughter stepfather stepmom stepmother stepsister stepson uncle wife",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.4 Words Lists",
"sec_num": null
},
{
"text": "\u2022 Each quote should be annotated with the character that is that quote's speaker. \u2022 Each quote should be linked to a mention that is the most obvious indication of that quote's speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.5 Annotation Guidelines",
"sec_num": null
},
{
"text": "-Quotes can be linked to mentions inside other quotes. -Multiple quotes may be linked to the same mention. \u2022 Mentions should also be annotated with the character that they refer to.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.5 Annotation Guidelines",
"sec_num": null
},
{
"text": "-If a character's name is meaningfully associated with an article (e.g. \"...,\" said the Bear), include that article in the mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.5 Annotation Guidelines",
"sec_num": null
},
{
"text": "nlp.stanford.edu/~muzny/quoteli.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Even current state-of-the-art coreference tools achieve just over 65% average F1 scores (Clark and Manning, 2016). 4 For first-level quotes, there is typically just one speaker per paragraph. This assumption breaks down in some cases and is very rarely true for nested quotes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See supplemental section A.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The reported agreement is the average of the Cohens kappas from these passages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Character lists are available on sites like sparknotes.com. The automatic extraction of characters from a novel has been identified as a separate problem(Vala et al., 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This list of verbs as well as the family relation nouns list are available in supplemental section A.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is in contrast to the work of He et al. (2013) . 11 This is in contrast to the work of Elson and McKeown (2010) and He et al. (2013) . The work of O'Keefe et al. (2012) is the only previous work to augment the unlabeled portions of CQSC. They achieved 53.3% accuracy on CQSC with a rule-based system similar to our baseline. This data is not available. 12 Gold data was used at test time by Elson and McKeown (2010) , who achieved 83.0% accuracy on the CQSC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-114747, and by the NSF via IIS IIS-1514268. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The authors thank their anonymous reviewers and the Stanford NLP group for their helpful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A joint model for quotation attribution and coreference resolution",
"authors": [
{
"first": "Mariana",
"middle": [
"S",
"C"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"B"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mariana S.C. Almeida, Miguel B. Almeida, and Andr\u00e9 FT Martins. 2014. A joint model for quota- tion attribution and coreference resolution. In Pro- ceedings of the 14th Conference of the European Chapter of the Association for Computational Lin- guistics (EACL).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Bayesian mixed effects model of literary character",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Underwood",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Ted Underwood, and Noah A. Smith. 2014. A Bayesian mixed effects model of literary character. In Proceedings of Association for Com- putational Linguistics (ACL).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bootstrapping path-based pronoun resolution",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In Proceedings of Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The actortopic model for extracting social networks in literary narrative",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Kondrak",
"suffix": ""
},
{
"first": "Denilson",
"middle": [],
"last": "Barbosa",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NIPS Workshop: Machine Learning for Social Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asli Celikyilmaz, Dilek Hakkani-Tur, Hua He, Greg Kondrak, and Denilson Barbosa. 2010. The actor- topic model for extracting social networks in literary narrative. In Proceedings of NIPS Workshop: Ma- chine Learning for Social Computing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Dense event ordering with a multi-pass architecture",
"authors": [
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Cassidy",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Mcdowell",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "273--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. Transactions of the Association for Computational Linguistics, 2:273- 284.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Empirical Methods on Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of Empirical Methods on Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Improving coreference resolution by learning entitylevel distributed representations",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark and Christopher D. Manning. 2016. Im- proving coreference resolution by learning entity- level distributed representations. In Proceedings of Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic attribution of quoted speech in literary narrative",
"authors": [
{
"first": "David",
"middle": [
"K"
],
"last": "Elson",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "McKeown",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Association for the Advancement of Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David K. Elson and Kathleen McKeown. 2010. Auto- matic attribution of quoted speech in literary narra- tive. In Proceedings of Association for the Advance- ment of Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A na\u00efve, salience-based method for speaker identification in fiction books",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Shaun",
"middle": [],
"last": "Bangay",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 18th International Symposium of the Pattern Recognition Association of South Africa (PRASA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Glass and Shaun Bangay. 2007. A na\u00efve, salience-based method for speaker identification in fiction books. In Proceedings of the 18th Interna- tional Symposium of the Pattern Recognition Asso- ciation of South Africa (PRASA).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Joint coreference resolution and named-entity linking with multi-pass sieves",
"authors": [
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Zilles",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "289--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannaneh Hajishirzi, Leila Zilles, Daniel S. Weld, and Luke S. Zettlemoyer. 2013. Joint coreference res- olution and named-entity linking with multi-pass sieves. In Proceedings of EMNLP, pages 289-299.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identification of speakers in novels",
"authors": [
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Denilson",
"middle": [],
"last": "Barbosa",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hua He, Denilson Barbosa, and Grzegorz Kondrak. 2013. Identification of speakers in novels. In Pro- ceedings of Association for Computational Linguis- tics (ACL).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Gender and animacy knowledge discovery from web-scale n-grams for unsupervised person mention detection",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Pacific Asia Conference on Language, Information and Computation (PACLIC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Dekang Lin. 2009. Gender and animacy knowledge discovery from web-scale n-grams for unsupervised person mention detection. In Proceed- ings of Pacific Asia Conference on Language, Infor- mation and Computation (PACLIC).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules",
"authors": [
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Angel",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Peirsman",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "4",
"pages": "885--916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Ju- rafsky. 2013. Deterministic coreference resolu- tion based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885-916.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural con- versation model. In Proceedings of Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The dialogic turn and the performance of gender: the English canon 1782-2011",
"authors": [
{
"first": "Grace",
"middle": [],
"last": "Muzny",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Algee-Hewitt",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Digital Humanities",
"volume": "",
"issue": "",
"pages": "296--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grace Muzny, Mark Algee-Hewitt, and Dan Jurafsky. 2016. The dialogic turn and the performance of gen- der: the English canon 1782-2011. In Proceedings of Digital Humanities, pages 296-299.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A sequence labelling approach to quote attribution",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "O'Keefe",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Pareti",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Irena",
"middle": [],
"last": "Koprinska",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim O'Keefe, Silvia Pareti, James R. Curran, Irena Koprinska, and Matthew Honnibal. 2012. A se- quence labelling approach to quote attribution. In Proceedings of Empirical Methods on Natural Lan- guage Processing (EMNLP).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatically detecting and attributing indirect quotations",
"authors": [
{
"first": "Silvia",
"middle": [],
"last": "Pareti",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "O'Keefe",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Irena",
"middle": [],
"last": "Koprinska",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Empirical Methods on Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silvia Pareti, Timothy O'Keefe, Ioannis Konstas, James R. Curran, and Irena Koprinska. 2013. Au- tomatically detecting and attributing indirect quota- tions. In Proceedings of Empirical Methods on Nat- ural Language Processing (EMNLP).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A database of attribution relations",
"authors": [
{
"first": "Silvia",
"middle": [],
"last": "Pareti",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silvia Pareti. 2012. A database of attribution relations. In Proceedings of International Conference on Lan- guage Resources and Evaluation (LREC).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic detection of quotations in multilingual news",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Pouliquen",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Clive",
"middle": [],
"last": "Best",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruno Pouliquen, Ralf Steinberger, and Clive Best. 2007. Automatic detection of quotations in multi- lingual news. In Proceedings of Recent Advances in Natural Language Processing.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A multipass sieve for coreference resolution",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Raghunathan",
"suffix": ""
},
{
"first": "Heeyoung",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sudarshan",
"middle": [],
"last": "Rangarajan",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of Empirical Methods on Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Ran- garajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multi- pass sieve for coreference resolution. In Proceed- ings of Empirical Methods on Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Model architectures for quotation detection",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Scheible",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Scheible, Roman Klinger, and Sebastian Pad\u00f3. 2016. Model architectures for quotation de- tection. In Proceedings of Association for Compu- tational Linguistics (ACL).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enhanced English Universal Dependencies: An improved representation for natural language understanding tasks",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Schuster and Christopher D. Manning. 2016. Enhanced English Universal Dependencies: An im- proved representation for natural language under- standing tasks. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Eval- uation (LREC 2016).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BRAT: a web-based tool for NLP-assisted text annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsu- jii. 2012. BRAT: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstra- tions at the 13th Conference of the European Chap- ter of the Association for Computational Linguistics (EACL).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mr. Bennet, his coachman, and the archbishop walk into a bar but only one of them gets recognized: On the difficulty of detecting characters in literary texts",
"authors": [
{
"first": "Hardik",
"middle": [],
"last": "Vala",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Piper",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Empirical Methods on Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hardik Vala, David Jurgens, Andrew Piper, and Derek Ruths. 2015. Mr. Bennet, his coachman, and the archbishop walk into a bar but only one of them gets recognized: On the difficulty of detecting characters in literary texts. In Proceedings of Empirical Meth- ods on Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Annotating characters in literary corpora: A scheme, the CHARLES tool, and an annotated novel",
"authors": [
{
"first": "Hardik",
"middle": [],
"last": "Vala",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Dimitrov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Piper",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th edition of the Language Resources and Evaluation Conference (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hardik Vala, Stefan Dimitrov, David Jurgens, Andrew Piper, and Derek Ruths. 2016. Annotating charac- ters in literary corpora: A scheme, the CHARLES tool, and an annotated novel. In Proceedings of the 10th edition of the Language Resources and Evalu- ation Conference (LREC).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A neural conversational model",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICML Deep Learning Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. In Proceedings of ICML Deep Learn- ing Workshop.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "WebAnno: A flexible, web-based and visually supported system for distributed annotations",
"authors": [
{
"first": "Seid",
"middle": [
"Muhie"
],
"last": "Yimam",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Eckart de Castilho",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seid Muhie Yimam, Iryna Gurevych, Richard Eckart de Castilho, and Chris Biemann. 2013. WebAnno: A flexible, web-based and visually supported system for distributed annotations. In Proceedings of Asso- ciation for Computational Linguistics (ACL) System Demonstrations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Conversation from Pride and Prejudice annotated with our annotation tool. Speakers are indicated by color, mentions are marked by dashed outlines, and quote-to-mention links by blue lines.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"text": "Label coverage per novel: full, partial, and no coverage of annotations.",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">QuoteLi3 (uncollapsed)</td><td colspan=\"3\">QuoteLi3 (collapsed)</td><td/><td>He et al.</td><td/></tr><tr><td>Quote Type</td><td colspan=\"9\">P &amp; P Emma The Steppe P &amp; P Emma The Steppe P &amp; P Emma The Steppe</td></tr><tr><td>Explicit (ES)</td><td>555</td><td>240</td><td>278</td><td>326</td><td>128</td><td>184</td><td>305</td><td>106</td><td>112</td></tr><tr><td>Anaphoric (AS)</td><td>528</td><td>132</td><td>180</td><td>309</td><td>73</td><td>106</td><td>292</td><td>55</td><td>39</td></tr><tr><td>pronoun (AS(p))</td><td>405</td><td>112</td><td>106</td><td>241</td><td>58</td><td>58</td><td/><td/><td/></tr><tr><td>other (AS(o))</td><td>123</td><td>20</td><td>74</td><td>68</td><td>15</td><td>48</td><td/><td/><td/></tr><tr><td>Implicit (IS)</td><td>664</td><td>362</td><td>164</td><td>655</td><td>357</td><td>158</td><td>663</td><td>236</td><td>93</td></tr><tr><td>Total</td><td>1747</td><td>734</td><td>622</td><td>1290</td><td>558</td><td>448</td><td>1260</td><td>397</td><td>244</td></tr><tr><td>All</td><td/><td>3103</td><td/><td/><td>2296</td><td/><td/><td>1901</td><td/></tr></table>",
"type_str": "table",
"num": null
},
"TABREF2": {
"text": "",
"html": null,
"content": "<table><tr><td>: Breakdown of our dataset by novel and type of quote (uncollapsed). For comparison with the</td></tr><tr><td>dataset from He et al. (2013), we provide the collapsed statistics assuming one speaker per paragraph.</td></tr><tr><td>quote-speaker labels (Elson and McKeown, 2010).</td></tr><tr><td>It suffers from problems often associated with</td></tr><tr><td>crowdsourced labels and the use of low-accuracy</td></tr><tr><td>tools. In this corpus, quote-mention labels were</td></tr><tr><td>gathered from Mechanical Turk, where each quote</td></tr><tr><td>was linked to a mention by 3 different annotators.</td></tr><tr><td>Elson and McKeown</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "Comparison with previous work. This table reports accuracy and comes with some caveats: * indicates that a non-contiguous subset of the quotations were used (not all subsets are guaranteed to be the same as described in section 6.2), and all quotes within the same paragraph were collapsed. Emma and The Steppe come from CQSC. All systems are trained on Pride and Prejudice.",
"html": null,
"content": "<table><tr><td>System</td><td>Test</td><td colspan=\"6\">Quote\u2192Mention Mention\u2192Speaker</td></tr><tr><td/><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>Accuracy</td></tr><tr><td colspan=\"8\">+supervised Pride and Prejudice 86.7 93.5 89.9 85.1 100 92.0</td><td>85.1</td></tr><tr><td colspan=\"2\">+supervised Emma</td><td colspan=\"6\">75.2 85.2 79.9 75.9 100 86.3</td><td>75.9</td></tr><tr><td colspan=\"2\">+supervised The Steppe</td><td colspan=\"6\">81.7 88.6 85.0 72.7 100 84.2</td><td>72.7</td></tr><tr><td/><td>Average</td><td colspan=\"6\">81.2 89.1 84.9 77.9 100 87.5</td></tr><tr><td>+precision</td><td colspan=\"7\">Pride and Prejudice 90.2 80.1 84.9 92.1 70.9 80.1</td></tr><tr><td>+precision</td><td>Emma</td><td colspan=\"6\">84.6 68.3 75.6 85.7 59.0 69.9</td></tr><tr><td>+precision</td><td>The Steppe</td><td colspan=\"6\">92.5 75.3 83.0 93.3 65.5 77.0</td></tr><tr><td/><td>Average</td><td colspan=\"6\">89.1 74.6 81.2 90.4 65.1 75.7</td></tr></table>",
"type_str": "table",
"num": null
}
}
}
}