{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:12:46.227082Z"
},
"title": "Detecting Direct Speech in Multilingual Collection of 19th-century Novels",
"authors": [
{
"first": "Joanna",
"middle": [],
"last": "Byszuk",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Polish Academy of Sciences",
"location": {}
},
"email": "joanna.byszuk@ijp.pan.pl"
},
{
"first": "Micha\u0142",
"middle": [],
"last": "Wo\u017aniak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Polish Academy of Sciences",
"location": {}
},
"email": "michal.wozniak@ijp.pan.pl"
},
{
"first": "Mike",
"middle": [],
"last": "Kestemont",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Antwerp",
"location": {}
},
"email": "mike.kestemont@uantwerp.be"
},
{
"first": "Albert",
"middle": [],
"last": "Le\u015bniak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Polish Academy of Sciences",
"location": {}
},
"email": "albert.lesniak@ijp.pan.pl"
},
{
"first": "Wojciech",
"middle": [],
"last": "\u0141ukasik",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Polish Academy of Sciences",
"location": {}
},
"email": "wojciech.lukasik@ijp.pan.pl"
},
{
"first": "Artjoms",
"middle": [],
"last": "\u0160e\u013ca",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Polish Academy of Sciences",
"location": {}
},
"email": "artjoms.sela@ijp.pan.pl"
},
{
"first": "Maciej",
"middle": [],
"last": "Eder",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Polish Academy of Sciences",
"location": {}
},
"email": "maciej.eder@ijp.pan.pl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Fictional prose can be broadly divided into narrative and discursive forms with direct speech being central to any discourse representation (alongside indirect reported speech and free indirect discourse). This distinction is crucial in digital literary studies and enables interesting forms of narratological or stylistic analysis. The difficulty of automatically detecting direct speech, however, is currently underestimated. Rule-based systems that work reasonably well for modern languages struggle with (the lack of) typographical conventions in 19th-century literature. While machine learning approaches to sequence modeling can be applied to solve the task, they typically face a severed skewness in the availability of training material, especially for lesser resourced languages. In this paper, we report the result of a multilingual approach to direct speech detection in a diverse corpus of 19th-century fiction in 9 European languages. The proposed method fine-tunes a transformer architecture with multilingual sentence embedder on a minimal amount of annotated training in each language, and improves performance across languages with ambiguous direct speech marking, in comparison to a carefully constructed regular expression baseline.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Fictional prose can be broadly divided into narrative and discursive forms with direct speech being central to any discourse representation (alongside indirect reported speech and free indirect discourse). This distinction is crucial in digital literary studies and enables interesting forms of narratological or stylistic analysis. The difficulty of automatically detecting direct speech, however, is currently underestimated. Rule-based systems that work reasonably well for modern languages struggle with (the lack of) typographical conventions in 19th-century literature. While machine learning approaches to sequence modeling can be applied to solve the task, they typically face a severed skewness in the availability of training material, especially for lesser resourced languages. In this paper, we report the result of a multilingual approach to direct speech detection in a diverse corpus of 19th-century fiction in 9 European languages. The proposed method fine-tunes a transformer architecture with multilingual sentence embedder on a minimal amount of annotated training in each language, and improves performance across languages with ambiguous direct speech marking, in comparison to a carefully constructed regular expression baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Fictional prose can be broadly divided into narrative and discursive forms with direct speech being central to any discourse representation (alongside indirect reported speech and free indirect discourse). This distinction is crucial in digital literary studies and drives various forms of narratological or stylistic analysis: direct, or \"mimetic\" speech and thought (Gennette, 1980 ) was used to understand voice of literary characters (Burrows, 1987; Hoover, 2014) and study narrative representations of speech (Conroy, 2014; Katsma, 2014) . Distinction between \"mimetic\" speech and \"narration\" helped to formalize free indirect discourse, defined as a linguistic mixture of these two types (Brooke, Hammond and Hirst, 2017; Muzny, Algee-Hewitt and Jurafsky, 2017) . Sequences of direct exchanges between characters were studied to understand the evolution of dialogue as a literary device (Sobchuk, 2016) and dynamics of \"dialogism\" over the course of novel's history (Muzny, Algee-Hewitt and Jurafsky, 2017) . Direct speech recognition is also closely related to the problem of identification and modeling fictional characters (He, Barbosa and Kondrak, 2013; Bamman, Underwood and Smith, 2014; Vala et al., 2015) .",
"cite_spans": [
{
"start": 368,
"end": 383,
"text": "(Gennette, 1980",
"ref_id": null
},
{
"start": 438,
"end": 453,
"text": "(Burrows, 1987;",
"ref_id": "BIBREF6"
},
{
"start": 454,
"end": 467,
"text": "Hoover, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 514,
"end": 528,
"text": "(Conroy, 2014;",
"ref_id": "BIBREF7"
},
{
"start": 529,
"end": 542,
"text": "Katsma, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 694,
"end": 727,
"text": "(Brooke, Hammond and Hirst, 2017;",
"ref_id": "BIBREF4"
},
{
"start": 728,
"end": 767,
"text": "Muzny, Algee-Hewitt and Jurafsky, 2017)",
"ref_id": "BIBREF16"
},
{
"start": 893,
"end": 908,
"text": "(Sobchuk, 2016)",
"ref_id": "BIBREF20"
},
{
"start": 972,
"end": 1012,
"text": "(Muzny, Algee-Hewitt and Jurafsky, 2017)",
"ref_id": "BIBREF16"
},
{
"start": 1132,
"end": 1163,
"text": "(He, Barbosa and Kondrak, 2013;",
"ref_id": "BIBREF10"
},
{
"start": 1164,
"end": 1198,
"text": "Bamman, Underwood and Smith, 2014;",
"ref_id": "BIBREF2"
},
{
"start": 1199,
"end": 1217,
"text": "Vala et al., 2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The majority of approaches to direct speech recognition (DSR) in prose remain language-specific and heavily rely on deep morphological and syntactic annotation of texts and depend on typographic conventions of marking direct speech within a given tradition. Rule-based solutions variably use punctuation, contextual heuristics, and morphosyntactic patterns within clauses to identify direct and indirect speech (Krestel, Bergler and Witte, 2008; Alrahabi, Descl\u00e9s and Suh, 2010; Brunner, 2013; Brooke, Hammond and Hirst, 2015; Muzny, Algee-Hewitt and Jurafsky, 2017) , sometimes relying on external dictionaries of proper names and reporting verbs (Pouliquen, Steinberger and Best, 2007; Nikishina et al., 2019) . When DSR does not use quotation marks, it utilizes pre-determined linguistic features -tense, personal pronouns, imperative mode or interjections -to guess speech type (Tu, Krug and Brunner, 2019) . Similar assembling of mixed features that might be relevant for direct speech is implemented in supervised machine learning approaches to DSR in twoclass classification task (Brunner, 2013; Sch\u00f6ch et al., 2016) . Jannidis et al. (2018) constructed a deep-learning pipeline for German that does not rely on manually defined features. It uses simple regular expressions for \"weak\" labeling of direct speech and then feeds marked text segments to the two-branch LSTM network (one for the \"past\" and one for the future context of a token) that assigns speech types on a word-to-word basis.",
"cite_spans": [
{
"start": 411,
"end": 445,
"text": "(Krestel, Bergler and Witte, 2008;",
"ref_id": "BIBREF15"
},
{
"start": 446,
"end": 478,
"text": "Alrahabi, Descl\u00e9s and Suh, 2010;",
"ref_id": "BIBREF1"
},
{
"start": 479,
"end": 493,
"text": "Brunner, 2013;",
"ref_id": "BIBREF5"
},
{
"start": 494,
"end": 526,
"text": "Brooke, Hammond and Hirst, 2015;",
"ref_id": "BIBREF3"
},
{
"start": 527,
"end": 566,
"text": "Muzny, Algee-Hewitt and Jurafsky, 2017)",
"ref_id": "BIBREF16"
},
{
"start": 648,
"end": 687,
"text": "(Pouliquen, Steinberger and Best, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 688,
"end": 711,
"text": "Nikishina et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 882,
"end": 910,
"text": "(Tu, Krug and Brunner, 2019)",
"ref_id": "BIBREF21"
},
{
"start": 1087,
"end": 1102,
"text": "(Brunner, 2013;",
"ref_id": "BIBREF5"
},
{
"start": 1103,
"end": 1123,
"text": "Sch\u00f6ch et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 1126,
"end": 1148,
"text": "Jannidis et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "State-of-the-art DSR performance seems to be revolving around 0.9 F1-score with the highest (0.939) for French 19th-century fiction with Random Forests classification (Sch\u00f6ch et al., 2016) , 0.87 (Brunner, 2013) or 0.9 (Jannidis et al., 2018) for German novels, 0.85 for Anglophone texts with noisy OCR (Muzny, Algee-Hewitt and Jurafsky, 2017) . Despite relatively high performance, all implementations require either a general language-specific models (for tagging corpus and extracting features) or standardized typographic and orthographic conventions, which we cannot expect in historical texts across uneven literary and linguistic landscape. Few attempts to make multilingual DSR used highly conventional modern news texts and benefited from databases specific to the media; at their core these implementations remain a collection of rules adjusted to several selected languages (Pouliquen, Steinberger and Best, 2007; Alrahabi, Descl\u00e9s and Suh, 2010) .",
"cite_spans": [
{
"start": 167,
"end": 188,
"text": "(Sch\u00f6ch et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 196,
"end": 211,
"text": "(Brunner, 2013)",
"ref_id": "BIBREF5"
},
{
"start": 303,
"end": 343,
"text": "(Muzny, Algee-Hewitt and Jurafsky, 2017)",
"ref_id": "BIBREF16"
},
{
"start": 885,
"end": 924,
"text": "(Pouliquen, Steinberger and Best, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 925,
"end": 957,
"text": "Alrahabi, Descl\u00e9s and Suh, 2010)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper we propose a multilingual solution for direct speech recognition in historic fictional prose that uses transformer architecture with multilingual sentence embedding and requires minimum amount of \"golden standard\" annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The project was born in relation to Distant Reading for European Literary History (COST Action CA16204) project, and one of its subtasks -direct speech markup. We have therefore focused on the problems as observed in the corpus created within the project: European Literary Text Collection (ELTeC), which is aimed to consist of \"around 2,500 full-text novels in at least 10 different languages\" (https://www.distant-reading.net/). Spanning from 1840 to 1920, ELTeC provides a cross-view of literary traditions and typography conventions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2."
},
{
"text": "The collection presents a number of challenges due to its historic variation, from typographic and orthographic differences, to old vocabulary, to the status of given languages at the time, with some, most notably Norwegian, undergoing at the time the process of being established as a standardized written language. Another challenge results from the varying origin of the texts in the subcollectionssome were contributed from existing open-source collections, while others, e.g. Romanian, due to lack of digitized collections in respective languages were scanned, OCR-ed and annotated by the Action members specifically for EL-TeC. Detailed information on the process and rules guiding the creation of the corpus can be found on the dedicated website https://distantreading.github.io/sampling _ pr o posal.html .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2."
},
{
"text": "We use ELTeC as in its first official release in Level 1 encoding (basic XML-TEI compliant annotation of the texts' division into chapters and paragraphs), covering the following languages: English, German, Italian, French, Romanian, Slovene, Norwegian, Portuguese, Serbian. We do not introduce changes in the original texts and select five samples per language of around 10,000 words each, with every sample drawn from a different novel. We use random sampling and preserve information about paragraphs and sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2."
},
{
"text": "The samples were manually annotated by JB, W\u0141 and A\u0160, with two-fold purpose in mind: 1) they were used to train the model, 2) they were \"the golden standard\" to compare baseline performance to. At this early stage of the project we did not calculate inter-annotator agreement as in the case of some languages with which only one of us would be familiar the texts were annotated twice by the same person. In the next stage of the project we plan to involve the Action members in providing and verifying annotations, which will allow us to examine the quality of the annotations better. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2."
},
{
"text": "Typographic conventions such as various quotation marks or dashes (see Table 2 below) are strong indicators of the direct speech. Based on them, we have constructed a baseline that relies on regular expressions to extract occurrences of unambiguously marked direct speech. In the languages that use dashes to mark dialogue, the challenge was to separate reporting clauses embedded in a sentence. The results obtained using this baseline were compared with those of manual annotation to assess its performance. Table 2 : Conventions of marking direct speech across languages, as accounted for in the baseline (the above conventions apply to non-normalized ELTeC corpus, but not necessarily to the 19th-century typographic traditions in general).",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 2",
"ref_id": null
},
{
"start": 510,
"end": 517,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rule-based Approach and Baseline to Evaluate Model",
"sec_num": "3.1"
},
{
"text": "For many European languages with a high degree of standardization of typographic conventions this approach is extremely effective. For example, in English where the words spoken are enclosed in double quotation marks, narrator's inclusions are easy to identify, therefore the example sentence: \"I see,\" said Rachel; \"it is the same figure, but not the same shaped picture.\" may be captured using simple regular expression: (\".+?\"). Other languages, like French, not only use different symbols for quotations (\u00ab\u2026\u00bb), but also tend to omit them in dialogues for the initial dashes. Despite this, the performance of the rulesbased approach decreases only slightly. With the lack of clear separation of the direct speech, which is often the case for the early 19th-century editions, baseline performance drops substantially: for the German sample without proper marks it achieves 0.68 accuracy and only 0.18 recall (F1 = 0.04).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Approach and Baseline to Evaluate Model",
"sec_num": "3.1"
},
{
"text": "Other common problems include no clear mark at the end of an utterance, no difference in marking direct speech and proper names, irony, or other pragmatic shifts that introduce subjective perspective, such as characters using metaphorical phrases, e.g. \"little man\" indicating not that the person addressed this way is short, but is treated with less respect by the speaker. These irregularities are the reason behind the decrease in baseline performance, with the worst results for Norwegian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Approach and Baseline to Evaluate Model",
"sec_num": "3.1"
},
{
"text": "Deep learning solution that has distributed understanding of the direct speech features in multilingual environment may provide a way to get beyond typographic conventions or language-specific models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Approach and Baseline to Evaluate Model",
"sec_num": "3.1"
},
{
"text": "While new developments in deep learning have had a significant impact on numerous natural language processing (NLP) tasks, one solution that has gained increased attention in recent months is BERT (Devlin et al., 2018) , or Bidirectional Encoder Representations from Transformers. This new representation model holds a promise of greater efficiency of solving NLP problems where the availability of training data is scarce. Inspired by its developers' proposed examples of studies done on Named Entity Recognition (https://huggingface.co/transformers/ examples.html), we adjusted discussed classifying method to work on the data annotated for direct speech utterances.",
"cite_spans": [
{
"start": 197,
"end": 218,
"text": "(Devlin et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adopted Deep Learning Solution",
"sec_num": "3.2"
},
{
"text": "BERT is based on Transformer architecture, \"an attention mechanism that learns contextual relations between words (or sub-words) in a text. In its vanilla form, Transformer includes two separate mechanisms -an encoder that reads the text input and a decoder that produces a prediction for the task.\" (Horev, 2018) . As learning in BERT happens both in left-to-right and right-to-left contexts, it manages to detect semantic and syntactic relations with greater accuracy than previous approaches. The model is trained on the entire Wikipedia and Book Corpus (a total of ~3,300 million tokens), currently covering 70 languages. The last part was specifically important for our purposes, given that we aimed to provide a solution that could work well across all languages in ELTeC corpus.",
"cite_spans": [
{
"start": 300,
"end": 313,
"text": "(Horev, 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adopted Deep Learning Solution",
"sec_num": "3.2"
},
{
"text": "Our solution consisted of several steps. First, we sampled five 10,000 word samples per language collection of EL-TeC and manually annotated it for direct speech. We followed TEI guidelines annotating spoken and marked thought-out utterances into <said> </said> tags. Based on that, we converted our datasets into BERT-accepted column format of token and label (I for direct, O for indirect speech), with spaces marking the end of a paragraph (in alteration to NER solution that divided the text into sentences). Our sample paragraph <said>\u00bbIch bin derselben Meinung\u00ab</said>, rief Benno T\u00f6nnchen eifrig.</p> would thus be turned into:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adopted Deep Learning Solution",
"sec_num": "3.2"
},
{
"text": "Ich I bin I derselben I Meinung I , O rief O Benno O T\u00f6nnchen O eifrig O . O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adopted Deep Learning Solution",
"sec_num": "3.2"
},
{
"text": "In the next step, we collated our samples together and divided our dataset into train, test, and dev text files, following proportion of 0.8, 0.1, 0.1, ending with ~40,000 tokens per language, and 360,000 or 320,000 tokens total in training data, depending on the test conducted. The number depended on whether we included all languages or conducted a leave-one-out test. To ensure that the model learned a multilingual perspective, we introduced paragraph mixing, so a paragraph in a given language would occur every 8 or 9 paragraphs. We trained our model with similar parameters as the NER solution we followed, that is with 3 or 2 epochs and batch size of 32. We found that decreasing the number of epochs to 2 improved model performance by 1-2%. We also increased the maximal length of a sequence, due to errors coming from longer sentences in some of the languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adopted Deep Learning Solution",
"sec_num": "3.2"
},
{
"text": "While we attempted increasing the number of epochs in the training, we realized the model performance was reaching its plateau at 3, pointing to the need to adopt other solutions to further boost its efficiency. We have also tried training on 1/2 and 3/4 of the training dataset, noting that performance drop would only occur when going to half of the training set, again indicating the possibility of having reached plateau, or a need for introducing more variance of conventions when increasing the amount of training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adopted Deep Learning Solution",
"sec_num": "3.2"
},
{
"text": "General model performance is presented in Table 4 . Aligning with our intuition, the overall behavior of the multi-language model performs slightly worse than the rule-based approach applied individually to each language. Table 4 : General model performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 49,
"text": "Table 4",
"ref_id": null
},
{
"start": 222,
"end": 229,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "To scrutinize the above intuition, we performed a series of leave-one-out tests, recording the performance of each model with one of the languages being excluded. The results are shown in Table 5 . The scores obtained while excluding Norwegian and Italian suggest that in our composite model, some of the less-standardized languages might distort the final results. While this in itself might speak against choosing a multi-language approach, the fact that inclusion of the more-standardized languages in the model improves direct speech recognition for all languages indicates the usefulness of such model for auto-matic tagging of these parts of multilingual corpora for which regular expression based solutions are not good enough. The difference between the general model and the set of its leave-one-out variants turned out to be minor, leading to a conclusion that the general model exhibits some potential to extract direct speech despite local differences between the languages -suffice to say that the dispersion between the languages in the rule-based approach was much more noticeable. Table 5 : Leave-one-out performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 5",
"ref_id": null
},
{
"start": 1097,
"end": 1104,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "Examination of the misclassifications of the model reveal three major sources of errors: narrative structures, size-related uncertainty and noise in pattern-learning. First person narration is often labeled as \"direct speech\" and linguistically these cases may appear inseparable. This applies not only to a general narrative mode of a novel, but also to the pseudo-documental entries (like letters, diaries) and other \"intradiagetic\" shifts, with characters becoming narrators. This points to the possible need of using separate DSR models for different narrative modes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "Size of the paragraph seems to influence model's judgement substantially: in longer paragraphs the model expects a mix of direct and indirect clauses (even if the text is homogenous), while one-sentence paragraphs tend to be marked as direct speech. This is in line with findings of Kovaleva et al. (2019) and Clark et al. (2019) , showing that attention of BERT is strongly connected to delimiters between BERT input chunks and token alignment within them, as well as sentences across the training data that share similar syntax structure but not semantics. We also observed that many cases that would be easily detected by a rule-based approach are recognized wrongly by BERTbased model: this suggests a certain level of noise in model's decisions (e.g., quotation marks are used for different purposes within the corpus). Abundance of the [reported clause] -> [reporting clause] -> [reported clause] pattern also blurs the model and forces it to anticipate this structure.",
"cite_spans": [
{
"start": 283,
"end": 305,
"text": "Kovaleva et al. (2019)",
"ref_id": null
},
{
"start": 310,
"end": 329,
"text": "Clark et al. (2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "It is unclear how important are linguistic features of direct and non-direct speech for the model, but errors suggest it pays some attention to imperative mode, personal pronouns, proper names, interjections and verb forms, while heavily relying on punctuation. The last one seems particularly important for misclassifications originating from the expectation that a sentence preceded by a colon or ending with a question or exclamation mark should be classified as direct speech. In a few cases we do not know if the model is wrong or right, because a context of one paragraph could be not enough for a human reader to make a correct judgement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "Our project gave us a number of findings in regard to the possibility of developing a uniform solution for direct speech annotation. First of all, we observe that inclusion of languages marking direct speech in more standardized conventions in the model boosts its general performance, improving classification also for literary traditions (or languages) with less regularities in spelling and typography. This is particularly important in the context of corpora such as ELTeC, which gather texts from several languages, including ones that are given relatively little attention in terms of the development of suitable NLP solutions, and present historical variants of the languages, often not well covered in contemporary language representations. It is also important for annotation of texts that feature extensive interjections from other languages, e.g. French dialogue in Polish and Russian novels, a phenomenon common in 19th-century literature involving gentry and bourgeoise characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "The performance of the model also hints at possible latent imbalances in the corpus which may introduce additional noise and structural problems. In future tests it will be necessary to control the effects of texts coming from first editions (historical language and typographic conventions) and modern reprints (used in some of the ELTeC subcollections); and, while we have not observed significant correlated impact on the results, perhaps also account for language families (Germanic vs. Romance vs. Slavic) and scripts (Cyrillic vs. Latin). The impact of first-person narratives on the instability of the performance also seems to be a factor. Finally, imbalance of \"quote\"-based and \"dash\"-based conventions of marking direct speech in the corpus may have introduced additional punctuation-driven noise. Given the above, it is reasonable to attempt conducting experiments with removed direct speech marks altogether, examining the possibility of guiding a model away from the surface-level punctuation features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "Since the transformers-based solution performs better than the baseline in the situations of increased uncertainty and lack of orthographical marks, it is feasible to expect its stable performance also in texts with poor OCR or in historic texts in European languages unseen by the model. These conditions are easily testable in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "The project was launched as a part of a three-year collaborative research project \"Deep Learning in Computational Stylistics\" between the University of Antwerp and the Institute of Polish Language (Polish Academy of Sciences), supported by Research Foundation of Flanders (FWO) and the Polish Academy of Sciences. JB, ME, AL, A\u0160 and MW were funded by \"Large-Scale Text Analysis and Methodological Foundations of Computational Stylistics\" (NCN 2017/26/E/ HS2/01019) project supported by Polish National Science Centre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6."
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Direct Reported Speech in Multilingual Texts: Automatic Annotation and Semantic Categorization",
"authors": [
{
"first": "M",
"middle": [],
"last": "Alrahabi",
"suffix": ""
},
{
"first": "J.-P",
"middle": [],
"last": "Descl\u00e9s",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Suh",
"suffix": ""
}
],
"year": 2010,
"venue": "Twenty-Third International FLAIRS Conference",
"volume": "",
"issue": "",
"pages": "162--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alrahabi, M., Descl\u00e9s, J.-P. & Suh J. (2010). Direct Re- ported Speech in Multilingual Texts: Automatic An- notation and Semantic Categorization. In Twenty-Third International FLAIRS Conference. Menlo Park: AAAI Press, pp. 162-167.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Bayesian Mixed Effects Model of Literary Character",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Underwood",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "370--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bamman, D., Underwood, T., & Smith N.A. (2014). A Bayesian Mixed Effects Model of Literary Character. In Proceedings of the 52nd Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers). Baltimore, Maryland: Association for Compu- tational Linguistics, pp. 370-379.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "GutenTag: An NLP-Driven Tool for Digital Humanities Research in the Project Gutenberg Corpus",
"authors": [
{
"first": "J",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hammond",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Fourth Workshop on Computational Linguistics for Literature. Denver: Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brooke, J., Hammond, A., & Hirst G. (2015). GutenTag: An NLP-Driven Tool for Digital Humanities Research in the Project Gutenberg Corpus. In Proceedings of the Fourth Workshop on Computational Linguistics for Literature. Denver: Association for Computational Linguistics, pp. 42-47.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using Models of Lexical Style to Quantify Free Indirect Discourse in Modernist Fiction",
"authors": [
{
"first": "J",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hammond",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2017,
"venue": "Digital Scholarship in the Humanities",
"volume": "32",
"issue": "2",
"pages": "234--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brooke, J., Hammond, A. & Hirst G. (2017). Using Models of Lexical Style to Quantify Free Indirect Discourse in Modernist Fiction. Digital Scholarship in the Humanities, 32(2), pp. 234-250.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic Recognition of Speech, Thought, and Writing Representation in German Narrative Texts",
"authors": [
{
"first": "A",
"middle": [],
"last": "Brunner",
"suffix": ""
}
],
"year": 2013,
"venue": "Literary and Linguistic Computing",
"volume": "28",
"issue": "4",
"pages": "563--575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brunner, A. (2013). Automatic Recognition of Speech, Thought, and Writing Representation in German Narrative Texts. Literary and Linguistic Computing, 28(4), pp. 563-575.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Computation into Criticism: A Study of Jane Austen's Novels",
"authors": [
{
"first": "J",
"middle": [],
"last": "Burrows",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burrows, J. (1987). Computation into Criticism: A Study of Jane Austen's Novels. Oxford: Clarendon Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Before the 'Inward Turn': Tracing Represented Thought in the French Novel (1800-1929)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Conroy",
"suffix": ""
}
],
"year": 2014,
"venue": "Poetics Today",
"volume": "35",
"issue": "1-2",
"pages": "117--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conroy, M. (2014). Before the 'Inward Turn': Tracing Represented Thought in the French Novel (1800-1929). Poetics Today, 35(1-2), pp. 117-171.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee K., and Toutanova K. (2019). BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT 2019. Minneapolis, Minnesota: Association for Computational Linguistics, pp. 4171-4186.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Narrative Discourse: An Essay in Method",
"authors": [
{
"first": "G",
"middle": [],
"last": "Genette",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Genette, G. (1980). Narrative Discourse: An Essay in Method. Ithaca, NY: Cornell University Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Identification of Speakers in Novels",
"authors": [
{
"first": "H",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Barbosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1312--1320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He, H., Barbosa, D., & Kondrak, G. (2013). Identification of Speakers in Novels. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Sofia, Bulgaria: Association for Computational Linguistics, pp. 1312-1320.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Moonstone and The Coquette: Narrative and Epistolary Style",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Hoover",
"suffix": ""
}
],
"year": 2014,
"venue": "Digital Literary Studies: Corpus Approaches to Poetry, Prose and Drama. NY: Routledge",
"volume": "",
"issue": "",
"pages": "64--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoover, D.L. (2014). The Moonstone and The Coquette: Narrative and Epistolary Style. In D.L. Hoover, J. Culpeper, K. O'Halloran. Digital Literary Studies: Corpus Approaches to Poetry, Prose and Drama. NY: Routledge, pp. 64-89.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT Explained: State of the art language model for NLP",
"authors": [
{
"first": "R",
"middle": [],
"last": "Horev",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Horev, R. (2018). BERT Explained: State of the art language model for NLP. Medium, 17.11.2018. https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Analysing Direct Speech in German Novels",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jannidis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zehe",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Konle",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hotho",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Krug",
"suffix": ""
}
],
"year": 2018,
"venue": "DHd 2018: Digital Humanities. Konferenzabstracts",
"volume": "",
"issue": "",
"pages": "114--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jannidis, F., Zehe, A., Konle, L., Hotho, A., & Krug M. (2018). Analysing Direct Speech in German Novels. In DHd 2018: Digital Humanities. Konferenzabstracts, pp. 114-118.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Loudness in the Novel. Stanford Literary Lab Pamphlets",
"authors": [
{
"first": "H",
"middle": [],
"last": "Katsma",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsma, H. (2014). Loudness in the Novel. Stanford Literary Lab Pamphlets, 7.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minding the Source: Automatic Tagging of Reported Speech in Newspaper Articles",
"authors": [
{
"first": "R",
"middle": [],
"last": "Krestel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bergler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Witte",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "2823--2828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krestel, R., Bergler, S., & Witte, R. (2008). Minding the Source: Automatic Tagging of Reported Speech in Newspaper Articles. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08). Marrakech, Morocco: European Language Resources Association, pp. 2823-2828.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dialogism in the Novel: A Computational Model of the Dialogic Nature of Narration and Quotations",
"authors": [
{
"first": "G",
"middle": [],
"last": "Muzny",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Algee-Hewitt",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2017,
"venue": "Digital Scholarship in the Humanities",
"volume": "32",
"issue": "",
"pages": "1131--1152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muzny, G., Algee-Hewitt M., & Jurafsky D. (2017). Dialogism in the Novel: A Computational Model of the Dialogic Nature of Narration and Quotations. Digital Scholarship in the Humanities, 32(suppl. 2), pp. 1131-1152.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic Direct Speech Tagging in Russian prose: markup and parser",
"authors": [
{
"first": "I",
"middle": [
"A"
],
"last": "Nikishina",
"suffix": ""
},
{
"first": "I",
"middle": [
"S"
],
"last": "Sokolova",
"suffix": ""
},
{
"first": "D",
"middle": [
"O"
],
"last": "Tikhomirov",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bonch-Osmolovskaya",
"suffix": ""
}
],
"year": 2019,
"venue": "Computational Linguistics and Intellectual Technologies",
"volume": "18",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikishina, I.A., Sokolova I.S., Tikhomirov D.O., and Bonch-Osmolovskaya, A. (2019). Automatic Direct Speech Tagging in Russian prose: markup and parser. In Computational Linguistics and Intellectual Technologies, 18.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic Detection of Quotations in Multilingual News",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pouliquen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Best",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "487--492",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pouliquen, B., Steinberger R. & Best C. (2007). Automatic Detection of Quotations in Multilingual News. In Proceedings of the International Conference Recent Advances in Natural Language Processing (RANLP'2007). Borovets, Bulgaria, pp. 487-492.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Straight Talk! Automatic Recognition of Direct Speech in Nineteenth-Century French Novels",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sch\u00f6ch",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Schl\u00f6r",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Popp",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Brunner",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Henny",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Calvo Tello",
"suffix": ""
}
],
"year": 2016,
"venue": "Digital Humanities 2016: Conference Abstracts",
"volume": "",
"issue": "",
"pages": "346--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sch\u00f6ch, C., Schl\u00f6r D., Popp S., Brunner A., Henny U. & Calvo Tello J. (2016). Straight Talk! Automatic Recognition of Direct Speech in Nineteenth-Century French Novels. In Digital Humanities 2016: Conference Abstracts. Krak\u00f3w: Jagiellonian University & Pedagogical University, pp. 346-353.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Evolution of Dialogues: A Quantitative Study of Russian Novels (1830-1900)",
"authors": [
{
"first": "O",
"middle": [],
"last": "Sobchuk",
"suffix": ""
}
],
"year": 2016,
"venue": "Poetics Today",
"volume": "37",
"issue": "",
"pages": "137--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sobchuk, O. (2016). The Evolution of Dialogues: A Quantitative Study of Russian Novels (1830-1900). Poetics Today, 37(1), pp. 137-154.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic Recognition of Direct Speech without Quotation Marks. A Rule-Based Approach",
"authors": [
{
"first": "N",
"middle": [
"D T"
],
"last": "Tu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Krug",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Brunner",
"suffix": ""
}
],
"year": 2019,
"venue": "DHd 2019 Digital Humanities: multimedial & multimodal. Konferenzabstracts. Frankfurt am Main",
"volume": "",
"issue": "",
"pages": "87--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tu, N.D.T., Krug, M. & Brunner, A. (2019). Automatic Recognition of Direct Speech without Quotation Marks. A Rule-Based Approach. In DHd 2019 Digital Humanities: multimedial & multimodal. Konferenzabstracts. Frankfurt am Main, Mainz, pp. 87-89.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mr. Bennet, His Coachman, and the Archbishop Walk into a Bar but Only One of Them Gets Recognized: On the Difficulty of Detecting Characters in Literary Texts",
"authors": [
{
"first": "H",
"middle": [],
"last": "Vala",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Piper",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "769--774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vala, H., Jurgens D., Piper A., & Ruths, D. (2015). Mr. Bennet, His Coachman, and the Archbishop Walk into a Bar but Only One of Them Gets Recognized: On the Difficulty of Detecting Characters in Literary Texts. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal: Association for Computational Linguistics, pp. 769-774.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "European Literary Text Collection. Distant Reading for European Literary History (COST Action CA16204)",
"authors": [
{
"first": "",
"middle": [],
"last": "ELTeC",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ELTeC (2019). European Literary Text Collection. Distant Reading for European Literary History (COST Action CA16204), https://github.com/COST-ELTeC.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": ". ; -..., -; \u00ab ... \u00bb ; \" ... \" Norwegian -... ; \u00ab ... \u00bb Portuguese -... ; -..., -Romanian -... ; \" ... \" Serbian -... ; -... -Slovene \" ... \" ; \" ... \"",
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "Sample summaries and direct speech ratio (word level).",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}