{
"paper_id": "C16-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:01:36.884378Z"
},
"title": "Detecting Sentence Boundaries in Sanskrit Texts",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Hellwig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "D\u00fcsseldorf University",
"location": {
"postCode": "SFB 991"
}
},
"email": "ohellwig@phil-fak.uni-duesseldorf.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The paper applies a deep recurrent neural network to the task of sentence boundary detection in Sanskrit, an important, yet underresourced ancient Indian language. The deep learning approach improves the F scores set by a metrical baseline and by a Conditional Random Field classifier by more than 10%.",
"pdf_parse": {
"paper_id": "C16-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "The paper applies a deep recurrent neural network to the task of sentence boundary detection in Sanskrit, an important, yet underresourced ancient Indian language. The deep learning approach improves the F scores set by a metrical baseline and by a Conditional Random Field classifier by more than 10%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most NLP tasks that deal with written texts take it for granted that sentences are separated reliably by punctuation marks, although punctuation has been added quite late to many writing systems. The large corpora in Old-and Middle-Indian languages, which belong to the central sources for understanding the history of South Asia, generally lack dedicated punctuation marks. This paper applies deep recurrent neural networks (RNN) to a combination of morphological and lexical features for detecting sentence boundaries (SB) in Sanskrit, the oldest and most important of these Indian languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditional editions of Sanskrit texts use a scriptio continua, which lacks several orthographic elements that structure texts in modern Western languages. Single words are frequently not separated by blank spaces due to missing orthographic regulation, or because the words are merged through the euphonic rules called sandhi (\"connection\"). 1 Moreover, Sanskrit texts don't have a consistent and unambiguous system for marking SBs. Editors and scribes insert so-called (double) dan . d . as (\"sticks\", indicated by | and || in this paper) to mark the end of metrical structures. The position of these dan . d . as can be derived directly from the prosodic structure of a text, and dan . d . as always occur at the end of text lines, which coincide with half-verses in most printed editions. While single dan . d . as mark the end of a half-verse, double dan . d . as should, at least theoretically, indicate where a stanza in the given metre is completed. Double dan . d . as typically occur after every second line or half-verse of a metrical text, because the stanzas are finished at these points. In this function, they are meant to improve the readability of a text. As many sentences terminate at the end of a half-verse or of a stanza, dan . d . as provide a good baseline for punctuation prediction (refer to Table 3 ). Many editors, however, also insert double dan . d . as after a single or after three metrical lines, when they feel that a sentence is completed at these positions. 2 In this way, the purely metrical motivation of double dan . d . as is mingled with the new function of a punctuation mark, leaving aside the fact that the philologically interesting inner-line SBs cannot be marked in the dan . d . a system.",
"cite_spans": [
{
"start": 1495,
"end": 1496,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1319,
"end": 1326,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Linguistic peculiarities of Sanskrit complicate the task. English, for example, encodes a large amount of its syntax through a strongly regulated word order, and structures its sentences by subordinating conjunctions. While these data provide a lot of the information necessary for restoring punctuation, Sanskrit has a rather loose word order with a tendency to subject-object-verb constructions, it uses conjunctions quite sparingly, and their position provides only weak indications for the presence of SBs. As the Indian",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 The two words parvatasya agre, for example, are merged into one string parvatasy\u0101gre by the rule a+a=\u0101; refer to Kielhorn (1888, 6ff.) for an overview. Sandhi is one of the main problems for Sanskrit NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Refer to Hopkins (1901, 194) : \"The number of verses in a (. . . ) stanza may be decreased or increased by one or two (. . . ). Sometimes, however, where one or three hemistichs make a stanza, it is merely a matter of editing.\" grammatical tradition has emphasized (Section 2), determining the boundaries of a sentence is equivalent to grasping its full semantic meaning.",
"cite_spans": [
{
"start": 11,
"end": 30,
"text": "Hopkins (1901, 194)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The need for reliable punctuation on sentence level is beyond question. Access to full sentences is central for NLP tasks such as dependency parsing or role labeling. In addition, detecting SBs is also important from a philological perspective. The metrical texts considered in this paper belong to a tradition of (pseudo-)oral poetry that still survives in parts of India (Smith, 1987) . The constituent structure of sentences (e.g., extensive right branchings) or the presence of enjambements, which are easily detected when SBs are known, provide important evidence for understanding the transition of these epics from an oral to a written state (Sellmer, 2015; Parry, 1930) . More generally, Sanskrit provides a challenging application scenario for NLP due to the richness of its phonetics (sandhi), morphology, lexicon, and semantics. In spite of its historical importance, it is heavily underresourced from the perspective of NLP, and the size of its corpus prevents a purely manual annotation of linguistic phenomena. 3 The remainder of the paper is structured as follows. Section 2 sketches how a sentence was defined in the tradition of classical Indian grammar, and summarizes related research from NLP and automatic speech recognition. Section 3 reports results of a test annotation, details the annotation guideline, and describes the data prepared for this study. Section 4 introduces the features and the deep learning model. Section 5 describes the evaluation baselines given by prosodical markers and a CRF model, discusses the performance of the model, and identifies critical areas. Section 6 summarizes the paper.",
"cite_spans": [
{
"start": 373,
"end": 386,
"text": "(Smith, 1987)",
"ref_id": "BIBREF30"
},
{
"start": 649,
"end": 664,
"text": "(Sellmer, 2015;",
"ref_id": "BIBREF29"
},
{
"start": 665,
"end": 677,
"text": "Parry, 1930)",
"ref_id": "BIBREF24"
},
{
"start": 1026,
"end": 1027,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Classical Sanskrit was systematically de- and prescribed in the famous grammar As . t .\u0101 dhy\u0101y\u012b of P\u0101n . ini (around 350 BCE, Scharfe (1977) ), who used Sanskrit as a metalanguage, and applied methods such as rewrite rules and rule inheritance for minimizing the text length (Kiparsky, 2009) . While the As . t .\u0101 dhy\u0101y\u012b deals exhaustively with phonetics and morphology, syntax only plays a subordinate role. Its main syntactic contribution is the k\u0101raka theory, which describes the interaction between nominal case suffixes and verbs (Cardona, 1976, 215ff.) . The grammatical tradition following P\u0101n . ini provided empirical, verb-centered definitions of sentences (Matilal, 1966, 377ff.) . Because many Sanskrit sentences don't overtly express the copula \"to be\", these definitions gave rise to extended discussions about the underlying grammatical and cognitive structures of sentences such as pus . pam . raktam (flower:NSG red:NSG), which may mean \"a red flower\" or the complete sentence \"the flower is red\" (Deshpande, 1991) . Missing copulae introduce a high degree of ambiguity in the SB detection task, as will be seen in Section 5.3. The later philosophical school of Ny\u0101ya concentrated on the conditions that make a sentence meaningful and complete for a competent speaker of Sanskrit (Matilal, 1966, 385ff.) , and that include the semantic compatibility of the words (yogyat\u0101) and their correct grouping (sam . nidhi; see Kulkarni et al. (2015) ). If an utterance fulfills these conditions, it creates the intended cognition (\u015b\u0101bdabodha) in the listener. So, the Indian tradition claims that only a competent speaker can determine the boundary of a sentence, but does not provide formal criteria for deciding if a sentence is complete or not.",
"cite_spans": [
{
"start": 125,
"end": 139,
"text": "Scharfe (1977)",
"ref_id": "BIBREF26"
},
{
"start": 274,
"end": 290,
"text": "(Kiparsky, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 534,
"end": 557,
"text": "(Cardona, 1976, 215ff.)",
"ref_id": null
},
{
"start": 665,
"end": 688,
"text": "(Matilal, 1966, 377ff.)",
"ref_id": null
},
{
"start": 1012,
"end": 1029,
"text": "(Deshpande, 1991)",
"ref_id": "BIBREF5"
},
{
"start": 1295,
"end": 1318,
"text": "(Matilal, 1966, 385ff.)",
"ref_id": null
},
{
"start": 1433,
"end": 1455,
"text": "Kulkarni et al. (2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Related research in NLP mainly deals with punctuation restoration in speech transcripts, and in languages such as Chinese that traditionally don't use punctuation marks for structuring syntactic sequences. Liu et al. (2005) contrast Hidden Markov Models (HMM), Maximum Entropy classifiers and Conditional Random Fields (CRF). They obtain a significant decrease of the SB detection error when processing lexical and automatically induced POS features using a CRF. Baldwin and Joseph (2009) perform simultaneous case and punctuation restoration in English texts. They process automatically annotated lexical, POS, and chunk features with a linear kernel Support Vector Machine. The authors report the highest F score for punctuation restoration, when they iteratively label the training and test sets with the output of the classifier, and retrain with the augmented feature space (\"iterative retagging\"). Zhao et al. (2012) train CRFs on the task of inserting punctuation in Chinese text, using features from different annotation levels of a Chinese treebank, and observe an increase in the F score, when higher level features such as POS tags or chunks are combined with lexical information. Tilk and Alum\u00e4e (2015) model the restoration of commas and periods in Estonian speech transcripts with a two-stage Long Short-term Memory (LSTM) approach. The first LSTM is trained on a large written corpus with lexical information in 1-hot encoding as predictors and the associated punctuation as predicted classes. Following Seide et al. (2011, 26) , the authors combine the output of the last hidden layer of this text LSTM with duration features from a smaller corpus of punctuated speech transcripts. This combined feature set is fed into a second LSTM that performs the final classification.",
"cite_spans": [
{
"start": 206,
"end": 223,
"text": "Liu et al. (2005)",
"ref_id": "BIBREF20"
},
{
"start": 463,
"end": 488,
"text": "Baldwin and Joseph (2009)",
"ref_id": "BIBREF0"
},
{
"start": 904,
"end": 922,
"text": "Zhao et al. (2012)",
"ref_id": "BIBREF36"
},
{
"start": 1518,
"end": 1541,
"text": "Seide et al. (2011, 26)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Algorithms based on short range models (n-grams, HMMs) or those requiring strict positional information may not be applicable to Sanskrit for several reasons. Sanskrit has a relatively free word order (Gillon and Shaer, 2005; Hock, 2013) , and encodes many syntactical relations through its morphology, so that the positional information inherent in an n-gram model may not contribute as strongly as in English or Chinese. In addition, Sanskrit NLP suffers from data sparsity in the lexical domain. The corpus on which the models are trained contains 3,950,000 disambiguated lexical tokens. New data for pretraining a lexical model cannot be generated on the fly, because the phonetic phenomenon of Sandhi introduces a high degree of ambiguity (Hellwig, 2015b) , and sufficiently large digitized Sanskrit corpora are missing.",
"cite_spans": [
{
"start": 201,
"end": 225,
"text": "(Gillon and Shaer, 2005;",
"ref_id": "BIBREF7"
},
{
"start": 226,
"end": 237,
"text": "Hock, 2013)",
"ref_id": "BIBREF12"
},
{
"start": 744,
"end": 760,
"text": "(Hellwig, 2015b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "CRFs as used by Liu et al. (2005) and Zhao et al. (2012) are more flexible than HMMs in modeling the feature space involved in SB detection, because their input features can, in principle, come from arbitrarily long ranges around a focus word, and because they are trained to maximize the classification accuracy. RNNs as used by Tilk and Alum\u00e4e (2015) are equally able to capture the long-range interactions between morphology, lexicon, and output symbols that can be hypothesized to play an important role in SB detection. Section 5 will compare their efficiency in the present task. The problems of exploding and vanishing gradients (Pascanu et al., 2013) can be handled with Long Short-Term Memory units (LSTM, for the vanishing ones (Hochreiter and Schmidhuber, 1997) , combined with a gradient cutoff) or with Hessian free training of the network (Martens and Sutskever, 2011) . Stacked LSTMs as used by Sutskever et al. (2014) with bidirectional units (Schuster and Paliwal, 1997) seem to provide a promising approach for labeling SBs in Sanskrit.",
"cite_spans": [
{
"start": 16,
"end": 33,
"text": "Liu et al. (2005)",
"ref_id": "BIBREF20"
},
{
"start": 38,
"end": 56,
"text": "Zhao et al. (2012)",
"ref_id": "BIBREF36"
},
{
"start": 330,
"end": 352,
"text": "Tilk and Alum\u00e4e (2015)",
"ref_id": "BIBREF32"
},
{
"start": 636,
"end": 658,
"text": "(Pascanu et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 738,
"end": 772,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
},
{
"start": 853,
"end": 882,
"text": "(Martens and Sutskever, 2011)",
"ref_id": "BIBREF21"
},
{
"start": 910,
"end": 933,
"text": "Sutskever et al. (2014)",
"ref_id": "BIBREF31"
},
{
"start": 959,
"end": 987,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The discussion in Section 2 has shown that the Sanskrit grammatical tradition does not provide a solid basis for developing a practical annotation guideline for SBs. As a consequence, ten sequences of at least two metrical lines that contain complex syntactic phenomena were annotated independently by three external annotators and the author of the paper. Given the small size of the data set, this annotation was not primarily meant to determine the true inter-annotator agreement (IAA), but rather to obtain quantitative support for ambiguous cases in the annotation guideline. The lines were tokenized according to Western editorial standards without resolving Sandhis and compounds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test annotation",
"sec_num": "3.1"
},
{
"text": "Assuming that a period can be inserted after each of the 360 tokens, the annotation yielded an IAA of 0.805, using Fleiss' \u03ba (Fleiss, 1971) . When only those tokens are considered after which at least one annotator inserted a period, the IAA drops to \u03ba = 0.312. A detailed analysis shows that almost all unanimous annotations concern periods that coincide with (double) dan . d . as, while there is substantial disagreement about inner-line periods.",
"cite_spans": [
{
"start": 125,
"end": 139,
"text": "(Fleiss, 1971)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test annotation",
"sec_num": "3.1"
},
{
"text": "Drawing from the results of the initial annotation and from ideas proposed in Matilal (1966) , this paper defines a Sanskrit sentence as a sequence of words that contains at least an overtly expressed finite verb (type s 1 ; minimal sentence length: one word 4 ), or two non-verbal elements with an unexpressed copula denoting equivalence or existence 5 (type s 2 ). s 1 and s 2 can be expanded by (recursive and/or compound) subordinate clauses and matrix sentences. As a direct consequence, sentences on the s 1 or s 2 levels that are connected by a (coordinating) conjunction such as ca 'and' are interpreted as separate sentences in this paper. The following three cases need special consideration:",
"cite_spans": [
{
"start": 78,
"end": 92,
"text": "Matilal (1966)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Guideline",
"sec_num": "3.2"
},
{
"text": "Overtly Expressed Subjects No period is inserted between main clauses separated by a coordinating conjunction such as ca 'and', if the first sentence overtly expresses the subject, and the following sentences use the same subject without overtly expressing it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guideline",
"sec_num": "3.2"
},
{
"text": "The particle iti The particle iti 'thus' marks the end of a direct speech, or of a personal opinion presented as a direct speech. The direct speech terminated by iti is interpreted as a matrix sentence and, therefore, not separated by an SB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guideline",
"sec_num": "3.2"
},
{
"text": "Formulae and interjections Interjections and formulaic phrases are marked as separate clauses, if they are not embedded as matrix sentences. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guideline",
"sec_num": "3.2"
},
{
"text": "One annotator used the guideline (Section 3.2) to mark sentence ends in 226 chapters with 96,292 lexical tokens, which were drawn from the metrical texts in the Digital Corpus of Sanskrit (DCS, Hellwig (2015a) 7 ). Although each chapter constitutes a single long sequence with unknown punctuation, most metrical texts simulate an oral presentation by inserting the stock line \"[some person] said 8 \" between closed narrative blocks. Therefore, the chapters have been split up into a total of 609 such blocks (\"sequences\"), which represent the individual statements of the persons participating in a conversation. The epic Mah\u0101bh\u0101rata (MBH) contributes most of the data. Because the text has probably grown over centuries and incorporated diverse written and oral sources (Brockington, 1998) , the predominance of the MBH does not bias the data unduly towards the style of one author. A total of 9,562 SBs has been annotated. 85.6% of the SBs coincide with (double) dan . d . as, which provide a strong baseline for SB detection (Table 3) . The annotated chapters contain a total of 9,027 word types, 3,838 of which are hapax legomena. The sentences have a mean length of 10 tokens (median: 8), and 90% of all sentence lengths are found in the interval [3, 20] .",
"cite_spans": [
{
"start": 774,
"end": 793,
"text": "(Brockington, 1998)",
"ref_id": "BIBREF2"
},
{
"start": 1255,
"end": 1258,
"text": "[3,",
"ref_id": null
},
{
"start": 1259,
"end": 1262,
"text": "20]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1031,
"end": 1040,
"text": "(Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.3"
},
{
"text": "This section describes which features were considered for SB detection, and motivates their use. Their influence on the prediction accuracy is reported in Section 5.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Dan . d . a information Because dan . d . as provide a strong baseline for SB detection (see Table 3 ), and omitting them drastically reduces the F score in all configurations, they are used as features in all settings. (Double) dan . d . as are encoded as dummy variables.",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Morphological Information Sanskrit has a rich, though partly ambiguous Indo-Aryan morphology. Nouns, adjectives, pronouns, and declinable verbal participles are inflected in eight cases, three numbers (including dual), and three genders, while finite verbal forms occur in three numbers, three persons, and several tenses and modes (aspects). Although Sanskrit also uses conjunctions to join subordinate and main clauses, verbal subordination is typically expressed by the indeclinable absolutive (gerund; tv\u0101nta and lyabanta). 9 As morphology provides strong indications for the inner structure of a sentence, it is included in the feature set either in a 1-hot encoding (1h, each observed combination of morphological subfeatures is mapped to a distinct position in a 1-hot vector v M ) or in a decomposed encoding, in which each position of v M encodes the presence or absence of a subfeature such as 'nominative' or 'perfect tense' (dec, refer to the featurization in Cotterell and Sch\u00fctze (2015, 1289) ). Neither encoding mode (1h, dec) distinguishes between different morphological derivations of tenses and modes.",
"cite_spans": [
{
"start": 525,
"end": 526,
"text": "9",
"ref_id": null
},
{
"start": 958,
"end": 971,
"text": "Cotterell and",
"ref_id": "BIBREF4"
},
{
"start": 972,
"end": 992,
"text": "Sch\u00fctze (2015, 1289)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Because the DCS does not contain syntactic annotations, morphological information is also used to generate possible syntactic links. Given a context size of s = 5, a word w p at position p in a sequence, and the set of words W q = {w q | |q \u2212 p| \u2264 s, q \u2260 p}, a link between w p and w q is generated if (1) p < q and w p belongs to the same compound (sam\u0101sa) as w q , (2) w p and w q are nominal forms with the same case, number, and gender, (3) one of w p and w q is a verb and the other one a congruent nominative, (4) one of w p and w q is an absolutive and the other one a finite verb, or (5) w p and w q belong to a set of correlative conjunctions and pronouns such as yad\u0101 'when'-tad\u0101 'then' or yad 'which'-tad 'that'. These links are encoded as two sums weighted with 1/|p\u2212q| for the left and right contexts. Although the existence of a link does not guarantee that w p and w q belong to the same sentence, t-tests of the weighted values with the SB labels as binary factor yield highly significant test statistics of t = \u221231.99 (left) and t = 45.70 (right), such that testing the predictive power of these features appears justified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Lexical Information Sanskrit has a rich vocabulary, and Sanskrit authors put importance on the use of synonyms. An unsophisticated text such as the MBH, for example, uses 35 synonyms to denote the warrior Arjuna, or 14 for the concept \"mountain\". As a consequence, one faces considerable data sparsity, when lexical information is used in 1-hot encoding. Low dimensional embeddings built from reduced vector space models (VSM, Turney and Pantel (2010)) or from neural networks (Bengio et al., 2003; Mikolov et al., 2011) have been shown to offer a workaround for this problem. Therefore, word embeddings generated with the word2vec tool (Mikolov et al., 2011) 10 are used as lexical features in all configurations marked with w2v.",
"cite_spans": [
{
"start": 477,
"end": 498,
"text": "(Bengio et al., 2003;",
"ref_id": "BIBREF1"
},
{
"start": 499,
"end": 520,
"text": "Mikolov et al., 2011)",
"ref_id": "BIBREF23"
},
{
"start": 637,
"end": 659,
"text": "(Mikolov et al., 2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "As an alternative to a fully lexicalized model, the setting indecl uses the set of the most frequent 100 conjunctions and indeclinables in 1-hot encoding as the sole lexical information. This setting is motivated by the idea that these words indicate the basic structure of a sentence. Yuret (1998) has shown that pointwise mutual information (PMI) between words can be used for building dependency structures. Therefore, normalized PMI for a window of size s = 5 around each w p is added to the feature space in analogy to the syntactic links described above. PMI is either calculated from lexical information (lpmi), or from a mixture of lexical and word semantic data (lspmi).",
"cite_spans": [
{
"start": 286,
"end": 298,
"text": "Yuret (1998)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "The RNN consists of a linear input layer with a dropout rate of 0.1 (Hinton et al., 2012) , one or more bidirectional LSTM layers without peephole units, and a softmax output layer. All network weights are randomly initialized with a uniform distribution in [\u22120.01, +0.01]. The initial learning rate is set to 0.0008, and linearly decreased to the value of 0.0001. Training is performed with stochastic gradient descent, gradient clipping at the LSTM units, and a constant momentum of 0.95. Because the output of the network is a single binary variable, it is decoded using a threshold of 0.5. The model is implemented in C++.",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Network Architecture and Settings",
"sec_num": "4.2"
},
{
"text": "The experiments reported in Section 5 are performed with a ten-fold cross-validation (CV). In order to make the results comparable to each other, the same random split of the data was used in all experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Architecture and Settings",
"sec_num": "4.2"
},
{
"text": "In order to assess how the features (Section 4.1) influence the classification results, flat networks with dropout and one bidirectional LSTM are trained on subsets of morphological and lexical features. Table 1 shows that the decomposed morphological encoding creates better results than the 1-hot encoding. It may be conjectured that the decomposed version can estimate the relevance of rare morphological features (e.g., genitive dual) from their more frequent subfeatures (e.g., genitive in all numbers). The hard-coded morphological links don't improve the performance, and are therefore discarded from the feature set. Table 2 shows that lexical features have a noticeable, though not too large effect on the classification results. While the fully unlexicalized setting produces the worst results, the combination of word embeddings (w2v) with semantically enriched lexical links (lspmi) produces the best F scores (Table 2 ). This result demonstrates that neural language models create meaningful embeddings even from small training corpora, although the full lexical disambiguation of the training data is certainly helpful in learning proper representations.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Table 1",
"ref_id": null
},
{
"start": 626,
"end": 633,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 923,
"end": 931,
"text": "(Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "5.1"
},
{
"text": "The LSTM models for the final evaluation are trained on decomposed morphological features, neural embeddings of size 200, and semantically enriched lexical links. The basic architecture follows the description in Section 4.2, and the number of inner bidirectional LSTM layers is set to 2 or 3. Table 3 presents the baselines and the results for different LSTM architectures. The prosodical baselines are calculated by inserting an SB either at each dan . d . a or double dan . d . a (\"baseline dan . d . a\"), or at each double dan . d . a only (\"baseline double dan . d . a\"). As remarked in Section 1, editors tend to move double dan . d . as by one line, if they are able to indicate an SB in this way. Therefore, double dan . d . as present a rather precise baseline for SB detection in metrical Sanskrit texts, although their recall is low.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 301,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "5.1"
},
{
"text": "As many previous papers apply CRFs to SB detection (Section 2), CRFs trained with morphological features and different levels of lexical features are used as a second set of baselines. The central rows in Table 3 show that the F scores of CRFs are only slightly higher than those of the prosodical baselines. Error analysis reveals that CRFs base their predictions mainly on dan . d . a information, which explains the comparatively small differences between the F scores of the two baselines (75.55 vs. 73.80).",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparing baseline models and LSTMs",
"sec_num": "5.2"
},
{
"text": "Bidirectional LSTMs significantly outperform both baselines. Comparing Tables 1, 2 and 3 shows that their F scores increase with their depth, i.e., the number of stacked LSTM layers. When using a deeper architecture, the strongest improvements are observed in the model recall. The best single model in Table 3 almost reaches precision and recall of the two metrical baselines, and improves the F score of the double dan . d . a baseline by almost 13%. [Displaced table caption: Table 3 , stratified by sentence length classes (columns 3ff.). Column 1: b i = 1: b(eginning) of sentence s i is detected; e i = 1: e(nd) detected; i i = 1: the i(nner) part of s i does not contain superfluous SBs.] Table 3 demonstrates that RNNs clearly outperform both baselines, and that stacking the bidirectional LSTM layers further improves the performance. However, the results don't tell much about the actual usability of the SB labeler, especially about how many full sentences were labeled correctly, and which sentence structures or types are prone to errors. To assess these questions, a metric similar to the \"strict\" evaluation in Liu and Shriberg (2007) is used. Instead of measuring the annotation precision for single instances of SBs, this metric considers whether full sentences have been annotated correctly, and where errors occur in their annotation. For every sentence s i in the gold annotation it is tested whether the RNN has marked the beginning b i of s i (= the end of s i\u22121 ) and the end e i of s i correctly, and whether it has inserted additional SBs in between b i and e i (variable i i ). A sentence is accepted as correct in this evaluation if b i and e i are correct (b i = 1, e i = 1), and if there exist no superfluous SBs between them (i i = 1). Table 4 shows the proportions of the 2^3 = 8 combinations of these three binary labels for the output of the best RNN from Table 3 (3 hidden layers, dropout). The model achieves an overall \"strict\" accuracy of 65.08% (configuration 1-1-1, column 2). If the configurations in which only one SB has been missed (0-1-1, 1-0-1) or in which wrong SBs have been inserted between two correctly labeled SBs (1-1-0) are accepted as partial matches, this \"lenient\" accuracy goes up to 95.54%.",
"cite_spans": [
{
"start": 1099,
"end": 1122,
"text": "Liu and Shriberg (2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 71,
"end": 88,
"text": "Tables 1, 2 and 3",
"ref_id": "TABREF1"
},
{
"start": 302,
"end": 309,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 452,
"end": 459,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 668,
"end": 675,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1726,
"end": 1733,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 1849,
"end": 1856,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparing baseline models and LSTMs",
"sec_num": "5.2"
},
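The sentence-level "strict"/"lenient" evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name is invented, and boundaries are assumed to be given as sorted token indices after which a sentence ends.

```python
# Sketch of the sentence-based evaluation (following Liu and Shriberg, 2007):
# for each gold sentence, record whether its beginning (b), end (e) were
# detected and whether no spurious boundary lies inside it (i).

def evaluate_sentences(gold_bounds, pred_bounds, n_tokens):
    """gold_bounds / pred_bounds: sorted token indices after which an SB occurs."""
    pred = set(pred_bounds)
    # Gold sentence spans, with implicit boundaries at 0 and n_tokens.
    cuts = [0] + sorted(gold_bounds) + [n_tokens]
    configs = {}
    strict = lenient = total = 0
    for s, e in zip(cuts, cuts[1:]):
        total += 1
        b = 1 if s == 0 or s in pred else 0            # beginning detected
        e_ok = 1 if e == n_tokens or e in pred else 0  # end detected
        inner = 0 if any(s < p < e for p in pred) else 1  # no superfluous SBs
        key = (b, e_ok, inner)
        configs[key] = configs.get(key, 0) + 1
        if key == (1, 1, 1):
            strict += 1
        if key in {(1, 1, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)}:
            lenient += 1
    return strict / total, lenient / total, configs
```

The eight possible (b, e, i) configurations returned in `configs` correspond to the 2^3 combinations tabulated in the paper; "strict" counts only 1-1-1, while "lenient" additionally accepts the three single-error configurations.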
{
"text": "It has been noted in Section 1 that sentences starting or ending in the middle of a text line convey important text historical information. In addition, the CRF failed almost completely to detect these more unusual SBs. In order to examine how these cases are handled by the LSTM, the output of the best LSTM from Table 3 has been stratified according to the start and end positions of the sentences. It may be expected that sentences starting at the beginning of a text line (b) and ending at a dan . d . a (d) or double dan . d . a (d2) may have lower error rates than those starting and/or ending in the middle of a line (m). The results displayed in Table 4 provides another view of the same data, which have been stratified with regard to length classes of sentences. As could be hypothesized from Table 5 , the highest accuracy is observed for length class 3, which contains, among others, all sentences that extend over two lines between two double dan . d . as (subset of configuration b-d2). A closer inspection of class 5 (sentences containing at least 30 words) shows, that most of the correct instances of this class have between 30 and 50 words, although the model also marks two very long sentences correctly. MBH 1.19.3-15, a description of the ocean, is a right-branching construction typical for poetic style (k\u0101vya). The initial phrase dadr .\u015b\u0101 te tad\u0101 tatra samudram (\"Then, both of them saw the ocean there\") is expanded by several lines of accusative constructions that depend on the head word samudram. Apart from congruent adjectives and appositions, the expansions also contain subordinate participle clauses. This means that the whole sentence can not be reduced to an easily memorizable pattern in the form verb-adverb*-acc*, and demonstrates that stacked LSTM units are in principle able to capture such long-range syntactic dependencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 321,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 654,
"end": 661,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 803,
"end": 810,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
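The stratified error counts discussed above can be tallied with a few lines of code. This is a hedged sketch: the record layout (start type, end type, number of boundary errors per sentence) and the toy data in the test are invented for illustration, not taken from the paper's corpus.

```python
# Sketch: group sentence-level results by start/end configuration
# (start in {"b", "m"}, end in {"m", "d", "d2"}) and report, per group,
# the counts of error-free sentences, one-error sentences, multi-error
# sentences, and the accuracy in percent.

def stratify(records):
    """records: iterable of (start, end, n_errors) per gold sentence."""
    tallies = {}
    for start, end, n_err in records:
        key = f"{start}-{end}"
        correct, one, more, total = tallies.get(key, (0, 0, 0, 0))
        tallies[key] = (correct + (n_err == 0), one + (n_err == 1),
                        more + (n_err > 1), total + 1)
    return {k: (c, o, m, round(100.0 * c / t, 2))
            for k, (c, o, m, t) in tallies.items()}
```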
{
"text": "A considerable number of errors is produced by short sentences that start or end in the middle of a text line (*-m-* configurations), and for which only one boundary is detected correctly (configurations 0-1-1 and 1-0-1 with length class 1 in Table 4 ). One of the syntactical patterns that produce most of the errors in this class consists of sequences of words in nom. sg. lacking a copula as observed in MBH 1.147.11: As soon as more training data are available, a Viterbi search over decoded sentence patterns or an additional CRF layer (Huang et al., 2015) may help to reduce the number of such errors.",
"cite_spans": [
{
"start": 541,
"end": 561,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 243,
"end": 250,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
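The Viterbi search suggested above can be sketched over the two labels {no-SB, SB}. Everything here is an illustrative assumption rather than the paper's setup: the per-token boundary probabilities stand in for RNN outputs, and the single hand-set transition penalty (discouraging two adjacent SBs, which would yield a one-word "sentence") stands in for learned CRF transition weights.

```python
import math

# Hedged sketch: Viterbi decoding of a boundary label sequence that combines
# per-token P(SB) scores with a transition penalty against SB -> SB.

def viterbi_boundaries(probs, trans_penalty=2.0):
    """probs: list of P(SB) per token. Returns the best 0/1 label sequence."""
    n = len(probs)

    def emit(t, y):  # emission log-score of label y at token t
        p = probs[t] if y == 1 else 1.0 - probs[t]
        return math.log(max(p, 1e-12))

    def trans(prev, y):  # transition log-score; penalise adjacent boundaries
        return -trans_penalty if prev == 1 and y == 1 else 0.0

    score = [emit(0, 0), emit(0, 1)]
    back = []
    for t in range(1, n):
        new_score, ptr = [], []
        for y in (0, 1):
            best = max((0, 1), key=lambda p: score[p] + trans(p, y))
            ptr.append(best)
            new_score.append(score[best] + trans(best, y) + emit(t, y))
        score = new_score
        back.append(ptr)
    # Backtrack from the best final label.
    y = 0 if score[0] >= score[1] else 1
    labels = [y]
    for ptr in reversed(back):
        y = ptr[y]
        labels.append(y)
    return labels[::-1]
```

With the penalty active, two adjacent high-probability boundaries collapse into a single one; with `trans_penalty=0.0` the decoder degenerates to a per-token argmax.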
{
"text": "Although the proposed deep bidirectional LSTM model clearly outperforms the metrical and CRF baselines, its accuracy is currently not high enough for performing a reliable unsupervised annotation of SBs. As the evaluation of short sentences has shown, many of the problematic cases cannot be solved on the morpho-syntactic level, but require comprehensive lexical and word semantic information. This finding suggests that a larger amount of training data, including more tokenized texts and more annotated SBs, may improve the performance. In this context, the LSTM model will be used for pre-annotating SBs. Another line of future research will concentrate on the representation of input features. Recent studies such as Labeau et al. (2015) , but also Hellwig (2015b) for Sanskrit have demonstrated that the processing of morphologically rich languages may benefit from using sub-word units, skipping lexicalization altogether, or integrating it into a \"deeper level\" of the network architecture. Given the complexity of Sanskrit phonetics and the richness of its vocabulary, such an approach may prove useful, as soon as more SB annotations are available.",
"cite_spans": [
{
"start": 722,
"end": 742,
"text": "Labeau et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 754,
"end": 769,
"text": "Hellwig (2015b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
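The sub-word representation mentioned in the conclusion could, for instance, replace a lexicalized word embedding with hashed character n-gram ids that a network embeds and pools. The helper below is a hypothetical sketch: the n-gram length, boundary markers, and bucket count are assumptions, not the paper's configuration.

```python
# Sketch of a lexicalization-free sub-word input representation:
# map a token to its character trigrams and to hashed embedding-table ids.

def char_ngram_ids(token, n=3, buckets=10000):
    padded = "<" + token + ">"  # mark word boundaries
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    # Hash each n-gram into a fixed-size id space (hashing trick); note that
    # Python's str hash is randomized per process, so ids are not stable
    # across runs unless PYTHONHASHSEED is fixed.
    return grams, [hash(g) % buckets for g in grams]
```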
{
"text": "There exist no reliable estimations of the real size of Sanskrit literature. The GRETIL website (http://gretil. sub.uni-goettingen.de/), which provides digital transcripts of a few percent of all printed Sanskrit texts, may contain around 15 million lexical tokens (estimation of the author; numbers may be significantly higher due to Sandhi). Large parts of the Sanskrit literature are still only transmitted as manuscripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentences such as gacch\u0101ni 'I shall go' don't need to overtly express the personal pronoun aham. 5 Existence: hastin\u0101pure van . ik \"[there is/was a] merchant in [the city of] Hastin\u0101pura\"; equivalence: pus . pam . raktam, see page 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Refer toWackernagel (1978 (reprint from 1896) on short sentences with a particle-like function.7 This corpus collects 279 texts from different domains with 3,950,000 tokens with gold-annotations on the morphological, lexical, and word semantic level.8 vy\u0101sa uv\u0101ca \"[The sage] Vy\u0101sa said\" is a typical example of these stock lines, which are always terminated by a single dan . d . a, and not written in\u015bloka metre. 9 A typical toy example for this construction runs like: r\u0101mo ('R\u0101ma' NSG) vanam . ('forest' ASG) gatv\u0101 ('go' ABS) s\u012bt\u0101m . ('S\u012bt\u0101' ASG) pa\u015byati ('see' PR3.SG), \"R\u0101ma, having gone to the forest, sees S\u012bt\u0101.\" = \"After R\u0101ma has gone to the forest, he sees S\u012bt\u0101.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Settings: Trained with full chapters, i.e. dan . d . as were ignored; embedding size: 200, bow, window size: 10, 5 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Restoring punctuation and casing in English text",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Manuel Paul Anil Kumar",
"middle": [],
"last": "Joseph",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "547--556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin and Manuel Paul Anil Kumar Joseph. 2009. Restoring punctuation and casing in English text. In Advances in Artificial Intelligence, Springer, pages 547-556.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A neural probabilistic language model",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jauvin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research 3:1137-1155.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Sanskrit Epics",
"authors": [
{
"first": "John",
"middle": [],
"last": "Brockington",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Brockington. 1998. The Sanskrit Epics. Brill, Leiden.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Morphological word-embeddings",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Annual Conference of the NACL, ACL",
"volume": "",
"issue": "",
"pages": "1287--1292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell and Hinrich Sch\u00fctze. 2015. Morphological word-embeddings. In Proceedings of the 2015 Annual Conference of the NACL, ACL, pages 1287-1292.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "P\u0101n . inian syntax and the changing notion of sentence",
"authors": [
{
"first": "M",
"middle": [],
"last": "Madhav",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Deshpande",
"suffix": ""
}
],
"year": 1991,
"venue": "Studies in Sanskrit Syntax",
"volume": "",
"issue": "",
"pages": "31--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Madhav M. Deshpande. 1991. P\u0101n . inian syntax and the changing notion of sentence. In Hans Heinrich Hock, editor, Studies in Sanskrit Syntax, Motilal Banarsidass Publishers, Delhi, pages 31-43.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological Bulletin",
"volume": "76",
"issue": "5",
"pages": "378--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin 76(5):378-382.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Classical Sanskrit, \"wild trees\", and the properties of free word order languages",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "Gillon",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Shaer",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "457--494",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan Gillon and Benjamin Shaer. 2005. Classical Sanskrit, \"wild trees\", and the properties of free word order languages. In Katalin \u00c9. Kiss, editor, Universal Grammar in the Reconstruction of Ancient Languages, De Gruyter, Berlin, Boston, pages 457-494.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Morphological disambiguation of Classical Sanskrit",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Hellwig",
"suffix": ""
}
],
"year": 2015,
"venue": "Systems and Frameworks for Computational Morphology",
"volume": "",
"issue": "",
"pages": "41--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Hellwig. 2015a. Morphological disambiguation of Classical Sanskrit. In Cerstin Mahlow and Michael Piotrowski, editors, Systems and Frameworks for Computational Morphology. Springer, Cham, pages 41-59.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using Recurrent Neural Networks for joint compound splitting and Sandhi resolution in Sanskrit",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Hellwig",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 7th LTC",
"volume": "",
"issue": "",
"pages": "289--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oliver Hellwig. 2015b. Using Recurrent Neural Networks for joint compound splitting and Sandhi resolution in Sanskrit. In Zygmunt Vetulani and Joseph Mariani, editors, Proceedings of the 7th LTC. pages 289-293.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving neural networks by preventing co-adaptation of feature detectors",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1207.0580"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 .",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Some issues in Sanskrit syntax",
"authors": [
{
"first": "",
"middle": [],
"last": "Hans Henrich Hock",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seminar on Sanskrit syntax and discourse structures",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Henrich Hock. 2013. Some issues in Sanskrit syntax. In Peter M. Scharf and G\u00e9rard Huet, editors, Proceedings of the Seminar on Sanskrit syntax and discourse structures. Paris.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Great Epic of India. Its Character and Origin",
"authors": [
{
"first": "E",
"middle": [
"Washburn"
],
"last": "Hopkins",
"suffix": ""
}
],
"year": 1901,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Washburn Hopkins. 1901. The Great Epic of India. Its Character and Origin. Charles Scribner's Sons, New York.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bidirectional LSTM-CRF models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991 .",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Grammatik der Sanskrit-Sprache",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Kielhorn",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Kielhorn. 1888. Grammatik der Sanskrit-Sprache. D\u00fcmmler Verlag, Berlin.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "On the architecture of P\u0101n . ini's grammar",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Kiparsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Sanskrit Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Kiparsky. 2009. On the architecture of P\u0101n . ini's grammar. In G\u00e9rard Huet, Amba Kulkarni, and Peter Scharf, editors, Sanskrit Computational Linguistics, Springer, Berlin, Heidelberg, pages 33-94.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "How free is 'free' word order in Sanskrit",
"authors": [
{
"first": "Amba",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Preeti",
"middle": [],
"last": "Shukla",
"suffix": ""
},
{
"first": "Pavankumar",
"middle": [],
"last": "Satuluri",
"suffix": ""
},
{
"first": "Devanand",
"middle": [],
"last": "Shukl",
"suffix": ""
}
],
"year": 2015,
"venue": "Sanskrit syntax",
"volume": "",
"issue": "",
"pages": "269--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amba Kulkarni, Preeti Shukla, Pavankumar Satuluri, and Devanand Shukl. 2015. How free is 'free' word order in Sanskrit? In Peter M. Scharf, editor, Sanskrit syntax. pages 269-304.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Non-lexical neural architecture for finegrained POS tagging",
"authors": [
{
"first": "Matthieu",
"middle": [],
"last": "Labeau",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "L\u00f6ser",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Allauzen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "232--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthieu Labeau, Kevin L\u00f6ser, and Alexandre Allauzen. 2015. Non-lexical neural architecture for fine- grained POS tagging. In Proceedings of the 2015 Conference on EMNLP. pages 232-237.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Comparing evaluation metrics for sentence boundary detection",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2007,
"venue": "2007 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "4",
"issue": "",
"pages": "182--185",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Elizabeth Shriberg. 2007. Comparing evaluation metrics for sentence boundary detection. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing. volume 4, pages 182-185.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Using Conditional Random Fields for sentence boundary detection in speech",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "451--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Andreas Stolcke, Elizabeth Shriberg, and Mary Harper. 2005. Using Conditional Random Fields for sentence boundary detection in speech. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. pages 451-458.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning recurrent neural networks with hessian-free optimization",
"authors": [
{
"first": "James",
"middle": [],
"last": "Martens",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th International Conference on Machine Learning (ICML-11)",
"volume": "",
"issue": "",
"pages": "1033--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Martens and Ilya Sutskever. 2011. Learning recurrent neural networks with hessian-free opti- mization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). pages 1033-1040.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Indian theorists on the nature of the sentence (v\u0101kya)",
"authors": [
{
"first": "Krishna",
"middle": [],
"last": "Bimal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Matilal",
"suffix": ""
}
],
"year": 1966,
"venue": "Foundations of Language",
"volume": "2",
"issue": "",
"pages": "377--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bimal Krishna Matilal. 1966. Indian theorists on the nature of the sentence (v\u0101kya). Foundations of Language 2:377-393.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Strategies for training large scale neural network language models",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Deoras",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan\u010dernock\u1ef3",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "196--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Anoop Deoras, Daniel Povey, Luk\u00e1\u0161 Burget, and Jan\u010cernock\u1ef3. 2011. Strategies for training large scale neural network language models. In 2011 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). pages 196-201.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Studies in the epic technique of oral verse-making i: Homer and Homeric style",
"authors": [
{
"first": "Milman",
"middle": [],
"last": "Parry",
"suffix": ""
}
],
"year": 1930,
"venue": "Harvard Studies in Classical Philology",
"volume": "41",
"issue": "",
"pages": "73--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milman Parry. 1930. Studies in the epic technique of oral verse-making i: Homer and Homeric style. Harvard Studies in Classical Philology 41:73-147.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "On the difficulty of training recurrent neural networks",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 30th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Grammatical Literature. A History of Indian Literature",
"authors": [
{
"first": "Hartmut",
"middle": [],
"last": "Scharfe",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hartmut Scharfe. 1977. Grammatical Literature. A History of Indian Literature, Volume 5, Fasc. 2. Otto Harrassowitz, Wiesbaden.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "M",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "K",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Schuster and K.K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673-2681.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Feature engineering in context-dependent deep neural networks for conversational speech transcription",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2011,
"venue": "Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "24--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Seide, Gang Li, Xie Chen, and Dong Yu. 2011. Feature engineering in context-dependent deep neural networks for conversational speech transcription. In Automatic Speech Recognition and Under- standing (ASRU). IEEE, pages 24-29.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Formulaic Diction and Versification in the Mah\u0101bh\u0101rata",
"authors": [
{
"first": "Sven",
"middle": [],
"last": "Sellmer",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sven Sellmer. 2015. Formulaic Diction and Versification in the Mah\u0101bh\u0101rata. Adam Mickiewicz Uni- versity Press, Pozna\u0144.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The heroic process: Form, Function, and Fantasy in Folk Epic",
"authors": [
{
"first": "D",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "591--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Smith. 1987. Formulaic language in the epics of India. In B. Almqvist, S.\u00d3. Cath\u00e1in, and P.\u00d3. H\u00e9ala\u00ed, editors, The heroic process: Form, Function, and Fantasy in Folk Epic. Glendale Press, pages 591-611.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information processing systems. pages 3104-3112.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "LSTM for punctuation restoration in speech transcripts",
"authors": [
{
"first": "Ottokar",
"middle": [],
"last": "Tilk",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ottokar Tilk and Tanel Alum\u00e4e. 2015. LSTM for punctuation restoration in speech transcripts. In Interspeech 2015. Dresden, Germany.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "1",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research 37(1):141-188.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Altindische Grammatik. Vandenhoek und Ruprecht",
"authors": [
{
"first": "Jakob",
"middle": [],
"last": "Wackernagel",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jakob Wackernagel. 1978 (reprint from 1896). Altindische Grammatik. Vandenhoek und Ruprecht, G\u00f6ttingen.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Discovery of Linguistic Relations Using Lexical Attraction",
"authors": [
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deniz Yuret. 1998. Discovery of Linguistic Relations Using Lexical Attraction. Ph.D. thesis, Mas- sachusetts Institute of Technology.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A CRF sequence labeling approach to Chinese punctuation prediction",
"authors": [
{
"first": "Yanqing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chaoyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2012,
"venue": "26th Pacific Asia Conference on Language, Information and Computation (PACLIC 26)",
"volume": "",
"issue": "",
"pages": "508--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanqing Zhao, Chaoyue Wang, and Guohong Fu. 2012. A CRF sequence labeling approach to Chinese punctuation prediction. In 26th Pacific Asia Conference on Language, Information and Computation (PACLIC 26). pages 508-514.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "] son [is the] self. [The] wife [is a] friend.\" Another problematic pattern is formed by sequences of the form verb-sg acc-sg* (MBH 6.41.64): you [for permission]. I will fight . . . \""
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": "Influence of morphological features on precision and recall of LSTMs. Enc.: encoding type; links: hardcoded morphological links used? LSTM architecture: dropout \u2192 bidirectional LSTM \u2192 softmax, 100 hidden units, 10 CVs, 50 iterations; lexical features: freqindecl, lexical links: lpmi",
"content": "<table><tr><td colspan=\"3\">Enc. Links? P</td><td>R</td><td>F</td></tr><tr><td colspan=\"2\">dec no</td><td colspan=\"3\">85.08 79.41 82.15</td></tr><tr><td colspan=\"2\">dec yes</td><td colspan=\"3\">85.58 77.92 81.57</td></tr><tr><td>1h</td><td>no</td><td colspan=\"3\">84.83 78.42 81.50</td></tr><tr><td>1h</td><td>yes</td><td colspan=\"3\">84.62 78.02 81.19</td></tr><tr><td>Table 1: Lexicon</td><td colspan=\"2\">Links P</td><td>R</td><td>F</td></tr><tr><td>none</td><td colspan=\"4\">none 83.50 78.59 80.97</td></tr><tr><td colspan=\"2\">freqindecl lpmi</td><td colspan=\"3\">84.44 78.68 81.46</td></tr><tr><td colspan=\"5\">freqindecl none 84.86 77.65 81.10</td></tr><tr><td colspan=\"5\">freqindecl lspmi 85.08 79.24 82.05</td></tr><tr><td>w2v</td><td>lpmi</td><td colspan=\"3\">= Table 1, row 1</td></tr><tr><td>w2v</td><td colspan=\"4\">none 85.78 79.01 82.26</td></tr><tr><td>w2v</td><td colspan=\"4\">lspmi 85.50 79.31 82.28</td></tr></table>",
"html": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "Prosodical baseline dan . d . a 59.34 85.54 70.07 double dan . d . a 89.34 62.86 73.80",
"content": "<table><tr><td>Classifier</td><td>Architecture</td><td>P</td><td>R</td><td>F</td></tr><tr><td>CRF</td><td>no lex.</td><td colspan=\"3\">83.85 68.75 75.55</td></tr><tr><td/><td>freqindecl</td><td colspan=\"3\">83.82 68.74 75.54</td></tr><tr><td/><td>w2v</td><td colspan=\"3\">85.73 67.24 75.37</td></tr><tr><td>LSTM</td><td>2, dropout</td><td colspan=\"3\">86.92 80.70 83.70</td></tr><tr><td/><td>3</td><td colspan=\"3\">87.07 75.62 80.94</td></tr><tr><td/><td>3, dropout</td><td colspan=\"3\">88.53 85.13 86.79</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Comparison of baselines and LSTM architectures; settings for CRF: lpmi, dec, features extracted from a window of size \u00b16 around each word, 10 CVs; settings for LSTM: w2v, embedding size: 200, lspmi, no morph. links; 100 hidden units in each bidirectional LSTM layer, 10 CVs, 50 epochs",
"content": "<table><tr><td>Length class</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "Sentence based evaluation of the output of the best RNN from",
"content": "<table/>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "support this hypothesis. The highest accuracy rates are observed for the Start-end Corr. 1 err. > 1 err. Acc.",
"content": "<table><tr><td>b-m</td><td>463</td><td>673</td><td>149</td><td>36.03</td></tr><tr><td>b-d</td><td colspan=\"2\">1185 495</td><td>72</td><td>67.64</td></tr><tr><td>b-d2</td><td colspan=\"3\">4034 1044 65</td><td>78.44</td></tr><tr><td>m-m</td><td>27</td><td>37</td><td>34</td><td>27.55</td></tr><tr><td>m-d</td><td>146</td><td>202</td><td>68</td><td>35.10</td></tr><tr><td>m-d2</td><td>368</td><td>461</td><td>39</td><td>42.40</td></tr></table>",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"text": "Sentence based evaluation of the output of the best RNN from Table 3, stratified by start and end positions of sentences. b: Sentence starts at the b(eginning) of a text line, m: in the m(iddle); d/d2: sentence ends at a dan . d . a/double dan . d . a configurations b-d and b-d2, while accuracy rates for *-m-* configurations are clearly lower. Although these cases constitute only a minority of the training data, and their accuracy rates may rise when more labeled data are available, the presence of the (double) dan . d . a feature (page 4) is certainly most relevant for the observed differences in the accuracy levels.Columns 3ff. of",
"content": "<table/>",
"html": null
}
}
}
}