{
"paper_id": "Q19-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:08:57.417532Z"
},
"title": "Calculating the Optimal Step in Shift-Reduce Dependency Parsing: From Cubic to Linear Time",
"authors": [
{
"first": "Mark-Jan",
"middle": [],
"last": "Nederhof",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of St Andrews",
"location": {
"country": "UK"
}
},
"email": "markjan.nederhof@googlemail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a new cubic-time algorithm to calculate the optimal next step in shift-reduce dependency parsing, relative to ground truth, commonly referred to as dynamic oracle. Unlike existing algorithms, it is applicable if the training corpus contains non-projective structures. We then show that for a projective training corpus, the time complexity can be improved from cubic to linear.",
"pdf_parse": {
"paper_id": "Q19-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a new cubic-time algorithm to calculate the optimal next step in shift-reduce dependency parsing, relative to ground truth, commonly referred to as dynamic oracle. Unlike existing algorithms, it is applicable if the training corpus contains non-projective structures. We then show that for a projective training corpus, the time complexity can be improved from cubic to linear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A deterministic parser may rely on a classifier that predicts the next step, given features extracted from the present configuration (Yamada and Matsumoto, 2003; Nivre et al., 2004) . It was found that accuracy improves if the classifier is trained not just on configurations that correspond to the ground-truth, or ''gold'', tree, but also on configurations that a parser would typically reach when a classifier strays from the optimal predictions. This is known as a dynamic oracle. 1 The effective calculation of the optimal step for some kinds of parsing relies on 'arc-decomposability', as in the case of Goldberg and Nivre (2012, 2013) . This generally requires a projective training corpus; an attempt to extend this to non-projective training corpora had to resort to an approximation (Aufrant et al., 2018) . It is known how to calculate the optimal step for a number of non-projective parsing algorithms, however (G\u00f3mez-Rodr\u00edguez et al., 2014; G\u00f3mez-Rodr\u00edguez and Fern\u00e1ndez-Gonz\u00e1lez, 2015; Fern\u00e1ndez-Gonz\u00e1lez and G\u00f3mez-Rodr\u00edguez, 2018a) ; see also de Lhoneux et al. (2017) . 1 A term we avoid here, as dynamic oracles are neither oracles nor dynamic, especially in our formulation, which allows gold trees to be non-projective. Following, for example, Kay (2000) , an oracle informs a parser whether a step may lead to the correct parse. If the gold tree is non-projective and the parsing strategy only allows projective trees, then there are no steps that lead to the correct parse. At best, there is an optimal step, by some definition of optimality. An algorithm to compute the optimal step, for a given configuration, would typically not change over time, and therefore is not dynamic in any generally accepted sense of the word.",
"cite_spans": [
{
"start": 133,
"end": 161,
"text": "(Yamada and Matsumoto, 2003;",
"ref_id": "BIBREF25"
},
{
"start": 162,
"end": 181,
"text": "Nivre et al., 2004)",
"ref_id": "BIBREF21"
},
{
"start": 485,
"end": 486,
"text": "1",
"ref_id": null
},
{
"start": 610,
"end": 628,
"text": "Nivre (2012, 2013)",
"ref_id": null
},
{
"start": 780,
"end": 802,
"text": "(Aufrant et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1048,
"end": 1058,
"text": "Kay (2000)",
"ref_id": "BIBREF16"
},
{
"start": 1569,
"end": 1599,
"text": "(G\u00f3mez-Rodr\u00edguez et al., 2014;",
"ref_id": "BIBREF12"
},
{
"start": 1600,
"end": 1645,
"text": "G\u00f3mez-Rodr\u00edguez and Fern\u00e1ndez-Gonz\u00e1lez, 2015;",
"ref_id": "BIBREF11"
},
{
"start": 1646,
"end": 1688,
"text": "Fern\u00e1ndez-Gonz\u00e1lez G\u00f3mez-Rodr\u00edguez, 2018a)",
"ref_id": "BIBREF4"
},
{
"start": 1700,
"end": 1724,
"text": "de Lhoneux et al. (2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Ordinary shift-reduce dependency parsing is known at least since Fraser (1989) ; see also Nasr (1995) . Nivre (2008) calls it ''arc-standard parsing.'' For shift-reduce dependency parsing, calculation of the optimal step is regarded as difficult. The best-known algorithm is cubic and is only applicable if the training corpus is projective (Goldberg et al., 2014) . We present a new cubic-time algorithm that is also applicable to non-projective training corpora. Moreover, its architecture is modular, expressible as a generic tabular algorithm for dependency parsing plus a context-free grammar that expresses the allowable transitions of the parsing strategy. This differs from approaches that require specialized tabular algorithms for different kinds of parsing (G\u00f3mez-Rodr\u00edguez et al., 2008; Huang and Sagae, 2010; Kuhlmann et al., 2011) .",
"cite_spans": [
{
"start": 65,
"end": 78,
"text": "Fraser (1989)",
"ref_id": "BIBREF6"
},
{
"start": 90,
"end": 101,
"text": "Nasr (1995)",
"ref_id": "BIBREF19"
},
{
"start": 104,
"end": 116,
"text": "Nivre (2008)",
"ref_id": "BIBREF20"
},
{
"start": 344,
"end": 367,
"text": "(Goldberg et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 771,
"end": 801,
"text": "(G\u00f3mez-Rodr\u00edguez et al., 2008;",
"ref_id": "BIBREF10"
},
{
"start": 802,
"end": 824,
"text": "Huang and Sagae, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 825,
"end": 847,
"text": "Kuhlmann et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The generic tabular algorithm is interesting in its own right, and can be used to determine the optimal projectivization of a non-projective tree. This is not to be confused with pseudo-projectivization (Kahane et al., 1998; Nivre and Nilsson, 2005) , which generally has a different architecture and is used for a different purpose, namely, to allow a projective parser to produce non-projective structures, by encoding non-projectivity into projective structures before training, and then reconstructing potential non-projectivity after parsing.",
"cite_spans": [
{
"start": 203,
"end": 224,
"text": "(Kahane et al., 1998;",
"ref_id": "BIBREF15"
},
{
"start": 225,
"end": 249,
"text": "Nivre and Nilsson, 2005)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A presentational difference from earlier work is that we do not define optimality in terms of ''loss'' or ''cost'' functions, but directly in terms of attainable accuracy. This perspective is shared by Straka et al. (2015) , who also relate accuracies of competing steps, albeit by means of actual parser output and not in terms of best attainable accuracies.",
"cite_spans": [
{
"start": 201,
"end": 221,
"text": "Straka et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We further show that if the training corpus is projective, then the time complexity can be reduced to linear. To achieve this, we develop a new approach of excluding computations whose accuracies are guaranteed not to exceed the accuracies of the remaining computations. The main theoretical conclusion is that arc-decomposability is not a necessary requirement for efficient calculation of the optimal step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite advances in unrestricted non-projective parsing, as, for example, Fern\u00e1ndez-Gonz\u00e1lez and G\u00f3mez-Rodr\u00edguez (2018b), many state-of-the-art dependency parsers are projective, as, for example, Qi and Manning (2017) . One main practical contribution of the current paper is that it introduces new ways to train projective parsers using non-projective trees, thereby enlarging the portion of trees from a corpus that is available for training. This can be done either after applying optimal projectivization, or by computing optimal steps directly for non-projective trees. This can be expected to lead to more accurate parsers, especially if a training corpus is small and a large proportion of it is non-projective.",
"cite_spans": [
{
"start": 195,
"end": 216,
"text": "Qi and Manning (2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, a configuration (for sentence length n) is a 3-tuple (\u03b1, \u03b2, T ) consisting of a stack \u03b1, which is a string of integers each between 0 and n, a remaining input \u03b2, which is a suffix of the string 1 \u2022 \u2022 \u2022 n, and a set T of pairs (a, a') of integers, with 0 \u2264 a \u2264 n and 1 \u2264 a' \u2264 n. Further, \u03b1\u03b2 is a subsequence of 0 1 \u2022 \u2022 \u2022 n, starting with 0. Integer 0 represents an artificial input position, not corresponding to any actual token of an input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "An integer a' (1 \u2264 a' \u2264 n) occurs as second element of a pair (a, a') \u2208 T if and only if it does not occur in \u03b1\u03b2. Furthermore, for each a' there is at most one a such that (a, a') \u2208 T . If (a, a') \u2208 T then a' is generally called a dependent of a, but as we will frequently need concepts from graph theory in the remainder of this article, we will consistently call a' a child of a and a the parent of a'; if a' < a then a' is a left child and if a < a' then it is a right child. The terminology is extended in the usual way to include descendants and ancestors. Pairs (a, a') will henceforth be called edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "For sentence length n, the initial configuration is (0, 1 2 \u2022 \u2022 \u2022 n, \u2205), and a final configuration is of the form (0, \u03b5, T ), where \u03b5 denotes the empty string. The three transitions of shift-reduce dependency parsing are given in Table 1 . By step we mean the application of a transition on a particular configuration. By computation we mean a series of steps, the formal notation of which uses \u22a2*, the reflexive, transitive closure of \u22a2. If",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 244,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "shift: (\u03b1, b\u03b2, T ) \u22a2 (\u03b1b, \u03b2, T ) reduce left: (\u03b1a 1 a 2 , \u03b2, T ) \u22a2 (\u03b1a 1 , \u03b2, T \u222a {(a 1 , a 2 )}) reduce right: (\u03b1a 1 a 2 , \u03b2, T ) \u22a2 (\u03b1a 2 , \u03b2, T \u222a {(a 2 , a 1 )}), provided |\u03b1| > 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "(0, 1 2 \u2022 \u2022 \u2022 n, \u2205) \u22a2* (0, \u03b5, T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": ", then T represents a tree, with 0 as root element, and T is projective, which means that for each node, the set of its descendants (including that node itself) is of the form {a, a + 1, . . . , a' \u2212 1, a'}, for some a and a'. In general, a dependency tree is any tree of nodes labelled 0, 1, . . . , n, with 0 being the root.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The score of a tree T for a sentence is the number of edges that it has in common with a given gold tree T g for that sentence, or formally |T \u2229 T g |. The accuracy is the score divided by n. Note that neither tree need be projective for the score to be defined, but in this paper the first tree, T , will normally be projective. Where indicated, T g is also assumed to be projective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "Assume an arbitrary configuration (\u03b1, \u03b2, T ) for sentence length n and assume a gold tree T g for a sentence of that same length, and assume three steps (\u03b1, \u03b2, T ) \u22a2 (\u03b1 i , \u03b2 i , T i ), with i = 1, 2, 3, obtainable by a shift, reduce left, or reduce right, respectively. (If \u03b2 = \u03b5, or |\u03b1| \u2264 2, then naturally some of the three transitions need to be left out of consideration.) We now wish to calculate, for each of i = 1, 2, 3, the maximum value of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "|T' i \u2229 T g |, for any T' i such that (\u03b1 i , \u03b2 i , T i ) \u22a2* (0, \u03b5, T' i ). For i = 1, 2, 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": ", let \u03c3 i be this maximum value. The absolute scores \u03c3 i are, strictly speaking, irrelevant; the relative values determine which is the optimal step, or which are the optimal steps, to reach a tree with the highest score. Note that |{i | \u03c3 i = max j \u03c3 j }| is either 1, 2, or 3. In the remainder of this article, we find it more convenient to calculate \u03c3 i \u2212 |T \u2229 T g | for each i; in other words, gold edges that were previously found are left out of consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "We can put restrictions on the set of allowable computations (\u03b1, \u03b2, T ) \u22a2* (0, \u03b5, T \u222a T'). The left-before-right strategy demands that all edges (a, a') \u2208 T' with a' < a are found before any edges (a, a') \u2208 T' with a < a', for each a that is rightmost in \u03b1 or that occurs in \u03b2. The strict left-before-right strategy in addition disallows edges (a, a') \u2208 T' with a' < a for each a in \u03b1 other than the rightmost element. The intuition is that a non-strict strategy allows us to correct mistakes already made: If we have already pushed other elements on top of a stack element a, then a will necessarily obtain right children before it occurs on top of the stack again, when it can take (more) left children. By contrast, the strict strategy would not allow these left children.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "The definition of the right-before-left strategy is symmetric to that of the left-before-right strategy, but there is no independent strict right-before-left strategy. In this paper we consider all three strategies in order to emphasize the power of our framework. It is our understanding that Goldberg et al. (2014) does not commit to any particular strategy.",
"cite_spans": [
{
"start": 293,
"end": 315,
"text": "Goldberg et al. (2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "2"
},
{
"text": "We here consider context-free grammars (CFGs) of a special form, with nonterminals in N \u222a (N \u2113 \u00d7 N r ), for appropriate finite sets N , N \u2113 , N r , which need not be disjoint. The finite set of terminals is denoted \u03a3. There is a single start symbol S \u2208 N . Rules are of one of the forms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 (B, C) \u2192 a, \u2022 A \u2192 (B, C), \u2022 (B', C) \u2192 A (B, C), \u2022 (B, C') \u2192 (B, C) A, where A \u2208 N , B, B' \u2208 N \u2113 , C, C' \u2208 N r , a \u2208 \u03a3. A first additional requirement is that if (B', C) \u2192 A (B, C) is a rule, then (B', C') \u2192 A (B, C'), for any C' \u2208 N r ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "is also a rule, and if (B, C') \u2192 (B, C) A is a rule, then (B', C') \u2192 (B', C) A, for any B' \u2208 N \u2113 , is also a rule. This justifies our notation of such rules in the remainder of this paper as (B', _) \u2192 A (B, _) and (_, C') \u2192 (_, C) A, respectively. These two kinds of rules correspond to attachment of left and right children, respectively, in dependency parsing. Secondly, we require that there is precisely one rule (B, C) \u2192 a for each a \u2208 \u03a3. Note that the additional requirements make the grammar explicitly ''split'' in the sense of Eisner and Satta (1999) , Eisner (2000) , and Johnson (2007) . That is, the two processes of attaching left and right children, respectively, are independent, with rules (B, C) \u2192 a creating ''initial states'' B and C, respectively, for these two processes. Rules of the form A \u2192 (B, C) then combine the end results of these two processes, possibly placing constraints on allowable combinations of B and C.",
"cite_spans": [
{
"start": 531,
"end": 554,
"text": "Eisner and Satta (1999)",
"ref_id": "BIBREF3"
},
{
"start": 557,
"end": 570,
"text": "Eisner (2000)",
"ref_id": "BIBREF2"
},
{
"start": 573,
"end": 591,
"text": "and Johnson (2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "W \u2113 (B, i, i) = 1, if (B, C) \u2192 a i ; 0, otherwise. W r (C, i, i) = 1, if (B, C) \u2192 a i ; 0, otherwise. W \u2113 (B, C, i, j) = \u2295 k W r (B, i, k) \u2297 W \u2113 (C, k + 1, j) \u2297 w(j, i). W r (B, C, i, j) = \u2295 k W r (B, i, k) \u2297 W \u2113 (C, k + 1, j) \u2297 w(i, j). W \u2113 (C', i, j) = \u2295 A\u2192(D,B), (C',_)\u2192A(C,_), k W \u2113 (D, i, k) \u2297 W \u2113 (B, C, k, j). W r (B', i, j) = \u2295 (_,B')\u2192(_,B)A, A\u2192(C,D), k W r (B, C, i, k) \u2297 W r (D, k, j). W = \u2295 S\u2192(B,C) W \u2113 (B, 0, 0) \u2297 W r (C, 0, n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "To bring out the relation between our subclass of CFGs and bilexical grammars, one could explicitly write",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "(B, C)(a) \u2192 a, A(a) \u2192 (B, C)(a), (B', _)(b) \u2192 A(a) (B, _)(b), and (_, C')(c) \u2192 (_, C)(c) A(a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "Purely symbolic parsing is extended to weighted parsing much as usual, except that instead of attaching weights to rules, we attach a score w(i, j) to each pair (i, j), which is a potential edge. This can be done for any semiring. In the semiring we will first use, a value is either a non-negative integer or \u2212\u221e. Further, w 1 \u2295 w 2 = max(w 1 , w 2 ), and w 1 \u2297 w 2 = w 1 + w 2 if w 1 \u2260 \u2212\u221e and w 2 \u2260 \u2212\u221e, and w 1 \u2297 w 2 = \u2212\u221e otherwise. Naturally, the identity element of \u2295 is \u2212\u221e and the identity element of \u2297 is 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "Tabular weighted parsing can be realized following Eisner and Satta (1999) . We assume the input is a string a 0 a 1 \u2022 \u2022 \u2022 a n \u2208 \u03a3 * , with a 0 being the prospective root of a tree. Table 2 presents the cubic-time algorithm in the form of a system of recursive equations. With the semiring we chose above, W \u2113 (B, i, j) represents the highest score of any rightmost derivation of the form (B, _)",
"cite_spans": [
{
"start": 51,
"end": 74,
"text": "Eisner and Satta (1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u21d2 A 1 (B 1 , _) \u21d2 A 1 A 2 (B 2 , _) \u21d2 \u2022 \u2022 \u2022 \u21d2 [Table 3: (S, S) \u2192 a; S \u2192 (S, S); (_, S) \u2192 (_, S) S; (S, _) \u2192 S (S, _)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "N = N \u2113 = N r = {S}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "A 1 \u2022 \u2022 \u2022 A m (B m , _) \u21d2 A 1 \u2022 \u2022 \u2022 A m a j \u21d2 * a i \u2022 \u2022 \u2022 a j , for some m \u2265 0, and W r (C, i, j) has symmetric meaning. Intuitively, W \u2113 (B, i, j) considers a j and its left dependents and W r (C, i, j) considers a i and its right dependents. A value W \u2113 (B, C, i, j), or W r (B, C, i, j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": ", represents the highest score combining a i and its right dependents and a j and its left dependents, meeting in the middle at some k, including also an edge from a i to a j , or from a j to a i , respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "One may interpret the grammar in Table 3 as encoding all possible computations of a shift-reduce parser, and thereby all projective trees. As there is only one way to instantiate the underscores, we obtain rule (S, S) \u2192 (S, S) S, which corresponds to reduce left, and rule (S, S) \u2192 S (S, S), which corresponds to reduce right. Figure 1 presents a parse tree for the grammar and the corresponding dependency tree. Note that if we are not given a particular strategy, such as left-before-right, then the parse tree underspecifies whether left children or right children are attached first. This is necessarily the case because the grammar is split. Therefore, the computation in this example may consist of three shifts, followed by one reduce left, one reduce right, and one reduce left, or it may consist of two shifts, one reduce right, one shift, and two reduce lefts.",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 40,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 326,
"end": 334,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "(P, P ) \u2192 p; (S, S) \u2192 s; P \u2192 (P, P ); S \u2192 (P, S); S \u2192 (S, S); (S, _) \u2192 P (S, _); (S, _) \u2192 S (S, _); (_, S) \u2192 (_, P ) S; (_, S) \u2192 (_, S) S; (S, _) \u2192 P (P, _)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "N \u2113 = N r = N = {P, S}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "The last rule would be excluded for the strict left-before-right strategy, or alternatively one can set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "w(i, j) = \u2212\u221e for j < i < k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "For a given gold tree T g , which may or may not be projective, we let w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "(i, j) = \u03b4 g (i, j), where we define \u03b4 g (i, j) = 1 if (i, j) \u2208 T g and \u03b4 g (i, j) = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "otherwise. With the grammar from Table 3 , the value W found by weighted parsing is now the score of the most accurate projective tree. By backtracing from W as usual, we can construct the (or, more correctly, a) tree with that highest accuracy. We have thereby found an effective way to projectivize a treebank in an optimal way. Using a different semiring, we can count the number of trees with the highest accuracy, which reflects the degree of ''choice'' when projectivizing a treebank.",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 40,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "4 O(n 3 ) Time Algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "In a computation starting from a configuration",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "(a 0 \u2022 \u2022 \u2022 a k , b 1 \u2022 \u2022 \u2022 b m , T ), not every projective parse of the string a 0 \u2022 \u2022 \u2022 a k b 1 \u2022 \u2022 \u2022 b m is achievable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "The structures that are achievable are captured by the grammar in Table 4 , with P for prefix and S for suffix (also for ''start symbol''). Nonterminals P and (P, P ) correspond to a node a i (0 \u2264 i < k) that does not have children. Nonterminal S corresponds to a node that has either a k or some b j (1 \u2264 j \u2264 m) among its descendants. This then means that the node will appear on top of the stack at some point in the computation. Nonterminal (S, S) also corresponds to a node that has one of the rightmost m + 1 nodes among its descendants, and, in addition, if it itself is not one of the rightmost m + 1 nodes, then it must have a left child. Nonterminal (P, S) corresponds to a node a i (0 \u2264 i < k) that has a k among its descendants but that does not have a left child. Nonterminal (S, P ) corresponds to a node a i (0 \u2264 i < k) that has a left child but no right children. For a i to be given a left child, it is required that it eventually appear on top of the stack. This requirement is encoded in the absence of a rule with right-hand side (S, P ). In other words, (S, P ) cannot be part of a successful derivation, unless the rule (S, S) \u2192 (S, P ) S is subsequently used, which then corresponds to giving a i a right child that has a k among its descendants. Figure 2 shows an example. Note that we can partition a parse tree into ''columns'', each consisting of a path starting with a label in N , then a series of labels in N \u2113 \u00d7 N r and ending with a label in \u03a3.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1269,
"end": 1277,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "A dependency structure that is not achievable, and that appropriately does not correspond to a parse tree, for a stack of height 4 and remaining input of length 1, is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 \u2022 \u2022 \u2022 | \u2022 (arcs omitted) Suppose we have a configuration (a 0 \u2022 \u2022 \u2022 a k , b 1 \u2022 \u2022 \u2022 b m , T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "for sentence length n, which implies k + m \u2264 n. We need to decide whether a shift, reduce left, or reduce right should be done in order to achieve the highest accuracy, for given gold tree T g . For this, we calculate three values \u03c3 1 , \u03c3 2 and \u03c3 3 , and determine which is highest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "The first value \u03c3 1 is obtained by investigating the configuration",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "(a 0 \u2022 \u2022 \u2022 a k b 1 , b 2 \u2022 \u2022 \u2022 b m , \u2205)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "resulting after a shift. We run our generic tabular algorithm for the grammar in Table 4 , for input p k+1 s m , to obtain \u03c3 1 = W . The scores are obtained by translating indices of",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "a 0 \u2022 \u2022 \u2022 a k b 1 \u2022 \u2022 \u2022 b m = c 0 \u2022 \u2022 \u2022 c k+m to indices in the original input, that is, we let w(i, j) = \u03b4 g (c i , c j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "However, the shift, which pushes an element on top of a k , implies that a k will obtain right children before it can obtain left children. If we assume the left-before-right strategy, then we should prevent a k from obtaining left children. We could do that by refining the grammar, but find it easier to set w(k, i) = \u2212\u221e for all i < k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "For the second value \u03c3 2 , we investigate the con-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "figuration (a 0 \u2022 \u2022 \u2022 a k\u22121 , b 1 \u2022 \u2022 \u2022 b m , \u2205)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "resulting after a reduce left. The same grammar and algorithm are used, now for input",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "p k\u22121 s m+1 . With a 0 \u2022 \u2022 \u2022 a k\u22121 b 1 \u2022 \u2022 \u2022 b m = c 0 \u2022 \u2022 \u2022 c k+m\u22121 , we let w(i, j) = \u03b4 g (c i , c j ). We let \u03c3 2 = W \u2297 \u03b4 g (a k\u22121 , a k ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "In case of a strict left-before-right strategy, we set w(k \u2212 1, i) = \u2212\u221e for i < k \u2212 1, to prevent a k\u22121 from obtaining left children after having obtained a right child a k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "If k \u2264 1 then the third value is \u03c3 3 = \u2212\u221e, as no reduce right is applicable. Otherwise we investigate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "(a 0 \u2022 \u2022 \u2022 a k\u22122 a k , b 1 \u2022 \u2022 \u2022 b m , \u2205).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "The same grammar and algorithm are used as before, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "w(i, j) = \u03b4 g (c i , c j ) with a 0 \u2022 \u2022 \u2022 a k\u22122 a k b 1 \u2022 \u2022 \u2022 b m = c 0 \u2022 \u2022 \u2022 c k+m\u22121 . Now \u03c3 3 = W \u2297 \u03b4 g (a k , a k\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "In case of a right-before-left strategy, we set w(k, i) = \u2212\u221e for k < i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "We conclude that the time complexity of calculating the optimal step is three times the time complexity of the algorithm of Table 2 , hence cubic in n.",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "For a proof of correctness, it is sufficient to show that each parse tree generated by the grammar in Table 4 corresponds to a computation with the same score, and conversely that each computation corresponds to an equivalent parse tree. Our grammar has spurious ambiguity, just as the shift-reduce parser from Table 1 , and this can be resolved in the same way, depending on whether the intended strategy is (non-)strict left-before-right or right-before-left, and whether the configuration is the result of a shift, reduce left, or reduce right. Concretely, we can restrict parse trees to attach children lower in the tree if they would be attached earlier in the computation, and thereby we obtain",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 99,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 301,
"end": 308,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "Figure 3: A node ν with label in N × N_r translates to configuration (d̄_1 ··· d̄_k, ē_1 ··· ē_m, T), via its shortest path to the root. The overlined symbols denote the integers between 0 and n corresponding to d_1 ··· d_k e_1 ··· e_m ∈ p⁺s*.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "a bijection between parse trees and computations. For example, in the middle column of the parse tree in Figure 2 , the (P, S) and its right child occur below the (S, S) and its left child, to indicate the reduce left precedes the reduce right.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 113,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "The proof in one direction assumes a parse tree, which is traversed to gather the steps of a computation. This traversal is post-order, from left to right, but skipping the nodes representing stack elements below the top of the stack, starting from the leftmost node labeled s. Each node \u03bd with a label in N \u00d7 N r corresponds to a step. If the child of \u03bd is labeled s, then we have a shift, and if it has a right or left child with a label in N , then it corresponds to a reduce left or reduce right, respectively. The configuration resulting from that step can be constructed as sketched in Figure 3 . We follow the shortest path from \u03bd to the root. All the leaves to the right of the path correspond to the remaining input. For the stack, we gather the leaves in the columns of the nodes on the path, as well as those of the left children of nodes on the path. Compare this with the concept of right-sentential forms in the theory of context-free parsing.",
"cite_spans": [],
"ref_spans": [
{
"start": 592,
"end": 600,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "For a proof in the other direction, we can make use of existing parsing theory, which tells us how to translate a computation of the shift-reduce parser to a dependency structure, which in turn is easily translated to an undecorated parse tree. It then remains to show that the nodes in that tree can be decorated (in fact in a unique way), according to the rules from Table 4 . This is straightforward given the meanings of P and S described earlier in this section. Most notably, the absence of a rule Figure 4 : Components C 1 , C 2 , C 3 partitioning nodes in \u03b2, and gold edges linking them to \u03b1.",
"cite_spans": [],
"ref_spans": [
{
"start": 369,
"end": 376,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 504,
"end": 512,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "with right-hand side ( , P ) P does not prevent the decoration of a tree that was constructed out of a computation, because a reduction involving two nodes within the stack is only possible if the rightmost of these nodes eventually appears on top of the stack, which is only possible when the computation has previously made a k a descendant of that node, hence we would have S rather than P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "5 O(n 2 O(n 2 O(n 2 ) Time Algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "Assume a given configuration (α, β, T) as before, resulting from a shift or reduction. Let α = a_0 ··· a_k, A = {a_0, . . . , a_k}, and let B be the set of nodes in β. We again wish to calculate the maximum value of |T ∩ T_g| for any T such that (α, β, ∅) ⊢* (0, ε, T), but now under the assumption that T_g is projective. Let us call this value σ_max. We define w in terms of δ_g as in the previous section, setting w(i, j) = −∞ for an appropriate subset of pairs (i, j) to enforce a strategy that is (non-)strict left-before-right or right-before-left. The edges in T_g ∩ (B × B) partition the remaining input into maximal connected components. Within these components, a node b ∈ B is called critical if it satisfies one or both of the following two conditions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 At least one descendant of b (according to T g ) is not in B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 The parent of b (according to T g ) is not in B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
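{
"text": "The two conditions above can be checked directly from the gold edges; the following sketch uses our own encoding (T_g as (parent, child) pairs over all nodes), not the paper's notation:

```python
# Sketch (encoding is ours): compute B_crit, the critical nodes of the
# remaining input B, from gold edges T_g given as (parent, child) pairs.

def critical_nodes(T_g, B):
    B = set(B)
    children, parent = {}, {}
    for p, c in T_g:
        children.setdefault(p, []).append(c)
        parent[c] = p

    def has_descendant_outside(b):
        stack = list(children.get(b, []))
        while stack:                       # DFS through gold descendants of b
            d = stack.pop()
            if d not in B:
                return True
            stack.extend(children.get(d, []))
        return False

    return {b for b in B
            if parent.get(b) not in B or has_descendant_outside(b)}
```

For instance, with stack nodes {0, 1} and remaining input {2, 3, 4, 5}, a node whose gold parent or some gold descendant lies in the stack is critical, while a node whose whole gold neighborhood stays within the remaining input is not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},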
{
"text": "Let B crit \u2286 B be the set of critical nodes, listed in order as b 1 , . . . , b m , and let B ncrit = B \\ B crit . Figure 4 sketches three components as well as edges in T g \u2229 (A \u00d7 B) and T g \u2229 (B \u00d7 A). Component C 1 , for example, contains the critical elements b 1 , b 2 , and b 3 . The triangles under b 1 , . . . , b 7 represent subtrees consisting of edges leading to non-critical nodes. For each b \u2208 B crit , |T g \u2229 ({b} \u00d7 A)| is zero or more, or in words, critical nodes have zero or more children in the stack. Further, if (a, b) \u2208 T g \u2229 (A \u00d7 B crit ), then b is the rightmost critical node in a component; examples are b 5 and b 7 in the figure. Let T max be any tree such that (\u03b1, \u03b2, \u2205) * (0, \u03b5, T max ) and |T max \u2229 T g | = \u03c3 max . Then we can find another tree T max that has the same properties and in addition satisfies:",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "1. T g \u2229 (B \u00d7 B ncrit ) \u2286 T max , 2. T max \u2229 (B ncrit \u00d7 A) = \u2205, 3. T max \u2229 (B \u00d7 B crit ) \u2286 T g ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "or in words, (1) the subtrees rooted in the critical nodes are entirely included, (2) no child of a non-critical node is in the stack, and (3) within the remaining input, all edges to critical nodes are gold. Very similar observations were made before by Goldberg et al. (2014) , and therefore we will not give full proofs here. The structure of the proof is in each case that all violations of a property can be systematically removed, by rearranging the computation, in a way that does not decrease the score.",
"cite_spans": [
{
"start": 255,
"end": 277,
"text": "Goldberg et al. (2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "We need two more properties:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "4. If (a, b) \u2208 T max \u2229 (A \u00d7 B crit ) \\ T g then either:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 b is the rightmost critical node in its component, or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "• there is (b, a′) ∈ T_max ∩ T_g, for some a′ ∈ A, and there is at least one other critical node b′ to the right of b, but in the same component, such that (b′, a″) ∈ T_max ∩ T_g or (a″, b′) ∈ T_max ∩ T_g, for some a″ ∈ A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "5. If (b, a) \u2208 T max \u2229 (B crit \u00d7 A) \\ T g then there is (b, a ) \u2208 T max , for some a \u2208 A, such that a is a sibling of a immediately to its right. Figure 5 , to be discussed in more detail later, illustrates property (4) for the non-gold edge from a 4 ; this edge leads to b 4 (which has outgoing gold edge to a 5 ) rather than to b 5 or b 6 . It further respects property (4) because of the gold edges connected to b 7 and b 8 , which occur to the right of b 4 but in the same component. Property (5) is illustrated for the non-gold edge from b 3 to a 8 , which has sibling a 9 immediately to the right. The proof that property (4) may be assumed to hold, without loss of generality, again involves making local changes to the computation, in particular replacing the b in an offending nongold edge (a, b) \u2208 A \u00d7 B crit by another critical node b further to the left or at the right end of the component. Similarly, for property (5), if we have an offending non-gold edge (b, a), then we can rearrange the computation, such that node a is reduced not into b but into one of the descendants of b in B that was given children in A. If none of the descendants of b in B was given children in A, then a can instead be reduced into its neighbor in the stack immediately to the left, without affecting the score.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "By properties (1)-(3), we can from here on ignore non-critical nodes, so that the remaining task is to calculate σ_max − |B_ncrit|. In fact, we go further than that and calculate σ_max − M, where M = |T_g ∩ (B × B)|.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "In other words, we take for granted that the score can be at least as much as the number of gold edges within the remaining input, which leaves us with the task of counting the additional gold edges in the optimal computation. For any given component C we can consider the sequence of edges that the computation creates between A and C, in the order in which they are created:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 for the first gold edge between C and A, we count +1,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 for each subsequent gold edge between C to A, we count +1,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 we ignore interspersed non-gold edges from C to A,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 but following a non-gold edge from A to C, the immediately next gold edge between C and A is not counted, because that nongold edge implies that another gold edge in B crit \u00d7 B crit cannot be created. This is illustrated by Figure 5 . For (b 3 , a 9 ) we count +1, it being the first gold edge connected to the component. For the subsequent three gold edges, we count +1 for each, ignoring the nongold edge (b 3 , a 8 ). The non-gold edge (a 4 , b 4 ) implies that the parent of b 4 is already determined. One would then perhaps expect we count \u22121 for non-creation of (b 5 , b 4 ), considering (b 5 , b 4 ) was already counted as part of M . Instead, we let this \u22121 cancel out against the following (b 7 , a 3 ) , by letting the latter contribute +0 rather than +1. The subsequent edge (b 7 , a 2 ) again contributes +1, but the non-gold edge (a 1 , b 7 ) means that the subsequent (a 0 , b 8 ) contributes +0. Hence the net count in this component is 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 234,
"text": "Figure 5",
"ref_id": "FIGREF2"
},
{
"start": 701,
"end": 713,
"text": "(b 7 , a 3 )",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
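{
"text": "This per-component counting scheme can be simulated directly. The encoding below is our own: each created edge between the component C and the stack A is a pair of a direction and a gold flag, in creation order, and the test sequence is our reading of the Figure 5 walk-through.

```python
# Sketch of the per-component counting scheme described above.
# Each entry is (direction, is_gold), in order of creation, where
# direction is 'to_A' for an edge from the component C to the stack A,
# and 'from_A' for an edge from A into C.

def component_count(edges):
    count = 0
    suppress_next_gold = False   # set by a non-gold edge from A to C
    for direction, gold in edges:
        if gold:
            count += 0 if suppress_next_gold else 1
            suppress_next_gold = False
        elif direction == 'from_A':
            suppress_next_gold = True
        # non-gold edges from C to A are simply ignored
    return count
```

On the walk-through above (four gold edges, then a non-gold edge from A suppressing the next gold edge, and so on), this yields the net count of 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},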
{
"text": "The main motivation for properties (1)-(5) is that they limit the input positions that can be relevant for a node that is on top of the stack, thereby eliminating one factor m from the time complexity. More specifically, the gold edges relate a stack element to a ''current critical node'' in a ''current component''. We need to distinguish however between three possible states:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 N (none): none of the critical nodes from the current component were shifted on to the stack yet,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 C (consumed): the current critical node was 'consumed' by it having been shifted and assigned a parent,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u2022 F (fresh): the current critical node was not consumed, but at least one of the preceding critical nodes in the same component was consumed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "For 0 \u2264 i \u2264 k, we define p(i) to be the index j such that (b j , a i ) \u2208 T g , and if there is no such j, then p(i) = \u22a5, where \u22a5 denotes 'undefined'. For 0 \u2264 i < k, we let p \u2265 (i) = p(i) if p(i) = \u22a5, and p \u2265 (i) = p \u2265 (i + 1) otherwise, and further p \u2265 (k) = p(k). Intuitively, we seek a critical node that is the parent of a i , or if there is none, of a i+1 , . . . We define c(i) to be the smallest j such that (a i , b j ) \u2208 T g , or in words, the index of the leftmost child in the remaining input, and c(i) = \u22a5 if there is none.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "As representative element of a component with critical element b j we take the critical element that is rightmost in that component, or formally, we define R(j) to be the largest j such that b j is an ancestor (by T g \u2229 (B crit \u00d7 B crit )) of b j . For completeness, we define R(\u22a5) = \u22a5. We let P (i) = R(p(i)) and P \u2265 (i) = R(p \u2265 (i)). Note that score(i, j, q) = 0 if i < 0, otherwise: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "score(i, j, q) = [nchildren(i)\u2212\u2206(c(i) = P \u2265 (j) \u2227 q = N )] \u2297 w(i, j) \u2297 score(i \u2212 1, i, \u03c4 (i, j, q)) \u2295 w(j, i) \u2297 score(i \u2212 1, j, q) \u2295 [if p(j) = \u22a5 \u2228 q = C then \u2212\u221e else \u2206(q = N ) \u2297 score (i, p(j))] score (i, j) = 0 if i < 0, otherwise: score (i, j) = [if p (i, j) = \u22a5 then score (i \u2212 1, j) else 1 \u2297 score (i \u2212 1, p (i, j))] \u2295 nchildren(i) \u2297 score(i \u2212 1, i, \u03c4 (i, j)) nchildren(i) = |{j | w(i, j + k) = 1}| \u03c4 (i, j, q) = if q = N \u2228 P \u2265 (i) = P \u2265 (j) then N else if p \u2265 (i) = p \u2265 (j) then F else q \u03c4 (i, j) = if P \u2265 (i) = R(j) then N else if p \u2265 (i) = j then F else C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "R(c(i)) = c(i) for each i. For 0 \u2264 i \u2264 k and 1 \u2264 j \u2264 m, we let p (i, j) = p(i) if P (i) = R(j) and p (i, j) = \u22a5 otherwise; or in words, p (i, j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "is the index of the parent of a i in the remaining input, provided it is in the same component as b j . Table 5 presents the algorithm, expressed as system of recursive equations. Here score(i, j, q) represents the maximum number of gold edges (in addition to M ) in a computation from",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "(a_0 ··· a_i a_j, b_ℓ ··· b_m, ∅), where ℓ depends on the state q ∈ {N, C, F}. If q = N, then ℓ is the smallest number such that R(ℓ) = P≥(j); critical nodes from the current component were not yet shifted. If q = C, then ℓ = p≥(j) + 1 or ℓ = P≥(j) + 1; this can be related to the two cases distinguished by property (4). If q = F, then ℓ is greater than the smallest number ℓ′ such that R(ℓ′) = P≥(j), but smaller than or equal to p≥(j), or equal to P≥(j) + 1. Similarly, score′(i, j) represents the maximum number of gold edges in a computation from (a_0 ··· a_i b_j, b_{j+1} ··· b_m, ∅).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "For i \u2265 0, the value of score(i, j, q) is the maximum (by \u2295) of three values. The first corresponds to a reduction of a j into a i , which turns the stack into a 0 \u2022 \u2022 \u2022 a i\u22121 a i ; this would also include shifts of any remaining right children of a i , if there are any, and their reduction into a i . Because there is a new top-of-stack, the state is updated using \u03c4 . The function nchildren counts the critical nodes that are children of a i . We define nchildren in terms of w rather than T g , as in the case of the right-before-left strategy Figure 6 : Graphical representation of the first value in the definition of score, for the case q = F, assuming c(i) = P \u2265 (j) = and a i further has children b +1 and b +2 . Because q = F, there was some other node b in the same component that was shifted on to the stack earlier and given a (non-gold) parent; let us assume = \u2212 1. We can add 3 children to the score, but should subtract \u2206(c(i) = P \u2265 (j) \u2227 q = N ) = 1, to compensate for the fact that edge (b , b \u22121 ) cannot be constructed, as b \u22121 can only have one parent. If we further assume a i has a parent among the critical nodes, then that parent must be in a different component, and",
"cite_spans": [],
"ref_spans": [
{
"start": 548,
"end": 556,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "therefore \u03c4 (i, j, q) = N .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "after a reduce right we would preclude right children of a k by setting w(k, i) = \u2212\u221e for k < i. The leftmost of the children, at index c(i), is not counted (or in other words, 1 is subtracted from the number of children) if it is in the current component P \u2265 (j) and that component is anything other than 'none'; here \u2206 is the indicator function, which returns 1 if its Boolean argument evaluates to true, and 0 otherwise. Figure 6 illustrates one possible case.",
"cite_spans": [],
"ref_spans": [
{
"start": 423,
"end": 431,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "The second value corresponds to a reduction of a i into a j , which turns the stack into a 0 \u2022 \u2022 \u2022 a i\u22121 a j , leaving the state unchanged as the top of the stack is unchanged. The third value is applicable if a j has parent b that has not yet been consumed, and it corresponds to a shift of b and a reduction of a i into b (and possibly further shifts and reductions that are implicit), resulting in stack a 0 \u2022 \u2022 \u2022 a i b . If this creates the first gold edge connected to the current component, then we add +1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "For i \u2265 0, the value of score (i, j) is the maximum of two values. The first value distinguishes two cases. In the first case, a i does not have a parent in the same component as b j , and a i is reduced into b j without counting the (non-gold) edge. In the second case, a i is reduced into its parent, which is b j or another critical node that is an ancestor of b j ; in this case we count the gold edge. The second value in the definition of score (i, j) corresponds to a reduction of b j into a i (as well as shifts of any critical nodes that are children of a i , and their reduction into a i ), resulting in stack Figure 7 : Assuming the thick edges are gold, then the thin edge cannot be gold as well, as the gold tree is projective. A score obtained from a stack a 0 \u2022 \u2022 \u2022 a i\u22121 a i is therefore at least as high as a score obtained from a stack a 0 \u2022 \u2022 \u2022 a i\u22121 a j , unless all of a +1 , . . . , a i first become children of a j via a series of reduce right steps, all producing non-gold edges, and therefore adding nothing to the score. The \u03ba function implements such a series of reduce right steps.",
"cite_spans": [],
"ref_spans": [
{
"start": 620,
"end": 628,
"text": "Figure 7",
"ref_id": null
},
{
"start": 858,
"end": 898,
"text": "\u2022 \u2022 \u2022 a i\u22121 a j , unless all of a +1 , .",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "a 0 \u2022 \u2022 \u2022 a i\u22121 a i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "The state is updated using \u03c4 , in the light of the new top-of-stack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "The top-level call is score(k \u2212 1, k, N ). As this does not account for right children of the top of stack a k , we need to add nchildren(k). Putting everything together, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "\u03c3 max = M \u2297 score(k \u2212 1, k, N ) \u2297 nchildren(k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
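{
"text": "As the phrase 'maximum (by ⊕)' above indicates, ⊕ is max and ⊗ is addition over scores extended with −∞; in code this amounts to the max-plus semiring:

```python
# The score equations live in the max-plus semiring: 'oplus' picks the
# best alternative, 'otimes' accumulates edge counts, and -inf marks an
# impossible branch (absorbing for otimes, discarded by oplus).
NEG_INF = float('-inf')

def oplus(*xs):
    return max(xs)

def otimes(*xs):
    return NEG_INF if NEG_INF in xs else sum(xs)
```

The top-level combination σ_max = M ⊗ score(k − 1, k, N) ⊗ nchildren(k) is then simply otimes of the three quantities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},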
{
"text": "The time complexity is quadratic in k + m \u2264 n, given the quadratically many combinations of i and j in score(i, j, q) and score (i, j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tabular Dependency Parsing",
"sec_num": "3"
},
{
"text": "Under the same assumption as in the previous section, namely, that T_g is projective, we can further reduce the time complexity of computing σ_max, by two observations. First, let us define λ(i, j) to be true if and only if there is an ℓ < i such that (a_ℓ, a_j) ∈ T_g or (a_j, a_ℓ) ∈ T_g. If (a_j, a_i) ∉ T_g and λ(i, j) is false, then the highest score attainable from a configuration (a_0 ··· a_{i−1} a_j, β, ∅) is no higher than the highest score attainable from (a_0 ··· a_{i−1} a_i, β, ∅), or, if a_j has a parent b_{j′}, from (a_0 ··· a_i b_{j′}, β′, ∅), for an appropriate suffix β′ of β.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "This means that in order to calculate score(i, j, q) we do not need to calculate score(i \u2212 1, j, q) in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "Secondly, if (a j , a i ) / \u2208 T g and \u03bb(i, j) is true, and if there is < i such that (a , a i ) \u2208 T g or (a i , a ) \u2208 T g , then there are no edges between a j and a i for any i with < i < i, because of projectivity of T g . We therefore do not need to calculate score (i , j, q) for such values of i in order to find the computation with the highest score. This is illustrated in Figure 7 . Let us define \u03ba(i) to be the smallest such that (a , a i ) \u2208 T g or (a i , a ) \u2208 T g , or i \u2212 1 if there is no such . In the definition of score, we may now replace w(j, i) \u2297 score(i \u2212 1, j, q) by:",
"cite_spans": [
{
"start": 269,
"end": 279,
"text": "(i , j, q)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 381,
"end": 389,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "[if w(j, i) = 1 then 1 \u2297 score(i \u2212 1, j, q) else if w(j, i) = 0 \u2227 \u03bb(i, j) then score(\u03ba(i), j, q) else \u2212\u221e]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
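{
"text": "The κ table can be precomputed in one pass over the gold edges; a small sketch with our own encoding (gold (parent, child) pairs restricted to stack indices):

```python
# Sketch (encoding is ours): kappa(i) is the smallest l with a gold edge
# between a_l and a_i, or i - 1 if a_i has no gold edge to its left.

def make_kappa(gold_pairs):
    smallest = {}
    for p, c in gold_pairs:
        lo, hi = min(p, c), max(p, c)
        if lo < smallest.get(hi, hi):
            smallest[hi] = lo
    return lambda i: smallest.get(i, i - 1)
```

Because each stack position stores only its leftmost gold neighbor, the lookup in the modified recursion is constant-time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},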
{
"text": "Similarly, we define \u03bb (i, j) to be true if and only if there is an < i such that (a , b j ) \u2208 T g or (b j , a ) \u2208 T g for some j with R(j ) = R(j).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "In the definition of score , we may now replace score (i \u2212 1, j) by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "[if \u03bb (i, j) then score (\u03ba(i), j) else \u2212\u221e]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "Thereby the algorithm becomes linear-time, because the number of values score(i, j, q) and score (i, j) that are calculated for any i is now linear. To see this, consider that for any i, score(i, j, q) would be calculated only if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "j = i + 1, if (a i , a j ) \u2208 T g or (a j , a i ) \u2208 T g , if (a j , a i+1 ) \u2208 T g , or if j is smallest such that there is < i with (a , a j ) \u2208 T g or (a j , a ) \u2208 T g .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "Similarly, score(i, j) would be calculated only if score(i, j , q) would be calculated and (b j , a j ) \u2208 T g , if (b j , a i+1 ) \u2208 T g , or if j is smallest such that there is \u2264 i with (a , b j ) \u2208 T g or (b j , a ) \u2208 T g for some j such that b j an ancestor of b j in the same component.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "O(n) O(n) O(n) Time Algorithm",
"sec_num": "6"
},
{
"text": "A typical application would calculate the optimal step for several or even all configurations within one computation. Between one configuration and the next, the stack differs at most in the two rightmost elements and the remaining input differs at most in that it loses its leftmost element. Therefore, all but a constant number of values of score(i, j, q) and score (i, j) can be reused, to make the time complexity closer to constant time for each calculation of the optimal step. The practical relevance of this is limited however if one would typically reload the data structures containing the relevant values, which are of linear size. Hence we have not pursued this further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Towards Constant Time Per Calculation",
"sec_num": "7"
},
{
"text": "Our experiments were run on a laptop with an Intel i7-7500U processor (4 cores, 2.70 GHz) with 8 GB of RAM. The implementation language is Java, with DL4J 2 for the classifier, realized as a neural network with a single layer of 256 hidden nodes. Training is with batch size 100, and 20 epochs. Features are the (gold) parts of speech and length-100 word2vec representations of the word forms of the top-most three stack elements, as well as of the left-most three elements of the remaining input, and the left-most and right-most dependency relations in the top-most two stack elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "8"
},
{
"text": "We need to projectivize our training corpus for the experiments in Section 8.2, using the algorithm described at the end of Section 3. As we are not aware of literature reporting experiments with optimal projectivization, we briefly describe our findings here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimal Projectivization",
"sec_num": "8.1"
},
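{
"text": "Projectivity of a gold tree, which determines whether a tree needs projectivization at all, can be tested with the standard no-crossing-arcs check; the following sketch uses our own encoding (head[i] is the gold parent of token i, with head[0] = −1 marking the root), not the paper's notation:

```python
# Sketch of the standard no-crossing-arcs test for projectivity.

def is_projective(head):
    edges = [(min(i, head[i]), max(i, head[i]))
             for i in range(len(head)) if head[i] >= 0]
    # two arcs cross iff one starts strictly inside the other and ends outside
    return not any(a < c < b < d for a, b in edges for c, d in edges)
```

The quadratic pair check suffices for corpus preprocessing; linear-time variants exist but add no clarity here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimal Projectivization",
"sec_num": "8.1"
},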
{
"text": "Projectivizing all the training sets in Universal Dependencies v2.2 3 took 244 sec in total, or 0.342 ms per tree. As mentioned earlier, there may be multiple projectivized trees that are optimal in terms of accuracy, for a single gold tree. We are not aware of meaningful criteria that tell us how to choose any particular one of them, and for our experiments in Section 8.2 we have chosen an arbitrary one. It is conceivable, however, that the choices of the projectivized trees would affect the accuracy of a parser trained on them. Figure 8 illustrates the degree of ''choice'' when projectiving trees. We consider Table 6 : Accuracy (LAS or UAS, which here are identical) of pseudo-projectivization and of optimal projectivization. two languages that are known to differ widely in the prevalence of non-projectivity, namely Ancient Greek (PROIEL) and Japanese (BCCWJ), and we consider one more language, German (GSD), that falls in between (Straka et al., 2015) . As can be expected, the degree of choice grows roughly exponentially in sentence length. Table 6 shows that pseudo-projectivization is non-optimal. We realized pseudo-projectivization using MaltParser 1.9.0. 4",
"cite_spans": [
{
"start": 945,
"end": 966,
"text": "(Straka et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 536,
"end": 544,
"text": "Figure 8",
"ref_id": "FIGREF3"
},
{
"start": 619,
"end": 626,
"text": "Table 6",
"ref_id": null
},
{
"start": 1058,
"end": 1065,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Optimal Projectivization",
"sec_num": "8.1"
},
{
"text": "To investigate the run-time behavior of the algorithms, we trained our shift-reduce dependency parser on the German training corpus, after it was projectivized as in Section 8.1. In a second pass over the same corpus, the parser followed the steps returned by the trained classifier. For each configuration that was obtained in this way, the running time was recorded of calculating the optimal step, with the non-strict left-before-right strategy. For each configuration, it was verified that the calculated scores, for shift, reduce left, and reduce right, were the same between the three algorithms from Sections 4, 5, and 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the Optimal Step",
"sec_num": "8.2"
},
{
"text": "The two-pass design was inspired by Choi and Palmer (2011). We chose this design, rather than online learning, as we found it easiest to implement. Goldberg and Nivre (2012) discuss the relation between multi-pass and online learning approaches.",
"cite_spans": [
{
"start": 148,
"end": 173,
"text": "Goldberg and Nivre (2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing the Optimal Step",
"sec_num": "8.2"
},
{
"text": "As Figure 9 shows, the running times of the algorithms from Sections 5 and 6 grow slowly as the summed length of stack and remaining input grows; note the logarithmic scale. The improvement of the linear-time algorithm over the quadratic-time algorithm is perhaps less than one may expect. This is because the cost of calculating the critical nodes and constructing the necessary tables, such as p, p', and R, is considerable compared to the cost of the memoized recursive calls of score and score'. Both these algorithms contrast with the algorithm from Section 4, applied to projectivized trees as above (hence tagged proj in Figure 9) , and with the remaining input simplified to just its critical nodes. For k + m = 80, the cubic-time algorithm is slower than the linear-time algorithm by a factor of about 65. Nonetheless, we find that the cubic-time algorithm is practically relevant, even for long sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 9",
"ref_id": "FIGREF4"
},
{
"start": 631,
"end": 640,
"text": "Figure 9)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Computing the Optimal Step",
"sec_num": "8.2"
},
{
"text": "The decreases at roughly k + m = 88, which are most visible for Section 4 (proj), are explained by the fact that the running time is primarily determined by k + m', where m' is the number of critical nodes. Because k + m is bounded by the sentence length and the stack height k tends to be much less than the sentence length, high values of k + m tend to result from the length m of the remaining input being large, which in turn implies that there will be more non-critical nodes that are removed before the most time-consuming part of the analysis is entered. This is confirmed by Figure 10. Table 7: Accuracies, with percentage of trees that are non-projective, and number of tokens. Only gold computations are considered in a single pass (1,2) or there is a second pass as well (3,4,5). The first pass is on the subset of projective trees (1,3) or on all trees after optimal projectivization (2,4,5). The second pass is on projectivized trees (3,4) or on unprojectivized trees (5).",
"cite_spans": [],
"ref_spans": [
{
"start": 582,
"end": 592,
"text": "Figure 10.",
"ref_id": "FIGREF0"
},
{
"start": 606,
"end": 613,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computing the Optimal Step",
"sec_num": "8.2"
},
{
"text": "The main advantage of the cubic-time algorithm is that it is also applicable if the training corpus has not been projectivized. To explore this, we have run this algorithm on the same corpus again, but now without projectivization in the second pass (for training the classifier in the first pass, projectivization was done as before). In this case, we can no longer remove non-critical nodes (without affecting correctness), and now the curve is monotonically increasing, as shown by Section 4 (unproj) in Figure 9 . Nevertheless, with mean running times below 0.25 sec even for input longer than 100 tokens, this algorithm is practically relevant.",
"cite_spans": [],
"ref_spans": [
{
"start": 504,
"end": 512,
"text": "Figure 9",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Computing the Optimal Step",
"sec_num": "8.2"
},
{
"text": "If a corpus is large enough for the parameters of a classifier to be reliably estimated, or if the vast majority of trees are projective, then accuracy is not likely to be much affected by the work in this paper. We therefore also consider six languages that have some of the smallest corpora in UD v2.2 in combination with a relatively large proportion of non-projective trees: Danish, Basque, Greek, Old Church Slavonic, Gothic, and Hungarian. For these languages, Table 7 shows that accuracy is generally higher if training can benefit from all trees. In a few cases, it appears to be slightly better to train directly on non-projective trees rather than on optimally projectivized trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 466,
"end": 473,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "8.3"
},
{
"text": "We have presented the first algorithm to calculate the optimal step for shift-reduce dependency parsing that is applicable to non-projective training corpora. Perhaps even more innovative than its functionality is its modular architecture, which implies that the same is possible for related kinds of parsing, as long as the set of allowable transitions can be described in terms of a split context-free grammar. The application of the framework to, among others, arc-eager dependency parsing is to be reported elsewhere.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "We have also shown that calculation of the optimal step is possible in linear time if the training corpus is projective. This is the first time this has been shown for a form of projective, deterministic dependency parsing that does not have the property of arc-decomposability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "https://deeplearning4j.org/. 3 https://universaldependencies.org/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.maltparser.org/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The author wishes to thank the reviewers for comments and suggestions, which led to substantial improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploiting dynamic oracles to train projective dependency parsers on non-projective trees",
"authors": [
{
"first": "Lauriane",
"middle": [],
"last": "Aufrant",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wisniewski",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "413--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauriane Aufrant, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2018. Exploiting dynamic ora- cles to train projective dependency parsers on non-projective trees. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, volume 2, pages 413-419. New Orleans, LA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Getting the most out of transition-based dependency parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jinho",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2011,
"venue": "49th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "687--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinho D. Choi and Martha Palmer. 2011. Getting the most out of transition-based dependency pars- ing. In 49th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 687-692. Portland, OR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bilexical grammars and their cubic-time parsing algorithms",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in Probabilistic and other Parsing Technologies",
"volume": "",
"issue": "",
"pages": "29--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Harry Bunt and Anton Nijholt, editors, Advances in Probabilis- tic and other Parsing Technologies, chapter 3, pages 29-61. Kluwer Academic Publishers.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Efficient parsing for bilexical context-free grammars and head automaton grammars",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 1999,
"venue": "37th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "457--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In 37th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 457-464. Maryland.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A dynamic oracle for lineartime 2-planar dependency parsing",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "386--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Carlos G\u00f3mez- Rodr\u00edguez. 2018a. A dynamic oracle for linear- time 2-planar dependency parsing. In Conference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, volume 2, pages 386-392. New Orleans, LA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Non-projective dependency parsing with non-local transitions",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "693--700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Carlos G\u00f3mez- Rodr\u00edguez. 2018b. Non-projective dependency parsing with non-local transitions. In Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 2, pages 693-700. New Orleans, LA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Parsing and dependency grammar",
"authors": [
{
"first": "Norman",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 1989,
"venue": "UCL Working Papers in Linguistics",
"volume": "1",
"issue": "",
"pages": "296--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Norman Fraser. 1989. Parsing and dependency grammar. UCL Working Papers in Linguistics, 1:296-319.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A dynamic oracle for arc-eager dependency parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2012,
"venue": "The 24th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "959--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A dy- namic oracle for arc-eager dependency parsing. In The 24th International Conference on Com- putational Linguistics, pages 959-976. Mumbai.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Training deterministic parsers with non-deterministic oracles",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "403--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics, 1:403-414.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A tabular method for dynamic oracles in transition-based parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Sartorio",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "119--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg, Francesco Sartorio, and Giorgio Satta. 2014. A tabular method for dynamic ora- cles in transition-based parsing. Transactions of the Association for Computational Linguistics, 2:119-130.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A deductive approach to dependency parsing",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2008,
"venue": "46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "968--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, John Carroll, and David Weir. 2008. A deductive approach to depen- dency parsing. In 46th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 968-976. Columbus, OH.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An efficient dynamic oracle for unrestricted non-projective parsing",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez",
"suffix": ""
},
{
"first": "-Rodr\u00edguez",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez-Gonz\u00e1lez",
"suffix": ""
}
],
"year": 2015,
"venue": "53rd Annual Meeting of the Association for Computational Linguistics and 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "256--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos G\u00f3mez-Rodr\u00edguez and Daniel Fern\u00e1ndez- Gonz\u00e1lez. 2015. An efficient dynamic oracle for unrestricted non-projective parsing. In 53rd Annual Meeting of the Association for Compu- tational Linguistics and 7th International Joint Conference on Natural Language Processing, volume 2, pages 256-261. Beijing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A polynomial-time dynamic oracle for non-projective dependency parsing",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Sartorio",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "917--927",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, Francesco Sartorio, and Giorgio Satta. 2014. A polynomial-time dynamic oracle for non-projective dependency parsing. In Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 917-927. Doha.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dynamic programming for linear-time incremental parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1077--1086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and Kenji Sagae. 2010. Dynamic pro- gramming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1077-1086. Uppsala.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Transforming projective bilexical dependency grammars into efficientlyparsable CFGs with Unfold-Fold",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2007,
"venue": "45th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "168--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2007. Transforming projective bilexical dependency grammars into efficiently- parsable CFGs with Unfold-Fold. In 45th Annual Meeting of the Association for Com- putational Linguistics, Proceedings of the Con- ference, pages 168-175. Prague.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pseudo-projectivity, a polynomially parsable non-projective dependency grammar",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Sylvain Kahane",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Nasr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 1998,
"venue": "36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "646--652",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvain Kahane, Alexis Nasr, and Owen Rambow. 1998. Pseudo-projectivity, a polynomially pars- able non-projective dependency grammar. In 36th Annual Meeting of the Association for Computational Linguistics and 17th Interna- tional Conference on Computational Linguis- tics, volume 1, pages 646-652. Montreal.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Guides and oracles for lineartime parsing",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Sixth International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "6--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Kay. 2000. Guides and oracles for linear- time parsing. In Proceedings of the Sixth International Workshop on Parsing Technolo- gies, pages 6-9. Trento.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dynamic programming algorithms for transition-based dependency parsers",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2011,
"venue": "49th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "673--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Kuhlmann, Carlos G\u00f3mez-Rodr\u00edguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency pars- ers. In 49th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 673-682. Portland, OR.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Arc-hybrid non-projective dependency parsing with a static-dynamic oracle",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Miryam De Lhoneux",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2017,
"venue": "15th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "99--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2017. Arc-hybrid non-projective depen- dency parsing with a static-dynamic oracle. In 15th International Conference on Parsing Technologies, pages 99-104. Pisa.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A formalism and a parser for lexicalised dependency grammars",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Nasr",
"suffix": ""
}
],
"year": 1995,
"venue": "Fourth International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "186--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Nasr. 1995. A formalism and a parser for lexicalised dependency grammars. In Fourth International Workshop on Parsing Technolo- gies, pages 186-195. Prague and Karlovy Vary.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Algorithms for deterministic incremental dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "513--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computa- tional Linguistics, 34(4):513-553.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Memory-based dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Eighth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of the Eighth Conference on Computational Natural Language Learning, pages 49-56. Boston, MA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Pseudoprojective dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2005,
"venue": "43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "99--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo- projective dependency parsing. In 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 99-106. Ann Arbor, MI.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Arc-swift: A novel transition system for dependency parsing",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "2",
"issue": "",
"pages": "110--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi and Christopher D. Manning. 2017. Arc-swift: A novel transition system for depen- dency parsing. In 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, volume 2, pages 110-117. Vancouver.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Parsing universal dependency treebanks using neural networks and searchbased oracle",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Fourteenth International Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "208--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Straka, Jan Haji\u010d, Jana Strakov\u00e1, and Jan Haji\u010d, jr. 2015. Parsing universal dependency treebanks using neural networks and search- based oracle. In Proceedings of the Fourteenth International Workshop on Treebanks and Linguistic Theories, pages 208-220. Warsaw.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "Hiroyasu",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "8th International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In 8th International Workshop on Parsing Technologies, pages 195-206. LORIA, Nancy.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Dependency structure and corresponding parse tree that encodes a computation of a shift-reduce parser.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Dependency structure and corresponding parse tree, for stack of height 4 and remaining input of length 1.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Counting additional gold edges in A\u00d7B crit \u222a B crit \u00d7 A. Gold edges are thick, others are thin. Gold edges that are not created appear dotted.",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Geometric mean of the number of optimal projectivized trees against sentence length.",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "Mean running time per step (milliseconds) against length of input, for projectivized and unprojectivized trees.",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "Mean k + m against k + m.",
"uris": null
},
"TABREF0": {
"content": "<table/>",
"num": null,
"text": "Shift-reduce dependency parsing.",
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table><tr><td>: Weighted parsing, for an arbitrary semi-</td></tr><tr><td>ring, with 0 \u2264 i &lt; j \u2264 n.</td></tr></table>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table/>",
"num": null,
"text": "Grammar for projective dependency parsing, with \u03a3 = {a} and N",
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table/>",
"num": null,
"text": "Grammar for dependency parsing of p k s m+1 , representing a stack of length k + 1 and remaining input of length m, with \u03a3 = {p, s}, N",
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF6": {
"content": "<table><tr><td>LAS UAS</td><td>subset</td><td>all</td><td>subset</td><td>all</td><td>all</td></tr><tr><td/><td/><td/><td>Sec. 6</td><td/><td/></tr></table>",
"num": null,
"text": "Sec. 6 Sec. 4 de, 13% 71.15 71.33 71.69 72.57 72.55 263,804 78.09 78.14 78.96 79.78 79.77 da, 13% 69.11 71.42 69.95 72.18 72.25 80,378 75.13 76.98 76.30 78.00 78.21 eu, 34% 54.69 58.27 54.11 57.49 57.81 72,974 67.49 70.07 67.71 70.07 70.13 el, 12% 71.62 72.78 70.49 72.66 72.34 42,326 77.45 78.34 77.14 78.91 78.36 cu, 20% 56.25 59.09 56.31 58.78 59.52 37,432 68.08 69.95 69.07 70.10 70.94 got, 22% 51.96 55.00 53.44 55.94 56.20 35,024 64.48 66.58 65.85 67.85 68.09 hu, 26% 52.70 56.20 54.09 57.37 57.62 20,166 65.72 68.96 67.55 70.20 70.30",
"type_str": "table",
"html": null
}
}
}
}