{ "paper_id": "P09-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:54:48.275196Z" }, "title": "Non-Projective Dependency Parsing in Expected Linear Time", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "", "affiliation": { "laboratory": "", "institution": "Uppsala V\u00e4xj\u00f6 University", "location": { "postCode": "SE-75126, SE-35195", "settlement": "V\u00e4xj\u00f6" } }, "email": "joakim.nivre@lingfil.uu.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a novel transition system for dependency parsing, which constructs arcs only between adjacent words but can parse arbitrary non-projective trees by swapping the order of words in the input. Adding the swapping operation changes the time complexity for deterministic parsing from linear to quadratic in the worst case, but empirical estimates based on treebank data show that the expected running time is in fact linear for the range of data attested in the corpora. Evaluation on data from five languages shows state-of-the-art accuracy, with especially good results for the labeled exact match score.", "pdf_parse": { "paper_id": "P09-1040", "_pdf_hash": "", "abstract": [ { "text": "We present a novel transition system for dependency parsing, which constructs arcs only between adjacent words but can parse arbitrary non-projective trees by swapping the order of words in the input. Adding the swapping operation changes the time complexity for deterministic parsing from linear to quadratic in the worst case, but empirical estimates based on treebank data show that the expected running time is in fact linear for the range of data attested in the corpora. 
Evaluation on data from five languages shows state-of-the-art accuracy, with especially good results for the labeled exact match score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Syntactic parsing using dependency structures has become a standard technique in natural language processing with many different parsing models, in particular data-driven models that can be trained on syntactically annotated corpora (Yamada and Matsumoto, 2003; McDonald et al., 2005a; Attardi, 2006; Titov and Henderson, 2007) . A hallmark of many of these models is that they can be implemented very efficiently. Thus, transition-based parsers normally run in linear or quadratic time, using greedy deterministic search or fixed-width beam search (Attardi, 2006; Johansson and Nugues, 2007; Titov and Henderson, 2007) , and graph-based models support exact inference in at most cubic time, which is efficient enough to make global discriminative training practically feasible (McDonald et al., 2005a; McDonald et al., 2005b) .", "cite_spans": [ { "start": 233, "end": 261, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF26" }, { "start": 262, "end": 285, "text": "McDonald et al., 2005a;", "ref_id": "BIBREF11" }, { "start": 286, "end": 300, "text": "Attardi, 2006;", "ref_id": "BIBREF0" }, { "start": 301, "end": 327, "text": "Titov and Henderson, 2007)", "ref_id": "BIBREF24" }, { "start": 549, "end": 564, "text": "(Attardi, 2006;", "ref_id": "BIBREF0" }, { "start": 565, "end": 592, "text": "Johansson and Nugues, 2007;", "ref_id": "BIBREF6" }, { "start": 593, "end": 619, "text": "Titov and Henderson, 2007)", "ref_id": "BIBREF24" }, { "start": 778, "end": 802, "text": "(McDonald et al., 2005a;", "ref_id": "BIBREF11" }, { "start": 803, "end": 826, "text": "McDonald et al., 2005b)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, one problem that 
still has not found a satisfactory solution in data-driven dependency parsing is the treatment of discontinuous syntactic constructions, usually modeled by non-projective dependency trees, as illustrated in Figure 1 . In a projective dependency tree, the yield of every subtree is a contiguous substring of the sentence. This is not the case for the tree in Figure 1 , where the subtrees rooted at node 2 (hearing) and node 4 (scheduled) both have discontinuous yields.", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 241, "text": "Figure 1", "ref_id": null }, { "start": 384, "end": 392, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Allowing non-projective trees generally makes parsing computationally harder. Exact inference for parsing models that allow non-projective trees is NP hard, except under very restricted independence assumptions (Neuhaus and Br\u00f6ker, 1997; McDonald and Satta, 2007) . There is recent work on algorithms that can cope with important subsets of all nonprojective trees in polynomial time (Kuhlmann and Satta, 2009; G\u00f3mez-Rodr\u00edguez et al., 2009) , but the time complexity is at best O(n 6 ), which can be problematic in practical applications. 
Even the best algorithms for deterministic parsing run in quadratic time, rather than linear (Nivre, 2008a) , unless restricted to a subset of non-projective structures as in Attardi (2006) and Nivre (2007) .", "cite_spans": [ { "start": 211, "end": 237, "text": "(Neuhaus and Br\u00f6ker, 1997;", "ref_id": "BIBREF14" }, { "start": 238, "end": 263, "text": "McDonald and Satta, 2007)", "ref_id": "BIBREF10" }, { "start": 384, "end": 410, "text": "(Kuhlmann and Satta, 2009;", "ref_id": "BIBREF8" }, { "start": 411, "end": 440, "text": "G\u00f3mez-Rodr\u00edguez et al., 2009)", "ref_id": "BIBREF3" }, { "start": 632, "end": 646, "text": "(Nivre, 2008a)", "ref_id": "BIBREF22" }, { "start": 714, "end": 728, "text": "Attardi (2006)", "ref_id": "BIBREF0" }, { "start": 733, "end": 745, "text": "Nivre (2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "But allowing non-projective dependency trees also makes parsing empirically harder, because it requires that we model relations between nonadjacent structures over potentially unbounded distances, which often has a negative impact on parsing accuracy. On the other hand, it is hardly possible to ignore non-projective structures completely, given that 25% or more of the sentences in some languages cannot be given a linguistically adequate analysis without invoking non-projective structures (Nivre, 2006; Kuhlmann and Nivre, 2006; Havelka, 2007) .", "cite_spans": [ { "start": 493, "end": 506, "text": "(Nivre, 2006;", "ref_id": "BIBREF20" }, { "start": 507, "end": 532, "text": "Kuhlmann and Nivre, 2006;", "ref_id": "BIBREF7" }, { "start": 533, "end": 547, "text": "Havelka, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Current approaches to data-driven dependency parsing typically use one of two strategies to deal with non-projective trees (unless they ignore them completely). 
Either they employ a non-standard parsing algorithm that can combine non-adjacent substructures (McDonald et al., 2005b; Attardi, 2006; Nivre, 2007) , or they try to recover non-", "cite_spans": [ { "start": 257, "end": 281, "text": "(McDonald et al., 2005b;", "ref_id": "BIBREF12" }, { "start": 282, "end": 296, "text": "Attardi, 2006;", "ref_id": "BIBREF0" }, { "start": 297, "end": 309, "text": "Nivre, 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Words with their incoming dependency labels: ROOT 0 , A 1 (DET), hearing 2 (SBJ), is 3 (ROOT), scheduled 4 (VG), on 5 (NMOD), the 6 (DET), issue 7 (PC), today 8 (ADV), . 9 (P).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1 : Dependency tree for an English sentence (non-projective).", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "projective dependencies by post-processing the output of a strictly projective parser (Nivre and Nilsson, 2005; Hall and Nov\u00e1k, 2005) . In this paper, we will adopt a different strategy, suggested in recent work by Nivre (2008b) and Titov et al. (2009) , and propose an algorithm that only combines adjacent substructures but derives non-projective trees by reordering the input words. The rest of the paper is structured as follows. In Section 2, we define the formal representations needed and introduce the framework of transition-based dependency parsing. 
In Section 3, we first define a minimal transition system and explain how it can be used to perform projective dependency parsing in linear time; we then extend the system with a single transition for swapping the order of words in the input and demonstrate that the extended system can be used to parse unrestricted dependency trees with a time complexity that is quadratic in the worst case but still linear in the best case. In Section 4, we present experiments indicating that the expected running time of the new system on naturally occurring data is in fact linear and that the system achieves state-of-the-art parsing accuracy. We discuss related work in Section 5 and conclude in Section 6.", "cite_spans": [ { "start": 86, "end": 111, "text": "(Nivre and Nilsson, 2005;", "ref_id": "BIBREF16" }, { "start": 112, "end": 133, "text": "Hall and Nov\u00e1k, 2005;", "ref_id": "BIBREF4" }, { "start": 215, "end": 228, "text": "Nivre (2008b)", "ref_id": "BIBREF23" }, { "start": 233, "end": 252, "text": "Titov et al. (2009)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a set L of dependency labels, a dependency graph for a sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Graphs and Trees", "sec_num": "2.1" }, { "text": "x = w 1 , . . . , w n is a directed graph G = (V x , A), where 1. V x = {0, 1, . . . , n} is a set of nodes, 2. A \u2286 V x \u00d7 L \u00d7 V x is a set of labeled arcs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency Graphs and Trees", "sec_num": "2.1" }, { "text": "The set V x of nodes is the set of positive integers up to and including n, each corresponding to the linear position of a word in the sentence, plus an extra artificial root node 0. 
The set A of arcs is a set of triples (i, l, j), where i and j are nodes and l is a label. For a dependency graph G = (V x , A) to be well-formed, we in addition require that it is a tree rooted at the node 0, as illustrated in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 411, "end": 419, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Dependency Graphs and Trees", "sec_num": "2.1" }, { "text": "Following Nivre (2008a), we define a transition system for dependency parsing as a quadruple S = (C, T, c s , C t ), where 1. C is a set of configurations, 2. T is a set of transitions, each of which is a (partial) function t :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "C \u2192 C, 3. c s is an initialization function, mapping a sentence x = w 1 , . . . , w n to a configuration c \u2208 C, 4. C t \u2286 C", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "is a set of terminal configurations. In this paper, we take the set C of configurations to be the set of all triples c = (\u03a3, B, A) such that \u03a3 and B are disjoint sublists of the nodes V x of some sentence x, and A is a set of dependency arcs over V x (and some label set L); we take the initial configuration for a sentence x = w 1 , . . . , w n to be c s (x) = ([0], [1, . . . , n], { }); and we take the set C t of terminal configurations to be the set of all configurations of the form c = ([0], [ ], A) (for any arc set A). The set T of transitions will be discussed in detail in Sections 3.1-3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "We will refer to the list \u03a3 as the stack and the list B as the buffer, and we will use the variables \u03c3 and \u03b2 for arbitrary sublists of \u03a3 and B, respectively. 
For reasons of perspicuity, we will write \u03a3 with its head (top) to the right and B with its head to the left. Thus, c = ([\u03c3|i], [j|\u03b2], A) is a configuration with the node i on top of the stack \u03a3 and the node j as the first node in the buffer B. Given a transition system S = (C, T, c s , C t ), a transition sequence for a sentence x is a sequence ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "C 0,m = (c 0 , c 1 , . . . , c m ) of configurations, such that 1. c 0 = c s (x), 2. c m \u2208 C t , 3. for every i (1 \u2264 i \u2264 m), c i = t(c i\u22121 ) for some t \u2208 T . Transitions and their conditions (Figure 2 ): LEFT-ARC l : ([\u03c3|i, j], B, A) \u21d2 ([\u03c3|j], B, A\u222a{(j, l, i)}), only if i \u2260 0; RIGHT-ARC l : ([\u03c3|i, j], B, A) \u21d2 ([\u03c3|i], B, A\u222a{(i, l, j)}); SHIFT: (\u03c3, [i|\u03b2], A) \u21d2 ([\u03c3|i], \u03b2, A); SWAP: ([\u03c3|i, j], \u03b2, A) \u21d2 ([\u03c3|j], [i|\u03b2], A), only if 0 < i < j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "T p = {LEFT-ARC l , RIGHT-ARC l , SHIFT}; T u = T p \u222a {SWAP}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "The parse assigned to x by C 0,m is the dependency graph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "G cm = (V x , A cm ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "where A cm is the set of arcs in c m .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "A transition system S is sound for a class G of dependency graphs iff, for every sentence x and transition sequence C 0,m for x in S, G cm \u2208 G. 
S is complete for G iff, for every sentence x and dependency graph G for x in G, there is a transition sequence C 0,m for x in S such that G cm = G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Transition Systems", "sec_num": "2.2" }, { "text": "An oracle for a transition system S is a function o : C \u2192 T . Ideally, o should always return the optimal transition t for a given configuration c, but all we require formally is that it respects the preconditions of transitions in T . That is, if o(c) = t then t is permissible in c. Given an oracle o, deterministic transition-based parsing can be achieved by the following simple algorithm:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deterministic Transition-Based Parsing", "sec_num": "2.3" }, { "text": "PARSE(o, x) 1 c \u2190 c s (x) 2 while c \u2209 C t 3 do t \u2190 o(c); c \u2190 t(c) 4 return G c", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deterministic Transition-Based Parsing", "sec_num": "2.3" }, { "text": "Starting in the initial configuration c s (x), the parser repeatedly calls the oracle function o for the current configuration c and updates c according to the oracle transition t. The iteration stops when a terminal configuration is reached. It is easy to see that, provided that there is at least one transition sequence in S for every sentence, the parser constructs exactly one transition sequence C 0,m for a sentence x and returns the parse defined by the terminal configuration c m , i.e., G cm = (V x , A cm ). 
Assuming that the calls o(c) and t(c) can both be performed in constant time, the worst-case time complexity of a deterministic parser based on a transition system S is given by an upper bound on the length of transition sequences in S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deterministic Transition-Based Parsing", "sec_num": "2.3" }, { "text": "When building practical parsing systems, the oracle can be approximated by a classifier trained on treebank data, a technique that has been used successfully in a number of systems (Yamada and Matsumoto, 2003; Attardi, 2006) . This is also the approach we will take in the experimental evaluation in Section 4.", "cite_spans": [ { "start": 181, "end": 209, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF26" }, { "start": 210, "end": 224, "text": "Attardi, 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Deterministic Transition-Based Parsing", "sec_num": "2.3" }, { "text": "Having defined the set of configurations, including initial and terminal configurations, we will now focus on the transition set T required for dependency parsing. The total set of transitions that will be considered is given in Figure 2 , but we will start in Section 3.1 with the subset T p (p for projective) consisting of the first three. In Section 3.2, we will add the fourth transition (SWAP) to get the full transition set T u (u for unrestricted).", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 237, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Transitions for Dependency Parsing", "sec_num": "3" }, { "text": "The minimal transition set T p for projective dependency parsing contains three transitions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projective Dependency Parsing", "sec_num": "3.1" }, { "text": "1. 
LEFT-ARC l updates a configuration with i, j on top of the stack by adding (j, l, i) to A and replacing i, j on the stack by j alone. It is permissible as long as i is distinct from 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projective Dependency Parsing", "sec_num": "3.1" }, { "text": "2. RIGHT-ARC l updates a configuration with i, j on top of the stack by adding (i, l, j) to A and replacing i, j on the stack by i alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projective Dependency Parsing", "sec_num": "3.1" }, { "text": "3. SHIFT updates a configuration with i as the first node of the buffer by removing i from the buffer and pushing it onto the stack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projective Dependency Parsing", "sec_num": "3.1" }, { "text": "The system S p = (C, T p , c s , C t ) is sound and complete for the set of projective dependency trees (over some label set L) and has been used, in slightly different variants, by a number of transition-based dependency parsers (Yamada and Matsumoto, 2003; Nivre, 2004; Attardi, 2006 ; Nivre, 2008a). For proofs of soundness and completeness, see Nivre (2008a) . As noted in section 2, the worst-case time complexity of a deterministic transition-based parser is given by an upper bound on the length of transition sequences. In S p , the number of transitions for a sentence x = w 1 , . . . , w n is always exactly 2n, since a terminal configuration can only be reached after n SHIFT transitions (moving nodes 1, . . . , n from B to \u03a3) and n applications of LEFT-ARC l or RIGHT-ARC l (removing the same nodes from \u03a3). 
Hence, the complexity of deterministic parsing is O(n) in the worst case (as well as in the best case).", "cite_spans": [ { "start": 230, "end": 258, "text": "(Yamada and Matsumoto, 2003;", "ref_id": "BIBREF26" }, { "start": 259, "end": 271, "text": "Nivre, 2004;", "ref_id": "BIBREF19" }, { "start": 272, "end": 285, "text": "Attardi, 2006", "ref_id": "BIBREF0" }, { "start": 349, "end": 362, "text": "Nivre (2008a)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Projective Dependency Parsing", "sec_num": "3.1" }, { "text": "Figure 3 (transition sequence for the sentence in Figure 1 ; each row gives the transition applied, the resulting stack \u03a3 and buffer B, and the arc added, if any): [ROOT 0 ] [A 1 , . . . , . 9 ] | SHIFT: [ROOT 0 , A 1 ] [hearing 2 , . . . , . 9 ] | SHIFT: [ROOT 0 , A 1 , hearing 2 ] [is 3 , . . . , . 9 ] | LA DET : [ROOT 0 , hearing 2 ] [is 3 , . . . , . 9 ] (2, DET, 1) | SHIFT: [ROOT 0 , hearing 2 , is 3 ] [scheduled 4 , . . . , . 9 ] | SHIFT: [ROOT 0 , . . . , is 3 , scheduled 4 ] [on 5 , . . . , . 9 ] | SHIFT: [ROOT 0 , . . . , scheduled 4 , on 5 ] [the 6 , . . . , . 9 ] | SWAP: [ROOT 0 , . . . , is 3 , on 5 ] [scheduled 4 , . . . , . 9 ] | SWAP: [ROOT 0 , hearing 2 , on 5 ] [is 3 , . . . , . 9 ] | SHIFT: [ROOT 0 , . . . , on 5 , is 3 ] [scheduled 4 , . . . , . 9 ] | SHIFT: [ROOT 0 , . . . , is 3 , scheduled 4 ] [the 6 , . . . , . 9 ] | SHIFT: [ROOT 0 , . . . , scheduled 4 , the 6 ] [issue 7 , . . . , . 9 ] | SWAP: [ROOT 0 , . . . , is 3 , the 6 ] [scheduled 4 , . . . , . 9 ] | SWAP: [ROOT 0 , . . . , on 5 , the 6 ] [is 3 , . . . , . 9 ] | SHIFT: [ROOT 0 , . . . , the 6 , is 3 ] [scheduled 4 , . . . , . 9 ] | SHIFT: [ROOT 0 , . . . , is 3 , scheduled 4 ] [issue 7 , . . . , . 9 ] | SHIFT: [ROOT 0 , . . . , scheduled 4 , issue 7 ] [today 8 , . 9 ] | SWAP: [ROOT 0 , . . . , is 3 , issue 7 ] [scheduled 4 , . . . , . 9 ] | SWAP: [ROOT 0 , . . . , the 6 , issue 7 ] [is 3 , . . . , . 9 ] | LA DET : [ROOT 0 , . . . , on 5 , issue 7 ] [is 3 , . . . , . 9 ] (7, DET, 6) | RA PC : [ROOT 0 , hearing 2 , on 5 ] [is 3 , . . . , . 9 ] (5, PC, 7) | RA NMOD : [ROOT 0 , hearing 2 ] [is 3 , . . . , . 9 ] (2, NMOD, 5) | SHIFT: [ROOT 0 , . . . , hearing 2 , is 3 ] [scheduled 4 , . . . , . 9 ] | LA SBJ : [ROOT 0 , is 3 ] [scheduled 4 , . . . , . 9 ] (3, SBJ, 2) | SHIFT: [ROOT 0 , is 3 , scheduled 4 ] [today 8 , . 9 ] | SHIFT: [ROOT 0 , . . . , scheduled 4 , today 8 ] [. 9 ] | RA ADV : [ROOT 0 , is 3 , scheduled 4 ] [. 9 ] (4, ADV, 8) | RA VG : [ROOT 0 , is 3 ] [. 9 ] (3, VG, 4) | SHIFT: [ROOT 0 , is 3 , . 9 ] [ ] | RA P : [ROOT 0 , is 3 ] [ ] (3, P, 9) | RA ROOT : [ROOT 0 ] [ ] (0, ROOT, 3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projective Dependency Parsing", "sec_num": "3.1" }, { "text": "We now consider what happens when we add the fourth transition from Figure 2 to get the extended transition set T u . The SWAP transition updates a configuration with stack [\u03c3|i, j] by moving the node i back to the buffer. This has the effect that the order of the nodes i and j in the appended list \u03a3 + B is reversed compared to the original word order in the sentence. It is important to note that SWAP is only permissible when the two nodes on top of the stack are in the original word order, which prevents the same two nodes from being swapped more than once, and when the leftmost node i is distinct from the root node 0. Note also that SWAP moves the node i back to the buffer, so that LEFT-ARC l , RIGHT-ARC l or SWAP can subsequently apply with the node j on top of the stack.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 76, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "The fact that we can swap the order of nodes, implicitly representing subtrees, means that we can construct non-projective trees by applying LEFT-ARC l or RIGHT-ARC l to subtrees with non-adjacent yields. 
We use the notation A i to denote the subset of A that only contains the outgoing arcs of the node i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "o(c) = LEFT-ARC l if c = ([\u03c3|i, j], B, A c ), (j, l, i) \u2208 A and A i \u2286 A c ; RIGHT-ARC l if c = ([\u03c3|i, j], B, A c ), (i, l, j) \u2208 A and A j \u2286 A c ; SWAP if c = ([\u03c3|i, j], B, A c ) and j < G i ; SHIFT otherwise. Figure 4 : Oracle function for S u = (C, T u , c s , C t ) with target tree G = (V x , A).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "The parser can thus apply LEFT-ARC l or RIGHT-ARC l to subtrees whose yields are not adjacent according to the original word order. This is illustrated in Figure 3 , which shows the transition sequence needed to parse the example in Figure 1 . For readability, we represent both the stack \u03a3 and the buffer B as lists of tokens, indexed by position, rather than abstract nodes. The last column records the arc that is added to the arc set A in a given transition (if any).", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 137, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 207, "end": 215, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "Given the simplicity of the extension, it is rather remarkable that the system S u = (C, T u , c s , C t ) is sound and complete for the set of all dependency trees (over some label set L), including all non-projective trees. The soundness part is trivial, since any terminating transition sequence will have to move all the nodes 1, . . . 
, n from B to \u03a3 (using SHIFT) and then remove them from \u03a3 (using LEFT-ARC l or RIGHT-ARC l ), which will produce a tree with root 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "For completeness, we note first that projectivity is not a property of a dependency tree in itself, but of the tree in combination with a word order, and that a tree can always be made projective by reordering the nodes. For instance, let x be a sentence with dependency tree G = (V x , A), and let < G be the total order on V x defined by an inorder traversal of G that respects the local ordering of a node and its children given by the original word order. Regardless of whether G is projective with respect to x, it must by necessity be projective with respect to < G . We call < G the projective order corresponding to x and G and use it as our canonical way of finding a node order that makes the tree projective. By way of illustration, the projective order for the sentence and tree in Figure 1 ", "cite_spans": [], "ref_spans": [ { "start": 794, "end": 802, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "is: A 1 < G hearing 2 < G on 5 < G the 6 < G issue 7 < G is 3 < G scheduled 4 < G today 8 < G . 9 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "If the words of a sentence x with dependency tree G are already in projective order, this means that G is projective with respect to x and that we can parse the sentence using only transitions in T p , because nodes can be pushed onto the stack in projective order using only the SHIFT transition. If the words are not in projective order, we can use a combination of SHIFT and SWAP transitions to ensure that nodes are still pushed onto the stack in projective order. 
More precisely, if the next node in the projective order is the kth node in the buffer, we perform k SHIFT transitions, to get this node onto the stack, followed by k\u22121 SWAP transitions, to move the preceding k \u2212 1 nodes back to the buffer. 1 In this way, the parser can effectively sort the input nodes into projective order on the stack, repeatedly extracting the minimal element of < G from the buffer, and build a tree that is projective with respect to the sorted order. Since any input can be sorted using SHIFT and SWAP, and any projective tree can be built using SHIFT, LEFT-ARC l and RIGHT-ARC l , the system S u is complete for the set of all dependency trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "In Figure 4 , we define an oracle function o for the system S u , which implements this \"sort and parse\" strategy and predicts the optimal transition t out of the current configuration c, given the target dependency tree G = (V x , A) and the projective order < G . The oracle predicts LEFT-ARC l or RIGHT-ARC l if the two top nodes on the stack should be connected by an arc and if the dependent node of this arc is already connected to all its dependents; it predicts SWAP if the two top nodes are not in projective order; and it predicts SHIFT otherwise. This is the oracle that has been used to generate training data for classifiers in the experimental evaluation in Section 4.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "Let us now consider the time complexity of the extended system S u = (C, T u , c s , C t ) and let us begin by observing that 2n is still a lower bound on the number of transitions required to reach a terminal configuration. 
A sequence of 2n transitions occurs when no SWAP transitions are performed, in which case the behavior of the system is identical to the simpler system S p . This is important, because it means that the best-case complexity of the deterministic parser is still O(n) and that we can expect to observe the best case for all sentences with projective dependency trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "The exact number of additional transitions needed to reach a terminal configuration is determined by the number of SWAP transitions. Since SWAP moves one node from \u03a3 to B, there will be one additional SHIFT for every SWAP, which means that the total number of transitions is 2n + 2k, where k is the number of SWAP transitions. Given the condition that SWAP can only apply in a configuration c = ([\u03c3|i, j], B, A) if 0 < i < j, the number of SWAP transitions is bounded by n(n\u22121)/2, which means that 2n + n(n \u2212 1) = n + n\u00b2 is an upper bound on the number of transitions in a terminating sequence. Hence, the worst-case complexity of the deterministic parser is O(n\u00b2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "The running time of a deterministic transition-based parser using the system S u is O(n) in the best case and O(n\u00b2) in the worst case. But what about the average case? Empirical studies, based on data from a wide range of languages, have shown that dependency trees tend to be projective and that most non-projective trees only contain a small number of discontinuities (Nivre, 2006; Kuhlmann and Nivre, 2006; Havelka, 2007) . This should mean that the expected number of swaps per sentence is small, and that the running time is linear on average for the range of inputs that occur in natural languages. 
This is a hypothesis that will be tested experimentally in the next section.", "cite_spans": [ { "start": 371, "end": 384, "text": "(Nivre, 2006;", "ref_id": "BIBREF20" }, { "start": 385, "end": 410, "text": "Kuhlmann and Nivre, 2006;", "ref_id": "BIBREF7" }, { "start": 411, "end": 425, "text": "Havelka, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Unrestricted Dependency Parsing", "sec_num": "3.2" }, { "text": "Our experiments are based on five data sets from the CoNLL-X shared task: Arabic, Czech, Danish, Slovene, and Turkish (Buchholz and Marsi, 2006) . These languages have been selected because the data come from genuine dependency treebanks, whereas all the other data sets are based on some kind of conversion from another type of representation, which could potentially distort the distribution of different types of structures in the data.", "cite_spans": [ { "start": 118, "end": 144, "text": "(Buchholz and Marsi, 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In section 3.2, we hypothesized that the expected running time of a deterministic parser using the transition system S u would be linear, rather than quadratic. To test this hypothesis, we examine how the number of transitions varies as a function of sentence length. We call this the abstract running time, since it abstracts over the actual time needed to compute each oracle prediction and transition, which is normally constant but dependent on the type of classifier used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running Time", "sec_num": "4.1" }, { "text": "We first measured the abstract running time on the training sets, using the oracle to derive the transition sequence for every sentence, to see how many transitions are required in the ideal case. 
We then performed the same measurement on the test sets, using classifiers trained on the oracle transition sequences from the training sets (as described below in Section 4.2), to see whether the trained parsers deviate from the ideal case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running Time", "sec_num": "4.1" }, { "text": "| S p | 67.3 (18.2) | 11.6 | 80.9 (3.7) | 31.2 | 84.6 (0.0) | 27.0 | 74.2 (3.4) | 29.9 | 65.3 (6.6) | 21.0 |
| S pp | 67.2 (18.2) | 11.6 | 82.1 (60.7) | 34.0 | 84.7 (22.5) | 28.9 | 74.8 (20.7) | 26.9 | 65.5 (11.8) | 20.7 |
| Malt-06 | 66.7 (18.2) | 11.0 | 78.4 (57.9) | 27.4 | 84.8 (27.5) | 26.7 | 70.3 (20.7) | 19.7 | 65.7 (9.2) | 19.3 |
| MST-06 | 66.9 (0.0) | 10.3 | 80.2 (61.7) | 29.9 | 84.8 (62.5) | 25.5 | 73.4 (26.4) | 20.9 | 63.2 (11.8) | 20.2 |
| MST Malt | 68.6 (9.4) | 11.0 | 82.3 (69.2) | 31.2 | 86.7 (60.0) | 29.8 | 75.9 (27.6) | 26.6 | 66.3 (9.2) | 18.6 |
Table 1 : Labeled accuracy; AS = attachment score (non-projective arcs in brackets); EM = exact match. The result for Arabic and Danish can be seen", "cite_spans": [], "ref_spans": [ { "start": 491, "end": 498, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Running Time", "sec_num": "4.1" }, { "text": "in Figure 5 , where black dots represent training sentences (parsed with the oracle) and white dots represent test sentences (parsed with a classifier). For Arabic there is a very clear linear relationship in both cases with very few outliers. Fitting the data with a linear function using the least squares method gives us m = 2.06n (R^2 = 0.97) for the training data and m = 2.02n (R^2 = 0.98) for the test data, where m is the number of transitions in parsing a sentence of length n. For Danish, there is clearly more variation, especially for the training data, but the least-squares approximation still explains most of the variance, with m = 2.22n (R^2 = 0.85) for the training data and m = 2.07n (R^2 = 0.96) for the test data.
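Fits of this form can be reproduced with ordinary least squares through the origin. This is a self-contained sketch with our own helper function; the paper does not specify its fitting tool beyond "the least squares method".

```python
def fit_linear(lengths, transitions):
    """Fit m = c * n by least squares (line through the origin) and
    return (c, R^2), where R^2 is the coefficient of determination
    measuring how much of the variance the linear fit explains."""
    c = sum(n * m for n, m in zip(lengths, transitions)) \
        / sum(n * n for n in lengths)
    mean_m = sum(transitions) / len(transitions)
    ss_res = sum((m - c * n) ** 2 for n, m in zip(lengths, transitions))
    ss_tot = sum((m - mean_m) ** 2 for m in transitions)
    return c, 1.0 - ss_res / ss_tot
```

For a strictly projective parser every sentence has m = 2n exactly, so the fit would return c = 2 and R^2 = 1; coefficients only marginally above 2 therefore indicate near-best-case behavior.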
For both languages, we thus see that the classifier-based parsers have a lower mean number of transitions and less variance than the oracle parsers. And in both cases, the expected number of transitions is only marginally greater than the 2n of the strictly projective transition system S p .", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Running Time", "sec_num": "4.1" }, { "text": "We have chosen to display results for Arabic and Danish because they are the two extremes in our sample. Arabic has the smallest variance and the smallest linear coefficients, and Danish has the largest variance and the largest coefficients. The remaining three languages all lie somewhere in the middle, with Czech being closer to Arabic and Slovene closer to Danish. Together, the evidence from all five languages strongly corroborates the hypothesis that the expected running time for the system S u is linear in sentence length for naturally occurring data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Running Time", "sec_num": "4.1" }, { "text": "In order to assess the parsing accuracy that can be achieved with the new transition system, we trained a deterministic parser using S u for each of the five languages. For comparison, we also trained two parsers using S p , one that is strictly projective and one that uses the pseudo-projective parsing technique to recover non-projective dependencies in a post-processing step (Nivre and Nilsson, 2005) . We will refer to the latter system as S pp . All systems use SVM classifiers with a polynomial kernel to approximate the oracle function, with features and parameters taken from Nivre et al. (2006), which was the best performing transition-based system in the CoNLL-X shared task.
2 Table 1 shows the labeled parsing accuracy of the parsers measured in two ways: attachment score (AS) is the percentage of tokens with the correct head and dependency label; exact match (EM) is the percentage of sentences with a completely correct labeled dependency tree. The score in brackets is the attachment score for the (small) subset of tokens that are connected to their head by a non-projective arc in the gold standard parse. For comparison, the table also includes results for the two best performing systems in the original CoNLL-X shared task, Malt-06 and MST-06 (McDonald et al., 2006) , as well as the integrated system MST Malt , which is a graph-based parser guided by the predictions of a transition-based parser and currently has the best reported results on the CoNLL-X data sets (Nivre and McDonald, 2008) .", "cite_spans": [ { "start": 406, "end": 431, "text": "(Nivre and Nilsson, 2005)", "ref_id": "BIBREF16" }, { "start": 1268, "end": 1298, "text": "MST-06 (McDonald et al., 2006)", "ref_id": null }, { "start": 1499, "end": 1525, "text": "(Nivre and McDonald, 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 698, "end": 705, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Parsing Accuracy", "sec_num": "4.2" }, { "text": "Looking first at the overall attachment score, we see that S u gives a substantial improvement over S p (and outperforms S pp ) for Czech and Slovene, where the scores achieved are rivaled only by the combo system MST Malt . For these languages, there is no statistical difference between S u and MST Malt , which are both significantly better than all the other parsers, except S pp for Czech (Mc-Nemar's test, \u03b1 = .05). This is accompanied by an improvement on non-projective arcs, where S u outperforms all other systems for Czech and is second only to the two MST parsers (MST-06 and MST Malt ) for Slovene. 
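The two evaluation measures just defined can be stated in a few lines. The following is a simplified scorer of our own for illustration, not the official CoNLL-X evaluation script (which additionally handles punctuation and scoring conventions):

```python
def attachment_score(gold, pred):
    """Labeled attachment score (AS): percentage of tokens whose
    predicted (head, label) pair matches the gold standard.
    gold and pred map token positions to (head, label) pairs."""
    correct = sum(1 for t in gold if pred.get(t) == gold[t])
    return 100.0 * correct / len(gold)

def exact_match(gold_trees, pred_trees):
    """Exact match (EM): percentage of sentences whose entire
    labeled dependency tree is predicted correctly."""
    hits = sum(1 for g, p in zip(gold_trees, pred_trees) if g == p)
    return 100.0 * hits / len(gold_trees)
```

EM is the stricter measure: one wrong head or label anywhere in a sentence makes the whole sentence count as incorrect, which is why a parser can gain on EM even while its AS stays flat.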
It is worth noting that the percentage of non-projective arcs is higher for Czech (1.9%) and Slovene (1.9%) than for any of the other languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Accuracy", "sec_num": "4.2" }, { "text": "For the other three languages, S u has a drop in overall attachment score compared to S p , but none of these differences is statistically significant. In fact, the only significant differences in attachment score here are the positive differences between MST Malt and all other systems for Arabic and Danish, and the negative difference between MST-06 and all other systems for Turkish. The attachment scores for non-projective arcs are generally very low for these languages, except for the two MST parsers on Danish, but S u performs at least as well as S pp on Danish and Turkish. (The results for Arabic are not very meaningful, given that there are only eleven non-projective arcs in the entire test set, of which the (pseudo-)projective parsers found two and S u one, while MST Malt and MST-06 found none at all.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Accuracy", "sec_num": "4.2" }, { "text": "Considering the exact match scores, finally, it is very interesting to see that S u almost consistently outperforms all other parsers, including the combo system MST Malt , and sometimes by a fairly wide margin (Czech, Slovene). The difference is statistically significant with respect to all other systems except MST Malt for Slovene, all except MST Malt and S pp for Czech, and with respect to MST Malt for Turkish. For Arabic and Danish, there are no significant differences in the exact match scores. 
We conclude that S u may increase the probability of finding a completely correct analysis, which is sometimes reflected also in the overall attachment score, and we conjecture that the strength of the positive effect is dependent on the frequency of non-projective arcs in the language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing Accuracy", "sec_num": "4.2" }, { "text": "Processing non-projective trees by swapping the order of words has recently been proposed by both Nivre (2008b) and Titov et al. (2009) , but these systems cannot handle unrestricted non-projective trees. It is worth pointing out that, although the system described in Nivre (2008b) uses four transitions bearing the same names as the transitions of S u , the two systems are not equivalent. In particular, the system of Nivre (2008b) is sound but not complete for the class of all dependency trees.", "cite_spans": [ { "start": 98, "end": 111, "text": "Nivre (2008b)", "ref_id": "BIBREF23" }, { "start": 116, "end": 135, "text": "Titov et al. (2009)", "ref_id": "BIBREF25" }, { "start": 269, "end": 282, "text": "Nivre (2008b)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "There are also affinities to the system of Attardi (2006) , which combines non-adjacent nodes on the stack instead of swapping nodes and is equivalent to a restricted version of our system, where no more than two consecutive SWAP transitions are permitted. This restriction preserves linear worstcase complexity at the expense of completeness. 
Finally, the algorithm first described by Covington (2001) and used for data-driven parsing by Nivre (2007) is complete but has quadratic complexity even in the best case.", "cite_spans": [ { "start": 43, "end": 57, "text": "Attardi (2006)", "ref_id": "BIBREF0" }, { "start": 386, "end": 402, "text": "Covington (2001)", "ref_id": "BIBREF2" }, { "start": 439, "end": 451, "text": "Nivre (2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We have presented a novel transition system for dependency parsing that can handle unrestricted non-projective trees. The system reuses standard techniques for building projective trees by combining adjacent nodes (representing subtrees with adjacent yields), but adds a simple mechanism for swapping the order of nodes on the stack. The resulting system is sound and complete for the set of all dependency trees over a given label set, but behaves exactly like the standard system for the subset of projective trees. As a result, the time complexity of deterministic parsing is O(n^2) in the worst case, which is rare, but O(n) in the best case, which is common, and experimental results on data from five languages support the conclusion that expected running time is linear in the length of the sentence. Experimental results also show that parsing accuracy is competitive, especially for languages like Czech and Slovene where non-projective dependency structures are common, and especially with respect to the exact match score, where it has the best reported results for four out of five languages.
Finally, the simplicity of the system makes it very easy to implement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Future research will include an in-depth error analysis to find out why the system works better for some languages than others and why the exact match score improves even when the attachment score goes down. In addition, we want to explore alternative oracle functions, which try to minimize the number of swaps by allowing the stack to be temporarily \"unsorted\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This can be seen in Figure 3, where transitions 4-8, 9-13, and 14-18 are the transitions needed to make sure that on_5, the_6 and issue_7 are processed on the stack before is_3 and scheduled_4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Complete information about experimental settings can be found at http://stp.lingfil.uu.se/\u223cnivre/exp/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Thanks to Johan Hall and Jens Nilsson for help with implementation and evaluation, and to Marco Kuhlmann and three anonymous reviewers for useful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Experiments with a multilanguage non-projective dependency parser", "authors": [ { "first": "Giuseppe", "middle": [], "last": "Attardi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "166--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser.
In Pro- ceedings of CoNLL, pages 166-170.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "CoNLL-X shared task on multilingual dependency parsing", "authors": [ { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" }, { "first": "Erwin", "middle": [], "last": "Marsi", "suffix": "" } ], "year": 2006, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "149--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL, pages 149-164.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A fundamental algorithm for dependency parsing", "authors": [ { "first": "Michael", "middle": [ "A" ], "last": "Covington", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 39th Annual ACM Southeast Conference", "volume": "", "issue": "", "pages": "95--102", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael A. Covington. 2001. A fundamental algo- rithm for dependency parsing. In Proceedings of the 39th Annual ACM Southeast Conference, pages 95- 102.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Parsing mildly non-projective dependency structures", "authors": [ { "first": "Carlos", "middle": [], "last": "G\u00f3mez-Rodr\u00edguez", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" }, { "first": "John", "middle": [], "last": "Carroll", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "291--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, David Weir, and John Car- roll. 2009. Parsing mildly non-projective depen- dency structures. 
In Proceedings of EACL, pages 291-299.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Corrective modeling for non-projective dependency parsing", "authors": [ { "first": "Keith", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Vaclav", "middle": [], "last": "Nov\u00e1k", "suffix": "" } ], "year": 2005, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "42--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Hall and Vaclav Nov\u00e1k. 2005. Corrective mod- eling for non-projective dependency parsing. In Proceedings of IWPT, pages 42-52.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Beyond projectivity: Multilingual evaluation of constraints and measures on nonprojective structures", "authors": [ { "first": "Jiri", "middle": [], "last": "Havelka", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "608--615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiri Havelka. 2007. Beyond projectivity: Multilin- gual evaluation of constraints and measures on non- projective structures. In Proceedings of the 45th An- nual Meeting of the Association of Computational Linguistics, pages 608-615.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Incremental dependency parsing using online learning", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Shared Task of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "1134--1138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Pierre Nugues. 2007. Incre- mental dependency parsing using online learning. 
In Proceedings of the Shared Task of EMNLP-CoNLL, pages 1134-1138.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mildly non-projective dependency structures", "authors": [ { "first": "Marco", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the COLING/ACL Main Conference Poster Sessions", "volume": "", "issue": "", "pages": "507--514", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Kuhlmann and Joakim Nivre. 2006. Mildly non-projective dependency structures. In Proceed- ings of the COLING/ACL Main Conference Poster Sessions, pages 507-514.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Treebank grammar techniques for non-projective dependency parsing", "authors": [ { "first": "Marco", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "478--486", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Kuhlmann and Giorgio Satta. 2009. Treebank grammar techniques for non-projective dependency parsing. In Proceedings of EACL, pages 478-486.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Online learning of approximate dependency parsing algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algo- rithms. 
In Proceedings of EACL, pages 81-88.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On the complexity of non-projective data-driven dependency parsing", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" } ], "year": 2007, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "122--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald and Giorgio Satta. 2007. On the com- plexity of non-projective data-driven dependency parsing. In Proceedings of IWPT, pages 122-131.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Online large-margin training of dependency parsers", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "91--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of de- pendency parsers. In Proceedings of ACL, pages 91- 98.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Non-projective dependency parsing using spanning tree algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Kiril", "middle": [], "last": "Ribarov", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005b. Non-projective dependency pars- ing using spanning tree algorithms. 
In Proceedings of HLT/EMNLP, pages 523-530.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Multilingual dependency analysis with a two-stage discriminative parser", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Lerman", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2006, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "216--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a two-stage discriminative parser. In Proceedings of CoNLL, pages 216-220.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The complexity of recognition of linguistically adequate dependency grammars", "authors": [ { "first": "Peter", "middle": [], "last": "Neuhaus", "suffix": "" }, { "first": "Norbert", "middle": [], "last": "Br\u00f6ker", "suffix": "" } ], "year": 1997, "venue": "Proceedings of ACL/EACL", "volume": "", "issue": "", "pages": "337--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Neuhaus and Norbert Br\u00f6ker. 1997. The com- plexity of recognition of linguistically adequate de- pendency grammars. In Proceedings of ACL/EACL, pages 337-343.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Integrating graph-based and transition-based dependency parsers", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "950--958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre and Ryan McDonald. 2008. Integrat- ing graph-based and transition-based dependency parsers. 
In Proceedings of ACL, pages 950-958.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Pseudoprojective dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "99--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo- projective dependency parsing. In Proceedings of ACL, pages 99-106.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Memory-based dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" } ], "year": 2004, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "49--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of CoNLL, pages 49-56.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Labeled pseudo-projective dependency parsing with support vector machines", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Nilsson", "suffix": "" }, { "first": "G\u00fclsen", "middle": [], "last": "Eryigit", "suffix": "" }, { "first": "Svetoslav", "middle": [], "last": "Marinov", "suffix": "" } ], "year": 2006, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "221--225", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Johan Hall, Jens Nilsson, G\u00fclsen Eryigit, and Svetoslav Marinov. 2006. Labeled pseudo-projective dependency parsing with support vector machines. 
In Proceedings of CoNLL, pages 221-225.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Incrementality in deterministic dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together (ACL)", "volume": "", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Work- shop on Incremental Parsing: Bringing Engineering and Cognition Together (ACL), pages 50-57.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Constraints on non-projective dependency graphs", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2006, "venue": "Proceedings of EACL", "volume": "", "issue": "", "pages": "73--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2006. Constraints on non-projective de- pendency graphs. In Proceedings of EACL, pages 73-80.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Incremental non-projective dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2007, "venue": "Proceedings of NAACL HLT", "volume": "", "issue": "", "pages": "396--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2007. Incremental non-projective de- pendency parsing. In Proceedings of NAACL HLT, pages 396-403.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Algorithms for deterministic incremental dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "", "pages": "513--553", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2008a. 
Algorithms for deterministic in- cremental dependency parsing. Computational Lin- guistics, 34:513-553.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Sorting out dependency parsing", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 6th International Conference on Natural Language Processing (GoTAL)", "volume": "", "issue": "", "pages": "16--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre. 2008b. Sorting out dependency pars- ing. In Proceedings of the 6th International Con- ference on Natural Language Processing (GoTAL), pages 16-27.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A latent variable model for generative dependency parsing", "authors": [ { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "James", "middle": [], "last": "Henderson", "suffix": "" } ], "year": 2007, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "144--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Titov and James Henderson. 2007. A latent vari- able model for generative dependency parsing. In Proceedings of IWPT, pages 144-155.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Online graph planarization for synchronous parsing of semantic and syntactic dependencies", "authors": [ { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "James", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Paola", "middle": [], "last": "Merlo", "suffix": "" }, { "first": "Gabriele", "middle": [], "last": "Musillo", "suffix": "" } ], "year": 2009, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Titov, James Henderson, Paola Merlo, and Gabriele Musillo. 2009. Online graph planarization for synchronous parsing of semantic and syntactic dependencies. 
In Proceedings of IJCAI.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Statistical dependency analysis with support vector machines", "authors": [ { "first": "Hiroyasu", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IWPT", "volume": "", "issue": "", "pages": "195--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195-206.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Transitions for dependency parsing;", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Transition sequence for parsing the sentence in Figure 1 (LA = LEFT-ARC, RA = RIGHT-ARC).", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Abstract running time during training (black) and parsing (white) for Arabic (1460/146 sentences) and Danish (5190/322 sentences).", "num": null, "type_str": "figure" }, "TABREF0": { "content": "
| Arabic | Czech | Danish | Slovene | Turkish | ||||||
| System | AS | EM | AS | EM | AS | EM | AS | EM | AS | EM |
| S u | 67.1 (9.1)