| { |
| "paper_id": "J13-1006", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:19:00.506662Z" |
| }, |
| "title": "Data-Driven Parsing using Probabilistic Linear Context-Free Rewriting Systems", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Kallmeyer", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "kallmeyer@phil.uni-duesseldorf.de." |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "maierw@hhu.de.submission" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents the first efficient implementation of a weighted deductive CYK parser for Probabilistic Linear Context-Free Rewriting Systems (PLCFRSs). LCFRS, an extension of CFG, can describe discontinuities in a straightforward way and is therefore a natural candidate to be used for data-driven parsing. To speed up parsing, we use different context-summary estimates of parse items, some of them allowing for A * parsing. We evaluate our parser with grammars extracted from the German NeGra treebank. Our experiments show that data-driven LCFRS parsing is feasible and yields output of competitive quality.", |
| "pdf_parse": { |
| "paper_id": "J13-1006", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents the first efficient implementation of a weighted deductive CYK parser for Probabilistic Linear Context-Free Rewriting Systems (PLCFRSs). LCFRS, an extension of CFG, can describe discontinuities in a straightforward way and is therefore a natural candidate to be used for data-driven parsing. To speed up parsing, we use different context-summary estimates of parse items, some of them allowing for A * parsing. We evaluate our parser with grammars extracted from the German NeGra treebank. Our experiments show that data-driven LCFRS parsing is feasible and yields output of competitive quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Recently, the challenges that a rich morphology poses for data-driven parsing have received growing interest. A direct effect of morphological richness is, for instance, data sparseness on a lexical level (Candito and Seddah 2010). A rather indirect effect is that morphological richness often relaxes word order constraints. The principal intuition is that a rich morphology encodes information that otherwise has to be conveyed by a particular word order. If, for instance, the case of a nominal complement is not provided by morphology, it has to be provided by the position of the complement relative to other complements in the sentence. Example (1) provides an example of case marking and free word order in German. In turn, in free word order languages, word order can encode information structure (Hoffman 1995 It is assumed that this relation between a rich morphology and free word order does not hold in both directions. Although it is generally the case that languages with a rich morphology exhibit a high degree of freedom in word order, languages with a free word order do not necessarily have a rich morphology. Two examples for languages with a very free word order are Turkish and Bulgarian. The former has a very rich and the latter a sparse morphology. See M\u00fcller (2002) for a survey of the linguistics literature on this discussion.", |
| "cite_spans": [ |
| { |
| "start": 805, |
| "end": 818, |
| "text": "(Hoffman 1995", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1277, |
| "end": 1290, |
| "text": "M\u00fcller (2002)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "With a rather free word order, constituents and single parts of them can be displaced freely within the sentence. German, for instance, has a rich inflectional system and allows for a free word order, as we have already seen in Example (1): Arguments can be scrambled, and topicalizations and extrapositions underlie few restrictions. Consequently, discontinuous constituents occur frequently. This is challenging for syntactic description in general (Uszkoreit 1986; Becker, Joshi, and Rambow 1991; Bunt 1996; M\u00fcller 2004) , and for treebank annotation in particular (Skut et al. 1997) .", |
| "cite_spans": [ |
| { |
| "start": 451, |
| "end": 467, |
| "text": "(Uszkoreit 1986;", |
| "ref_id": "BIBREF62" |
| }, |
| { |
| "start": 468, |
| "end": 499, |
| "text": "Becker, Joshi, and Rambow 1991;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 500, |
| "end": 510, |
| "text": "Bunt 1996;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 511, |
| "end": 523, |
| "text": "M\u00fcller 2004)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 568, |
| "end": 586, |
| "text": "(Skut et al. 1997)", |
| "ref_id": "BIBREF59" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this paper, we address the problem of data-driven parsing of discontinuous constituents on the basis of German. In this section, we inspect the type of data we have to deal with, and we describe the way such data are annotated in treebanks. We briefly discuss different parsing strategies for the data in question and motivate our own approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Consider the sentences in Example (2) as examples for discontinuous constituents (taken from the German NeGra [Skut et al. 1997] and TIGER [Brants et al. 2002 ] treebanks). Example (2a) shows several instances of discontinuous VPs and Example 2bshows a discontinuous NP. The relevant constituent is printed in italics.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 128, |
| "text": "[Skut et al. 1997]", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 139, |
| "end": 158, |
| "text": "[Brants et al. 2002", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discontinuous Constituents", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "(2) a. Fronting: Examples of other such languages are Bulgarian and Korean. Both show discontinuous constituents as well. Example (3a) is a Bulgarian example of a PP extracted out of an NP, taken from the BulTreebank (Osenova and Simov 2004) , and Example (3b) is an example of fronting in Korean, taken from the Penn Korean Treebank (Han, Han, and Ko 2001) . 3a. Discontinuous constituents are by no means limited to languages with freedom in word order. They also occur in languages with a rather fixed word order such as English, resulting from, for instance, long-distance movements. Examples (4a) and (4b) are examples from the Penn Treebank for long extractions resulting in discontinuous S categories and for discontinuous NPs arising from extraposed relative clauses, respectively (Marcus et al. 1994) . 4a. Long Extraction in English:", |
| "cite_spans": [ |
| { |
| "start": 217, |
| "end": 241, |
| "text": "(Osenova and Simov 2004)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 334, |
| "end": 357, |
| "text": "(Han, Han, and Ko 2001)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 789, |
| "end": 809, |
| "text": "(Marcus et al. 1994)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discontinuous Constituents", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "(i) Those chains include Bloomingdale's, which Campeau recently said it will sell.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discontinuous Constituents", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "(ii) What should I do.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discontinuous Constituents", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "b. Extraposed nominal modifiers (relative clauses and PPs) in English:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discontinuous Constituents", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "(i) They sow a row of male-fertile plants nearby, which then pollinate the malesterile plants.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discontinuous Constituents", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "(ii) Prices fell marginally for fuel and electricity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discontinuous Constituents", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Most constituency treebanks rely on an annotation backbone based on Context-Free Grammar (CFG). Discontinuities cannot be modeled with CFG, because they require a larger domain of locality than the one offered by CFG. Therefore, the annotation backbone based on CFG is generally augmented with a separate mechanism that accounts for the non-local dependencies. In the Penn Treebank (PTB), for example, trace nodes and co-indexation markers are used in order to establish additional implicit edges in the tree beyond the overt phrase structure. In T\u00fcBa-D/Z (Telljohann et al. 2012) , a German Treebank, non-local dependencies are expressed via an annotation of topological fields (H\u00f6hle 1986 ) and special edge labels. In contrast, some other treebanks, among them NeGra and TIGER, give up the annotation backbone based on CFG and allow annotation with crossing branches (Skut et al. 1997) . In such an annotation, non-local dependencies can be expressed directly by grouping all dependent elements under a single node. Note that both crossing branches and traces annotate long-distance dependencies in a linguistically meaningful way. A difference is, however, that crossing branches are less theory-dependent because they do not make any assumptions about the base positions of \"moved\" elements. Examples for the different approaches of annotating discontinuities are given in annotation of the same sentence in the style of the T\u00fcBa-D/Z treebank (right). Figure 2 shows the PTB annotation of Example (4a-ii) (on the left, note that the directed edge from the trace to the WHNP element visualizes the co-indexation) together with a NeGra-style annotation of the same sentence (right). In the past, data-driven parsing has largely been dominated by Probabilistic Context-Free Grammar (PCFG). In order to extract a PCFG from a treebank, the trees need to be interpretable as CFG derivations. 
Consequently, most work has excluded non-local dependencies; either (in PTB-like treebanks) by discarding labeling conventions such as the co-indexation of the trace nodes in the PTB, or (in NeGra/TIGER-like treebanks) by applying tree transformations, which resolve the crossing branches (e.g., K\u00fcbler 2005; Boyd 2007) . Especially for the latter treebanks, such a transformation is problematic, because it generally is non-reversible and implies information loss.", |
| "cite_spans": [ |
| { |
| "start": 556, |
| "end": 580, |
| "text": "(Telljohann et al. 2012)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 679, |
| "end": 690, |
| "text": "(H\u00f6hle 1986", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 870, |
| "end": 888, |
| "text": "(Skut et al. 1997)", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 2187, |
| "end": 2199, |
| "text": "K\u00fcbler 2005;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 2200, |
| "end": 2210, |
| "text": "Boyd 2007)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1457, |
| "end": 1465, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Treebank Annotation and Data-Driven Parsing", |
| "sec_num": "1.2" |
| }, |
| { |
| "text": "Discontinuities are no minor phenomenon: Approximately 25% of all sentences in NeGra and TIGER have crossing branches (Maier and Lichte 2011) . In the Penn Treebank, this holds for approximately 20% of all sentences (Evang and Kallmeyer 2011) . This shows that it is important to properly treat such structures.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 141, |
| "text": "(Maier and Lichte 2011)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 216, |
| "end": 242, |
| "text": "(Evang and Kallmeyer 2011)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Treebank Annotation and Data-Driven Parsing", |
| "sec_num": "1.2" |
| }, |
| { |
| "text": "In the literature, different methods have been explored that allow for the use of nonlocal information in data-driven parsing. We distinguish two classes of approaches. The first class consists of approaches that aim at using formalisms which produce trees without crossing branches but provide a larger domain of locality than CFGfor instance, through complex labels (Hockenmaier 2003) or through the derivation CFG:", |
| "cite_spans": [ |
| { |
| "start": 368, |
| "end": 386, |
| "text": "(Hockenmaier 2003)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extending the Domain of Locality", |
| "sec_num": "1.3" |
| }, |
| { |
| "text": "A \u03b3 LCFRS: \u2022 A \u2022 \u2022 \u03b3 1 \u03b3 2 \u03b3 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extending the Domain of Locality", |
| "sec_num": "1.3" |
| }, |
| { |
| "text": "Different domains of locality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "mechanism (Chiang 2003) . The second class, to which we contribute in this paper, consists of approaches that aim at producing trees which contain non-local information. Some methods realize the reconstruction of non-local information in a post-or preprocessing step to PCFG parsing (Johnson 2002; Dienes 2003; Levy and Manning 2004; Cai, Chiang, and Goldberg 2011) . Other work uses formalisms that accommodate the direct encoding of non-local information (Plaehn 2004; Levy 2005) . We pursue the latter approach.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 23, |
| "text": "(Chiang 2003)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 283, |
| "end": 297, |
| "text": "(Johnson 2002;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 298, |
| "end": 310, |
| "text": "Dienes 2003;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 311, |
| "end": 333, |
| "text": "Levy and Manning 2004;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 334, |
| "end": 365, |
| "text": "Cai, Chiang, and Goldberg 2011)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 457, |
| "end": 470, |
| "text": "(Plaehn 2004;", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 471, |
| "end": 481, |
| "text": "Levy 2005)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "Our work is motivated by the following recent developments. Linear Context-Free Rewriting Systems (LCFRSs) (Vijay-Shanker, Weir, and Joshi 1987) have been established as a candidate for modeling both discontinuous constituents and non-projective dependency trees as they occur in treebanks (Maier and S\u00f8gaard 2008; Kuhlmann and Satta 2009; Maier and Lichte 2011) . LCFRSs are a natural extension of CFGs where the non-terminals can span tuples of possibly non-adjacent strings (see Figure 3 ). Because LCFRSs allow for binarization and CYK chart parsing in a way similar to CFGs, PCFG techniques, such as best-first parsing (Caraballo and Charniak 1998), weighted deductive parsing (Nederhof 2003) , and A * parsing (Klein and Manning 2003a) can be transferred to LCFRS. Finally, as mentioned before, languages such as German have recently attracted the interest of the parsing community (K\u00fcbler and Penn 2008; Seddah, K\u00fcbler, and Tsarfaty 2010) .", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 144, |
| "text": "(Vijay-Shanker, Weir, and Joshi 1987)", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 290, |
| "end": 314, |
| "text": "(Maier and S\u00f8gaard 2008;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 315, |
| "end": 339, |
| "text": "Kuhlmann and Satta 2009;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 340, |
| "end": 362, |
| "text": "Maier and Lichte 2011)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 682, |
| "end": 697, |
| "text": "(Nederhof 2003)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 716, |
| "end": 741, |
| "text": "(Klein and Manning 2003a)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 888, |
| "end": 910, |
| "text": "(K\u00fcbler and Penn 2008;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 911, |
| "end": 945, |
| "text": "Seddah, K\u00fcbler, and Tsarfaty 2010)", |
| "ref_id": "BIBREF55" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 482, |
| "end": 490, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "We bring together these developments by presenting a parser for Probabilistic LCFRS (PLCFRS), continuing the promising work of Levy (2005) . Our parser produces trees with crossing branches and thereby accounts for syntactic long-distance dependencies while not making any additional assumptions concerning the position of hypothetical traces. We have implemented a CYK parser and we present several methods for context summary estimation of parse items. The estimates either act as figures-of-merit in a best-first parsing context or as estimates for A * parsing. A test on a real-world-sized data set shows that our parser achieves competitive results. To our knowledge, our parser is the first for the entire class of PLCFRS that has successfully been used for data-driven parsing. 1 The paper is structured as follows. Section 2 introduces probabilistic LCFRS. Sections 3 and 4 present the binarization algorithm, the parser, and the outside estimates which we use to speed up parsing. In Section 5 we explain how to extract an LCFRS from a treebank and we present grammar refinement methods for these specific treebank grammars. Finally, Section 6 presents evaluation results and Section 7 compares our work to other approaches.", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 138, |
| "text": "Levy (2005)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 785, |
| "end": 786, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "1 Parts of the results presented in this paper have been presented earlier. More precisely, in Kallmeyer and Maier (2010) , we presented the general architecture of the parser and all outside estimates except the LN estimate from Section 4.4 which is presented in Maier, Kaeshammer, and Kallmeyer (2012) . In Maier and Kallmeyer (2010) we have presented experiments with the relative clause split from Section 3.2. Finally, Maier (2010) contains the evaluation of the baseline (together with an evaluation using other metrics).", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 121, |
| "text": "Kallmeyer and Maier (2010)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 264, |
| "end": 303, |
| "text": "Maier, Kaeshammer, and Kallmeyer (2012)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 309, |
| "end": 335, |
| "text": "Maier and Kallmeyer (2010)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 424, |
| "end": 436, |
| "text": "Maier (2010)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 3", |
| "sec_num": null |
| }, |
| { |
| "text": "LCFRS (Vijay-Shanker, Weir, and Joshi 1987) is an extension of CFG in which a nonterminal can span not only a single string but a tuple of strings of size k \u2265 1. k is thereby called its fan-out. We will notate LCFRS with the syntax of Simple Range Concatenation Grammars (SRCG) (Boullier 1998b) , a formalism that is equivalent to LCFRS. A third formalism that is equivalent to LCFRS is Multiple Context-Free Grammar (MCFG) (Seki et al. 1991) .", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 43, |
| "text": "(Vijay-Shanker, Weir, and Joshi 1987)", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 278, |
| "end": 294, |
| "text": "(Boullier 1998b)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 424, |
| "end": 442, |
| "text": "(Seki et al. 1991)", |
| "ref_id": "BIBREF56" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition of PLCFRS", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "A Linear Context-Free Rewriting System (LCFRS) is a tuple N, T, V, P, S where a) N is a finite set of non-terminals with a function dim: N \u2192 N that determines the fan-out of each A \u2208 N; b) T and V are disjoint finite sets of terminals and variables; c) S \u2208 N is the start symbol with dim(S) = 1;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1 (LCFRS)", |
| "sec_num": null |
| }, |
| { |
| "text": "d) P is a finite set of rules (A) . For all r \u2208 P, it holds that every variable X occurring in r occurs exactly once in the left-hand side and exactly once in the right-hand side of r.", |
| "cite_spans": [ |
| { |
| "start": 30, |
| "end": 33, |
| "text": "(A)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1 (LCFRS)", |
| "sec_num": null |
| }, |
| { |
| "text": "A(\u03b1 1 , . . . , \u03b1 dim(A) ) \u2192 A 1 (X (1) 1 , . . . , X (1) dim(A 1 ) ) \u2022 \u2022 \u2022 A m (X (m) 1 , . . . , X (m) dim(A m ) ) for m \u2265 0 where A, A 1 , . . . , A m \u2208 N, X (i) j \u2208 V for 1 \u2264 i \u2264 m, 1 \u2264 j \u2264 dim(A i ) and \u03b1 i \u2208 (T \u222a V) * for 1 \u2264 i \u2264 dim", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1 (LCFRS)", |
| "sec_num": null |
| }, |
| { |
| "text": "A rewriting rule describes how the yield of the left-hand side non-terminal can be computed from the yields of the right-hand side non-terminals. The rules A(ab, cd) \u2192 \u03b5 and A(aXb, cYd) \u2192 A(X, Y) from Figure 4 for instance specify that (1) ab, cd is in the yield of A and (2) one can compute a new tuple in the yield of A from an already existing one by wrapping a and b around the first component and c and d around the second. A CFG rule A \u2192 BC would be written A(XY) \u2192 B(X)C(Y) as an LCFRS rule.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 201, |
| "end": 209, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Definition 1 (LCFRS)", |
| "sec_num": null |
| }, |
| { |
| "text": "Let G = N, T, V, P, S be an LCFRS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 2 (Yield, language)", |
| "sec_num": null |
| }, |
| { |
| "text": "For every A \u2208 N, we define the yield of A, yield(A) as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "a) For every rule A( \u03b1) \u2192 \u03b5, \u03b1 \u2208 yield(A); A(ab, cd) \u2192 \u03b5 A(aXb, cYd) \u2192 A(X, Y) S(XY) \u2192 A(X, Y) Figure 4 Sample LCFRS for {a n b n c n d n | n \u2265 1}. b) For every rule A(\u03b1 1 , . . . , \u03b1 dim(A) ) \u2192 A 1 (X (1) 1 , . . . , X (1) dim(A 1 ) ) \u2022 \u2022 \u2022 A m (X (m) 1 , . . . , X (m) dim(A m ) ) and for all \u03c4 i \u2208 yield(A i ) (1 \u2264 i \u2264 m): f (\u03b1 1 ), . . . , f (\u03b1 dim(A) ) \u2208 yield(A)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "where f is defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "(i) f (t) = t for all t \u2208 T, (ii) f (X (i) j ) = \u03c4 i (j) for all 1 \u2264 i \u2264 m, 1 \u2264 j \u2264 dim(A i ) and (iii) f (xy) = f (x)f (y) for all x, y \u2208 (T \u222a V) + .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "We call f the composition function of the rule.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "Nothing else is in yield(A).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "c)", |
| "sec_num": null |
| }, |
| { |
| "text": "The language of G is then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "L(G) = {w | w \u2208 yield(S)}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "As an example, consider again the LCFRS in Figure 4 . The last rule tells us that, given a pair in the yield of A, we can obtain an element in the yield of S by concatenating the two components. Consequently, the language generated by this grammar is", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 43, |
| "end": 51, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "{a n b n c n d n | n \u2265 1}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "The terms of grammar fan-out and rank and the properties of monotonicity and \u03b5-freeness will be referred to later and are therefore introduced in the following definition. They are taken from the LCFRS/MCFG terminology; the SRCG term for fan-out is arity and the property of being monotone is called ordered in the context of SRCG.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Let G = N, T, V, P, S be an LCFRS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 3", |
| "sec_num": null |
| }, |
| { |
| "text": "The fan-out of G is the maximal fan-out of all non-terminals in G.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "1.", |
| "sec_num": null |
| }, |
| { |
| "text": "Furthermore, the right-hand side length of a rewriting rule r \u2208 P is called the rank of r and the maximal rank of all rules in P is called the rank of G.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "G is monotone if for every r \u2208 P and every right-hand side non-terminal A in r and each pair X 1 , X 2 of arguments of A in the right-hand side of r, X 1 precedes X 2 in the right-hand side iff X 1 precedes X 2 in the left-hand side.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "A rule r \u2208 P is called an \u03b5-rule if one of the left-hand side components of r is \u03b5. G is \u03b5-free if it either contains no \u03b5-rules or there is exactly one \u03b5-rule S(\u03b5) \u2192 \u03b5 and S does not appear in any of the right-hand sides of the rules in the grammar.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "For every LCFRS there exists an equivalent LCFRS that is \u03b5-free (Seki et al. 1991; Boullier 1998a ) and monotone (Michaelis 2001; Kracht 2003; .", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 82, |
| "text": "(Seki et al. 1991;", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 83, |
| "end": 97, |
| "text": "Boullier 1998a", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 113, |
| "end": 129, |
| "text": "(Michaelis 2001;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 130, |
| "end": 142, |
| "text": "Kracht 2003;", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "The definition of a probabilistic LCFRS is a straightforward extension of the definition of PCFG and thus it follows (Levy 2005; Kato, Seki, and Kasami 2006) that: ", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 128, |
| "text": "(Levy 2005;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 129, |
| "end": 157, |
| "text": "Kato, Seki, and Kasami 2006)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "Definition 4 (PLCFRS) A probabilistic LCFRS (PLCFRS) is a tuple N, T, V, P, S, p such that N, T, V", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03a3 A( x)\u2192 \u03a6\u2208P p(A( x) \u2192 \u03a6) = 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "PLCFRS with non-terminals {S, A, B}, terminals {a} and start symbol S: As an example, consider the PLCFRS in Figure 5 . This grammar simply generates a + . Words with an even number of as and nested dependencies are more probable than words with a right-linear dependency structure. For instance, the word aa receives the two analyses in Figure 6 . The analysis (a) displaying nested dependencies has probability 0.16 and (b) (right-linear dependencies) has probability 0.042.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 109, |
| "end": 117, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 338, |
| "end": 346, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "0.2 : S(X) \u2192 A(X) 0 .8 : S(XY) \u2192 B(X, Y) 0.7 : A(aX) \u2192 A(X) 0 .3 : A(a) \u2192 \u03b5 0.8 : B(aX, aY) \u2192 B(X, Y) 0.2 : B(a, a) \u2192 \u03b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "Similarly to the transformation of a CFG into Chomsky normal form, an LCFRS can be binarized, resulting in an LCFRS of rank 2. As in the CFG case, in the transformation, we introduce a non-terminal for each right-hand side longer than 2 and split the rule into two rules, using this new intermediate non-terminal. This is repeated until all right-hand sides are of length 2. The transformation algorithm is inspired by G\u00f3mez-Rodr\u00edguez et al. (2009) and it is also specified in Kallmeyer (2010).", |
| "cite_spans": [ |
| { |
| "start": 419, |
| "end": 448, |
| "text": "G\u00f3mez-Rodr\u00edguez et al. (2009)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Binarization", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In order to give the algorithm for this transformation, we need the notion of a reduction of a vector \u03b1 \u2208 [(T ", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 109, |
| "text": "[(T", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Binarization.", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "\u222a V) * ] i by a vector x \u2208 V j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Binarization.", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "where all variables in x occur in \u03b1. A reduction is, roughly, obtained by keeping all variables in \u03b1 that are not in x. This is defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Binarization.", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Definition 5 (Reduction) Let N, T, V, P, S be an LCFRS, \u03b1 \u2208 [(T \u222a V) * ] i and x \u2208 V j for some i, j \u2208 IN.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Binarization.", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Let w = \u03b1 1 $ . . . $ \u03b1 i be the string obtained from concatenating the components of \u03b1, separated by a new symbol $ / \u2208 (V \u222a T). Let w be the image of w under a homomorphism h defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Binarization.", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "h(a) = $ for all a \u2208 T, h(X) = $ for all X \u2208 { x 1 , . . . x j } and h(y) = y in all other cases. Let y 1 , . . . y m \u2208 V + such that w \u2208 $ * y 1 $ + y 2 $ + . . . $ + y m $ * .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Binarization.", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Then the vector y 1 , . . . y m is the reduction of \u03b1 by x.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Binarization.", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "For instance, aX 1 , X 2 , bX 3 reduced with X 2 yields X 1 , X 3 and aX 1 X 2 bX 3 reduced with X 2 yields X 1 , X 3 as well. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Binarization.", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "The two derivations of aa.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 6", |
| "sec_num": null |
| }, |
| { |
| "text": "for all rules r = A( \u03b1) \u2192 A 0 ( \u03b1 0 ) . . . A m ( \u03b1 m ) in P with m > 1 do remove r from P R := \u2205 pick new non-terminals C 1 , . . . , C m\u22121 add the rule A( \u03b1) \u2192 A 0 ( \u03b1 0 )C 1 ( \u03b3 1 ) to R where \u03b3 1 is obtained by reducing \u03b1 with \u03b1 0 for all i, 1 \u2264 i \u2264 m \u2212 2 do add the rule C i ( \u03b3 i ) \u2192 A i ( \u03b1 i )C i+1 ( \u03b3 i+1 ) to R where \u03b3 i+1 is obtained by reducing \u03b3 i with \u03b1 i end for add the rule C m\u22121 ( \u03b3 m\u22121 ) \u2192 A m\u22121 ( \u03b1 m\u22121 )A m ( \u03b1 m ) to R for every rule r \u2208 R do", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 6", |
| "sec_num": null |
| }, |
| { |
| "text": "replace right-hand side arguments of length > 1 with new variables (in both sides) and add the result to P end for end for", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 6", |
| "sec_num": null |
| }, |
| { |
| "text": "Algorithm for binarizing an LCFRS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "The binarization algorithm is given in Figure 7 . As already mentioned, it proceeds like the CFG binarization algorithm in the sense that for right-hand sides longer than 2, we introduce a new non-terminal that covers the right-hand side without the first element. Figure 8 shows an example. In this example, there is only one rule with a right-hand side longer than 2. In a first step, we introduce the new non-terminals and rules that binarize the right-hand side. This leads to the set R. In a second step, before adding the rules from R to the grammar, whenever a right-hand side argument contains several variables, these are collapsed into a single new variable.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 39, |
| "end": 47, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 265, |
| "end": 273, |
| "text": "Figure 8", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "The equivalence of the original LCFRS and the binarized grammar is rather straightforward. Note, however, that the fan-out of the LCFRS can increase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "The binarization depicted in Figure 7 is deterministic in the sense that for every rule that needs to be binarized, we choose unique new non-terminals. Later, in Section 5.3.1, we will introduce additional factorization into the grammar rules that reduces the set of new non-terminals.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 29, |
| "end": 37, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 7", |
| "sec_num": null |
| }, |
| { |
| "text": "In LCFRS, in contrast to CFG, the order of the right-hand side elements of a rule does not matter for the result of a derivation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "Original LCFRS:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "S(XYZUVW) \u2192 A(X, U)B(Y, V)C(Z, W) A(aX, aY) \u2192 A(X, Y) A(a, a) \u2192 \u03b5 B(bX, bY) \u2192 B(X, Y) B(b, b) \u2192 \u03b5 C(cX, cY) \u2192 C(X, Y) C(c, c) \u2192 \u03b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "Rule with right-hand side of length > 2:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "S(XYZUVW) \u2192 A(X, U)B(Y, V)C(Z, W)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "For this rule, we obtain the rule set R given below. Therefore, we can reorder the right-hand side of a rule before binarizing it. In the following, we present a binarization order that yields a minimal fan-out and a minimal number of variables per production and binarization step. The algorithm is inspired by G\u00f3mez-Rodr\u00edguez et al. (2009) and has first been published in this version in . We assume that we are only considering partitions of right-hand sides where one of the sets contains only a single non-terminal. For a given rule c", |
| "cite_spans": [ |
| { |
| "start": 280, |
| "end": 309, |
| "text": "G\u00f3mez-Rodr\u00edguez et al. (2009)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "R = {S(XYZUVW) \u2192 A(X, U)C 1 (YZ, VW), C 1 (YZ, VW) \u2192 B(Y, V)C(Z, W)} Equivalent binarized LCFRS: S(XPUQ) \u2192 A(X, U)C 1 (P, Q) C 1 (YZ, VW) \u2192 B(Y, V)C(Z, W) A(aX, aY) \u2192 A(X, Y) A(a, a) \u2192 \u03b5 B(bX, bY) \u2192 B(X, Y) B(b, b) \u2192 \u03b5 C(cX, cY) \u2192 C(X, Y) C(c, c) \u2192 \u03b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "= A 0 ( x 0 ) \u2192 A 1 ( x 1 ) . . . A k ( x k )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": ", we define the characteristic string s(c, A i ) of the A i -reduction of c as follows: Concatenate the elements of x 0 , separated with new additional symbols $, while replacing every component from x i with a $. We then define the arity of the characteristic string, dim(s(c, A i )), as the number of maximal variable substrings x \u2208 V + in s(c, A i ). Figure 9 shows how in a first step, for a given rule r with right-hand side length > 2, we determine the optimal candidate for binarization based on the characteristic string s(r, B) of some right-hand side non-terminal B and on the fan-out of B: On all right-hand side predicates B we check for the maximal fan-out (given by dim(s(r, B))) and the number of variables (dim(s(r, B)) + dim(B)) we would obtain when binarizing with this predicate. This check provides the optimal candidate. In a second step we then perform the same binarization as before, except that we use the optimal candidate now instead of the first element of the right-hand side.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 321, |
| "end": 329, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "The substrings counted are the maximal variable substrings x \u2208 V + in s(c, A i ). Take, for example, a rule c = VP(X, YZU) \u2192 VP(X, Z)V(Y)N(U). Then s(c, VP) = $$Y$U and s(c, V) = X$$ZU.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimizing Fan-Out and Number of Variables.", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "We can assume without loss of generality that our grammars are \u03b5-free and monotone (the treebank grammars with which we are concerned all have these properties) and that they contain only binary and unary rules. Furthermore, we assume POS tagging to be done before parsing. POS tags are non-terminals of fan-out 1. Finally, according to our grammar extraction algorithm (see Section 5.1), a separation between two components always means that there is actually a non-empty gap in between them. Consequently, two different components in a right-hand side can never be adjacent in the same component of the left-hand side. The rules are then either of the form A(a) \u2192 \u03b5 with A a POS tag and a \u2208 T or of the form given below, that is, only the rules for POS tags contain terminals in their left-hand sides.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Parser", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "A( x) \u2192 B( x) or A( \u03b1) \u2192 B( x)C( y) where \u03b1 \u2208 (V + ) dim(A) , x \u2208 V dim(B) , y \u2208 V dim(C)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Parser", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "cand = 0 fan-out = number of variables in r vars = number of variables in r for all i = 0 to m do cand-fan-out = dim(s(r, A i )); if cand-fan-out < fan-out and dim(A i ) < fan-out then fan-out = max({cand-fan-out, dim(A i )}); vars = cand-fan-out + dim(A i ); cand = i; else if cand-fan-out \u2264 fan-out, dim(A i ) \u2264 fan-out and cand-fan-out + dim(A i ) < vars then fan-out = max({cand-fan-out, dim(A i )}); vars = cand-fan-out + dim(A i ); cand = i end if end for", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Parser", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Optimized version of the binarization algorithm, determining binarization order.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "During parsing we have to link the terminals and variables in our LCFRS rules to portions of the input string. For this purpose we need the notions of ranges, range vectors, and rule instantiations. A range is a pair of indices that characterizes the span of a component within the input. A range vector characterizes a tuple in the yield of a non-terminal. A rule instantiation specifies the computation of an element from the lefthand side yield from elements in the yields of the right-hand side non-terminals based on the corresponding range vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "Let", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6 (Range)", |
| "sec_num": null |
| }, |
| { |
| "text": "w \u2208 T * with w = w 1 . . . w n where w i \u2208 T for 1 \u2264 i \u2264 n. 1. Pos(w) := {0, . . . , n}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 6 (Range)", |
| "sec_num": null |
| }, |
| { |
| "text": "We call a pair l, r \u2208 Pos(w) \u00d7 Pos(w) with l \u2264 r a range in w. Its yield l, r (w) is the substring w l+1 . . . w r .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "For two ranges", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "\u03c1 1 = l 1 , r 1 , \u03c1 2 = l 2 , r 2 , if r 1 = l 2 , then the concatenation of \u03c1 1 and \u03c1 2 is \u03c1 1 \u2022 \u03c1 2 = l 1 , r 2 ; otherwise \u03c1 1 \u2022 \u03c1 2 is undefined. 4. A \u03c1 \u2208 (Pos(w) \u00d7 Pos(w)) k is a k-dimensional range vector for w iff \u03c1 = l 1 , r 1 , . . . , l k , r k where l i , r i is a range in w for 1 \u2264 i \u2264 k.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "We now define instantiations of rules with respect to a given input string. This definition follows the definition of clause instantiations from Boullier (2000) . An instantiated rule is a rule in which variables are consistently replaced by ranges. Because we need this definition only for parsing our specific grammars, we restrict ourselves to \u03b5-free rules containing only variables.", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 160, |
| "text": "Boullier (2000)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Let G = (N, T, V, P, S) be an \u03b5-free monotone LCFRS. For a given rule", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "r = A( \u03b1) \u2192 A 1 ( x 1 ) \u2022 \u2022 \u2022 A m ( x m ) \u2208 P (0 < m) that does not contain any terminals, 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "an instantiation with respect to a string ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "w = t 1 . . . t n consists of a function f : V \u2192 { i, j | 1 \u2264 i \u2264 j \u2264 n}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "such that for all x, y \u2208 V that are adjacent within a component of \u03b1, f (xy) = f (x) \u2022 f (y), 2. if f is an instantiation of r, then A( f ( \u03b1)) \u2192 A 1 ( f ( x 1 )) \u2022 \u2022 \u2022 A m ( f ( x m )) is an instantiated rule where f ( x 1 , . . . , x k ) = f (x 1 ), . . . , f (x k ) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "We use a probabilistic version of the CYK parser from Seki et al. (1991) . The algorithm is formulated using the framework of parsing as deduction (Pereira and Warren 1983; Shieber, Schabes, and Pereira 1995; Sikkel 1997) , extended with weights (Nederhof 2003) . In this framework, a set of weighted items representing partial parsing results is characterized via a set of deduction rules, and certain items (the goal items) represent successful parses.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 72, |
| "text": "Seki et al. (1991)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 147, |
| "end": 172, |
| "text": "(Pereira and Warren 1983;", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 173, |
| "end": 208, |
| "text": "Shieber, Schabes, and Pereira 1995;", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 209, |
| "end": 221, |
| "text": "Sikkel 1997)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 246, |
| "end": 261, |
| "text": "(Nederhof 2003)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "During parsing, we have to match components in the rules we use with portions of the input string. For a given input w, our items have the form [A, \u03c1] where A \u2208 N and \u03c1 is a range vector that characterizes the span of A. Each item has a weight in, which encodes the Viterbi inside score of its best parse tree. More precisely, we use the log probability log(p) where p is the probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "The first rule (scan) tells us that the POS tags that we receive as inputs are given. Consequently, they are axioms; their probability is 1 and their weight therefore 0. The", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "Scan: 0 : [A, i, i + 1 ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "A is the POS tag of w i+1 Unary:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "in : [B, \u03c1] in + log(p) : [A, \u03c1] p : A( \u03b1) \u2192 B( \u03b1) \u2208 P Binary: in B : [B, \u03c1 B ], in C : [C, \u03c1 C ] in B + in C + log(p) : [A, \u03c1 A ] p : A( \u03c1 A ) \u2192 B( \u03c1 B )C( \u03c1 C )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "is an instantiated rule Goal: [S, 0, n ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 7 (Rule instantiation)", |
| "sec_num": null |
| }, |
| { |
| "text": "Weighted CYK deduction system. second rule, unary, is applied whenever we have found the right-hand side of an instantiation of a unary rule. In our grammar, terminals only occur in rules with POS tags and the grammar is ordered and \u03b5-free. Therefore, the components of the yield of the right-hand side non-terminal and of the left-hand side terminals are the same. The rule binary applies an instantiated rule of rank 2. If we already have the two elements of the right-hand side, we can infer the left-hand side element. In both cases, unary and binary, the probability p of the new rule is multiplied with the probabilities of the antecedent items (which amounts to summing up the antecedent weights and log(p)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "We perform weighted deductive parsing, based on the deduction system from Figure 10 . We use a chart C and an agenda A, both initially empty, and we proceed as in Figure 11 . Because for all our deduction rules, the weight functions f that compute the weight of a consequent item from the weights of the antecedent items are monotone non-increasing in each variable, the algorithm will always find the best parse without the need for exhaustive parsing. All new items that we deduce involve at least one of the agenda items as an antecedent item. Therefore, whenever an item is the best in the agenda, we can be sure that we will never find an item with a better (i.e., higher) weight. Consequently, we can safely store this item in the chart and, if it is a goal item, we have found the best parse.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 74, |
| "end": 83, |
| "text": "Figure 10", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 163, |
| "end": 172, |
| "text": "Figure 11", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "As an example consider the development of the agenda and the chart in Figure 12 when parsing aa with the PLCFRS from Figure 5 , transformed into a PLCFRS with pre-terminals and binarization (i.e., with a POS tag T a and a new binarization non-terminal B'). The new PLCFRS is given in Figure 13 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 125, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 283, |
| "end": 292, |
| "text": "Figure 13", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "In this example, we find a first analysis for the input (a goal item) when combining an A with span 0, 2 into an S. This S has however a rather low probability and is therefore not on top of the agenda. Later, when finding the better analysis, the weight add SCAN results to A while A \u2260 \u2205 remove best item x : I from A add x : I to C if I goal item then stop and output true else for all y : I deduced from x : I and items in C: if there is no z with z : I \u2208 C \u222a A then add y : I to A else if z : I \u2208 A for some z then update weight of I in A to max(y, z)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 10", |
| "sec_num": null |
| }, |
| { |
| "text": "Weighted deductive parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "chart agenda 0 : [T a , 0, 1 ], 0 : [T a , 1, 2 ] 0 : [T a , 0, 1 ] 0 : [T a , 1, 2 ], \u22120.5 : [A, 0, 1 ] 0 : [T a , 0, 1 ], 0 : [T a , 1, 2 ] \u22120.5 : [A, 0, 1 ], \u22120.5 : [A, 1, 2 ], \u22120.7 : [B, 0, 1 , 1, 2 ] 0 : [T a , 0, 1 ], 0 : [T a , 1, 2 ], \u22120.5 : [A, 1, 2 ], \u22120.7 : [B, 0, 1 , 1, 2 ], \u22120.5 : [A, 0, 1 ] \u22121.2 : [S, 0, 1 ] 0 : [T a , 0, 1 ], 0 : [T a , 1, 2 ], \u22120.65 : [A, 0, 2 ], \u22120.7 : [B, 0, 1 , 1, 2 ], \u22120.5 : [A, 0, 1 ], \u22120.5 : [A, 1, 2 ] \u22121.2 : [S, 0, 1 ], \u22121.2 : [S, 1, 2 ] 0 : [T a , 0, 1 ], 0 : [T a , 1, 2 ], \u22120.7 : [B, 0, 1 , 1, 2 ], \u22121.2 : [S, 0, 1 ], \u22120.5 : [A, 0, 1 ], \u22120.5 : [A, 1, 2 ], \u22121.2 : [S, 1, 2 ], \u22121.35 : [S, 0, 2 ] \u22120.65 : [A, 0, 2 ] 0 : [T a , 0, 1 ], 0 : [T a , 1, 2 ], \u22120.8 : [S, 0, 2 ], \u22121.2 : [S, 0, 1 ], \u22120.5 : [A, 0, 1 ], \u22120.5 : [A, 1, 2 ], \u22121.2 : [S, 1, 2 ] \u22120.65 : [A, 0, 2 ], \u22120.7 : [B, 0, 1 , 1, 2 ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "Parsing of aa with the grammar from Figure 5 . PLCFRS with non-terminals {S, A, B, B', T a }, terminals {a} and start symbol S:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 36, |
| "end": 44, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 12", |
| "sec_num": null |
| }, |
| { |
| "text": "0.2 : S(X) \u2192 A(X) 0.8 : S(XY) \u2192 B(X, Y) 0.7 : A(XY) \u2192 T a (X)A(Y) 0.3 : A(X) \u2192 T a (X) 0.8 : B(ZX, Y) \u2192 T a (Z)B'(X, Y) 1 : B'(X, UY) \u2192 B(X, Y)T a (U) 0.2 : B(X, Y) \u2192 T a (X)T a (Y) 1 : T a (a) \u2192 \u03b5 Figure 13", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 12", |
| "sec_num": null |
| }, |
| { |
| "text": "Sample binarized PLCFRS (with pre-terminal T a ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 12", |
| "sec_num": null |
| }, |
| { |
| "text": "of the S item in the agenda is updated and then the goal item is the top agenda item and therefore parsing has been successful. Note that, so far, we have only presented the recognizer. In order to extend it to a parser, we do the following: Whenever we generate a new item, we store it not only with its weight but also with backpointers to its antecedent items. Furthermore, whenever we update the weight of an item in the agenda, we also update the backpointers. In order to read off the best parse tree, we have to start from the goal item and follow the backpointers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 12", |
| "sec_num": null |
| }, |
| { |
| "text": "So far, the weights we use give us only the Viterbi inside score of an item. In order to speed up parsing, we add the estimate of the costs for completing the item into a goal item to its weight; that is, to the weight of each item in the agenda, we add an estimate of its Viterbi outside score (i.e., the logarithm of the estimate). We use context summary estimates. A context summary is an equivalence class of items for which we can compute the actual outside scores. Those scores are then used as estimates. The challenge is to choose an estimate that is general enough to be efficiently computable and specific enough to be helpful for discriminating items in the agenda.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Outside Estimates", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Admissibility and monotonicity are two important conditions on estimates. All our outside estimates are admissible (Klein and Manning 2003a) , which means that they never underestimate the actual outside score of an item. In other words, they are too optimistic about the costs of completing the item into an S item spanning the entire input. For the full SX estimate described in Section 4.1 and the SX estimate with span and sentence length in Section 4.4, the monotonicity is guaranteed and we can do true A * parsing as described by Klein and Manning. Monotonicity means that for each antecedent item of a rule it holds that its weight is greater than or equal to the weight of the consequent item. The estimates from Sections 4.2 and 4.3 are not monotonic. This means that it can happen that we deduce an item I 2 from an item I 1 where the weight of I 2 is greater than the weight of I 1 . The parser can therefore end up in a local maximum that is not the global maximum we are searching for. In other words, those estimates are only figures of merit (FOM).", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 140, |
| "text": "(Klein and Manning 2003a)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 537, |
| "end": 546, |
| "text": "Klein and", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Outside Estimates", |
| "sec_num": "4." |
| }, |
| { |
| "text": "All outside estimates are computed off-line for a certain maximal sentence length len max .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Outside Estimates", |
| "sec_num": "4." |
| }, |
| { |
| "text": "The full SX estimate is a PLCFRS adaption of the SX estimate of Klein and Manning (2003a) (hence the name). For a given sentence length n, the estimate gives the maximal probability of completing a category X with a span \u03c1 into an S with span 0, n .", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 89, |
| "text": "Klein and Manning (2003a)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Full SX Estimate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For its computation, we need an estimate of the inside score of a category C with a span \u03c1, regardless of the actual terminals in our input. This inside estimate is computed as shown in Figure 14 . Here, we do not need to consider the number of terminals outside the span of C (to the left or right or in the gaps), because they are not relevant for the inside score. Therefore the items have the form [A, l 1 , . . . , l dim(A) ], where A is a nonterminal and l i gives the length of its ith component. It holds that", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 186, |
| "end": 195, |
| "text": "Figure 14", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Full SX Estimate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u03a3 1\u2264i\u2264dim(A) l i \u2264 len max \u2212 dim(A) + 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Full SX Estimate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "because our grammar extraction algorithm ensures that the different components in the yield of a non-terminal are never adjacent. There is always at least one terminal in between two different components that does not belong to the yield of the non-terminal.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Full SX Estimate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The first rule in Figure 14 tells us that POS tags always have a single component of length 1; therefore this case has probability 1 (weight 0). The rules unary and binary are roughly like the ones in the CYK parser, except that they combine items with length information. The rule unary for instance tells us that if the log of the probability of building [B, l] is greater than or equal to in and if there is a rule that allows us to deduce an POS tags: 0 :", |
| "cite_spans": [ |
| { |
| "start": 357, |
| "end": 363, |
| "text": "[B, l]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 18, |
| "end": 27, |
| "text": "Figure 14", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Full SX Estimate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "[A, 1 ] A a POS tag Unary: in : [B, l] in + log(p) : [A, l] p : A( \u03b1) \u2192 B( \u03b1) \u2208 P Binary: in B : [B, l B ], in C : [C, l C ] in B + in C + log(p) : [A, l A ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Full SX Estimate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where p : A( \u03b1 A ) \u2192 B( \u03b1 B )C( \u03b1 C ) \u2208 P and the following holds: we define B", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Full SX Estimate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(i) as {1 \u2264 j \u2264 dim(B) | \u03b1 B ( j) occurs in \u03b1 A (i)} and C(i) as {1 \u2264 j \u2264 dim(C) | \u03b1 C ( j) occurs in \u03b1 A (i)}. Then for all i, 1 \u2264 i \u2264 dim(A): l A (i) = \u03a3 j\u2208B(i) l B ( j) + \u03a3 j\u2208C(i) l C ( j).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Full SX Estimate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Estimate of the Viterbi inside score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 14", |
| "sec_num": null |
| }, |
| { |
| "text": "Axiom : 0 : [S, 0, len, 0 ] 1 \u2264 len \u2264 len max Unary: out : [A, l] out + log(p) : [B, l] p : A( \u03b1) \u2192 B( \u03b1) \u2208 P Binary-right: out : [X, l X ] out + in(A, l A ) + log(p) : [B, l B ] Binary-left: out : [X, l X ] out + in(B, l B ) + log(p) : [A, l A ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "where, for both binary rules, there is an instantiated rule p :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "X( \u03c1) \u2192 A( \u03c1 A )B( \u03c1 B ) such that l X = l out (\u03c1); in binary-right, l A = l in (\u03c1 A ) and l B = l out (\u03c1 B ); in binary-left, l A = l out (\u03c1 A ) and l B = l in (\u03c1 B ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "Full SX estimate first version (top-down).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "A item from [B, l] with probability p, then the log of the probability of [A, l] is greater or equal to in + log(p). For each item, we record its maximal weight (i.e., its maximal probability). The rule binary is slightly more complicated because we have to compute the length vector of the left-hand side of the rule from the right-hand side length vectors.", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 18, |
| "text": "[B, l]", |
| "ref_id": null |
| }, |
| { |
| "start": 74, |
| "end": 80, |
| "text": "[A, l]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "A straightforward extension of the CFG algorithm from Klein and Manning (2003a) for computing the SX estimate is given in Figure 15 . Here, the items have the form [A, l] where the vector l tells us about the lengths of the string to the left of the first component, the first component, the string in between the first and second component, and so on.", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 79, |
| "text": "Klein and Manning (2003a)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 164, |
| "end": 170, |
| "text": "[A, l]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 122, |
| "end": 131, |
| "text": "Figure 15", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "The algorithm proceeds top-down. The outside estimate of completing an S with component length len and no terminals to the left or to the right of the S component (item [S, 0, len, 0 ]) is 0. If we expand with a unary rule (unary), then the outside estimate of the right-hand side item is greater or equal to the outside estimate of the left-hand side item plus the log of the probability of the rule. In the case of binary rules, we have to further add the inside estimate of the other daughter. For this, we need a different length vector (without the lengths of the parts in between the components). Therefore, for a given range vector \u03c1 = l 1 , r 1 , . . . , l k , r k and a sentence length n, we distinguish between the inside length vector l in (\u03c1) = r 1 \u2212 l 1 , . . . , r k \u2212 l k and the outside length vector l out (\u03c1) ", |
| "cite_spans": [ |
| { |
| "start": 823, |
| "end": 826, |
| "text": "(\u03c1)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "= l 1 , r 1 \u2212 l 1 , l 2 \u2212 r 1 , . . . , l k \u2212 r k\u22121 , r k \u2212 l k , n \u2212 r k .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
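The two length vectors defined above are straightforward to compute from a range vector. A minimal sketch in Python; the list-of-pairs representation and the function names are ours, not the authors':

```python
def l_in(rho):
    """Inside length vector of a range vector rho = [(l1, r1), ..., (lk, rk)]:
    just the length of each component."""
    return [r - l for (l, r) in rho]

def l_out(rho, n):
    """Outside length vector for sentence length n: terminals to the left,
    then alternating component and gap lengths, then terminals to the right."""
    out = [rho[0][0]]                      # l1: terminals left of the first component
    for i, (l, r) in enumerate(rho):
        out.append(r - l)                  # component length
        if i + 1 < len(rho):
            out.append(rho[i + 1][0] - r)  # gap to the next component
    out.append(n - rho[-1][1])             # terminals right of the last component
    return out
```

For ρ = ⟨⟨2, 4⟩, ⟨6, 9⟩⟩ and n = 10 this yields l_in(ρ) = ⟨2, 3⟩ and l_out(ρ) = ⟨2, 2, 2, 3, 1⟩, matching the definition above term by term.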
| { |
| "text": "This algorithm has two major problems: Because it proceeds top-down, in the binary rules we must compute all splits of the antecedent X span into the spans of A and B, which is very expensive. Furthermore, for a category A with a certain number of terminals in the components and the gaps, we compute the lower part of the outside estimate several times, namely, for every combination of number of terminals to the left and to the right (first and last element in the outside length vector). In order to avoid these problems, we now abstract away from the lengths of the part to the left and the right, modifying our items such as to allow a bottom-up strategy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "The idea is to compute the weights of items representing the derivations from a certain lower C up to some A (C is a kind of \"gap\" in the yield of A) while summing up the inside costs of off-spine nodes and the log of the probabilities of the corresponding rules. We use items [A, C, \u03c1 A , \u03c1 C , shift] where A, C \u2208 N and \u03c1 A , \u03c1 C are range vectors, both with a first component starting at position 0. The integer shift \u2264 len max tells us how many positions to the right the C span is shifted, compared to the starting position of the A. \u03c1 A and \u03c1 C represent the spans of C and A while disregarding the number of terminals to the left and the right (i.e., only the lengths of the components and of the gaps are encoded). This means in particular that the length n of the sentence does not play a role here. The right boundary of the last range in the vectors is limited to len max . For any i, 0 \u2264 i \u2264 len max , and any range vector \u03c1, we define shift(\u03c1, i) as the range vector one obtains from adding i to all range boundaries in \u03c1 and shift(\u03c1, \u2212i) as the range vector one obtains from subtracting i from all boundaries in \u03c1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
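The shift operation is a plain translation of all range boundaries. A sketch under the same list-of-pairs encoding of range vectors as before (function names are ours):

```python
def shift(rho, i):
    """Add i (possibly negative) to every range boundary in rho."""
    return [(l + i, r + i) for (l, r) in rho]

def normalize(rho):
    """Split rho into its 0-based variant and the shift that restores it,
    so that shift(*normalize(rho)) == rho."""
    i = rho[0][0]
    return shift(rho, -i), i
```

`normalize` illustrates how an arbitrary span can be stored in an item whose first component starts at position 0, together with the recorded shift value.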
| { |
| "text": "The weight of [A, C, \u03c1 A , \u03c1 C , i] estimates the log of the probability of completing a C tree with yield \u03c1 C into an A tree with yield \u03c1 A such that, if the span of A starts at position j, the span of C starts at position i + j. Figure 16 gives the computation. The value of in (A, l) is the inside estimate of [A, l] .", |
| "cite_spans": [ |
| { |
| "start": 313, |
| "end": 319, |
| "text": "[A, l]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 231, |
| "end": 240, |
| "text": "Figure 16", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 280, |
| "end": 286, |
| "text": "(A, l)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "The SX-estimate for some predicate C with span \u03c1 where i is the left boundary of the first component of \u03c1 and with sentence length n is then given by the maximal weight of [S, C, 0, n , shift(\u03c1, \u2212i), i].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 15", |
| "sec_num": null |
| }, |
| { |
| "text": "A problem of the previous estimate is that with a large number of non-terminals (for treebank parsing, approximately 12,000 after binarization and markovization), the computation of the estimate requires too much space. We therefore turn to simpler estimates with only a single non-terminal per item. We now estimate the outside score of a nonterminal A with a span of a length length (the sum of the lengths of all the components of the span), with left terminals to the left of the first component, right terminals to the right of the last component, and gaps terminals in between the components of the A span (i.e., filling the gaps). Our items have the form [X, len, left, right, gaps] with X \u2208 N, len", |
| "cite_spans": [ |
| { |
| "start": 662, |
| "end": 689, |
| "text": "[X, len, left, right, gaps]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "+ left + right + gaps \u2264 len max , len \u2265 dim(X), gaps \u2265 dim(X) \u2212 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Let us assume that, in the rule X( \u03b1) \u2192 A( \u03b1 A )B( \u03b1 B ), when looking at the vector \u03b1, we have left A variables for A-components preceding the first variable of a B component, right A variables for A-components following the last variable of a B component, and right B variables for B-components following the last variable of an A component. (In our grammars, the first left-hand side argument always starts with the first variable from A.) Furthermore, we set gaps A = dim(A) \u2212 left A \u2212 right A and gaps B = dim(B) \u2212 right B . Figure 17 gives the computation of the estimate. It proceeds top-down, as the computation of the full SX estimate in Figure 15 , except that now the items are simpler.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 530, |
| "end": 539, |
| "text": "Figure 17", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 647, |
| "end": 656, |
| "text": "Figure 15", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "POS tags: 0 : [C, C, 0, 1 , 0, 1 , 0] C a POS tag Unary: 0 : [B, B, \u03c1 B , \u03c1 B , 0] log(p) : [A, B, \u03c1 B , \u03c1 B , 0] p : A( \u03b1) \u2192 B( \u03b1) \u2208 P Binary-right: 0 : [A, A, \u03c1 A , \u03c1 A , 0], 0 : [B, B, \u03c1 B , \u03c1 B , 0] in(A, l in (\u03c1 A )) + log(p) : [X, B, \u03c1 X , \u03c1 B , i] Binary-left: 0 : [A, A, \u03c1 A , \u03c1 A , 0], 0 : [B, B, \u03c1 B , \u03c1 B , 0] in(B, l in (\u03c1 B )) + log(p) : [X, A, \u03c1 X , \u03c1 A , i] where i is such that for shift(\u03c1 B , i) = \u03c1 B p : X(\u03c1 X ) \u2192 A(\u03c1 A )B(\u03c1 B )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "is an instantiated rule.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Starting sub-trees with larger gaps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "out : [B, C, \u03c1 B , \u03c1 C , i] 0 : [B, B, \u03c1 B , \u03c1 B , 0]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Transitive closure of sub-tree combination:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "out 1 : [A, B, \u03c1 A , \u03c1 B , i], out 2 : [B, C, \u03c1 B , \u03c1 C , j] out 1 + out 2 : [A, C, \u03c1 A , \u03c1 C , i + j]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Left, Gaps, Right, Length", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Full SX estimate second version (bottom-up).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 16", |
| "sec_num": null |
| }, |
| { |
| "text": "Axiom : 0 : [S, len, 0, 0, 0] 1 \u2264 len \u2264 len max Unary: out : [X, len, l, r, g] out + log(p) : [A, len, l, r, g] p : X( \u03b1) \u2192 A( \u03b1) \u2208 P Binary-right: out : [X, len, l, r, g] out + in(A, len \u2212 len B ) + log(p) : [B, len B , l B , r B , g B ] Binary-left: out : [X, len, l, r, g] out + in(B, len \u2212 len A ) + log(p) : [A, len A , l A , r A , g A ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "where, for both binary rules, p :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "X( \u03b1) \u2192 A( \u03b1 A )B( \u03b1 B ) \u2208 P.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "Further side conditions for Binary-right:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "a) len + l + r + g = len B + l B + r B + g B , b ) l B \u2265 l + left A , c) if right A > 0, then r B \u2265 r + right A , else (right A = 0), r B = r, d) g B \u2265 gaps A .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "Further side conditions for Binary-left: The value in(X, l) for a non-terminal X and a length l, 0 \u2264 l \u2264 len max is an estimate of the probability of an X category with a span of length l. Its computation is specified in Figure 18 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 221, |
| "end": 230, |
| "text": "Figure 18", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "a) len + l + r + g = len A + l A + r A + g A , b ) l A = l, c) if right B > 0, then r A \u2265 r + right B , else (right B = 0), r A = r d) g A \u2265 gaps B .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "The SX-estimate for a sentence length n and for some predicate C with a range characterized by \u03c1 = l 1 , r 1 , . . . , l dim(C) , r dim(C) where len = \u03a3 dim (C) i=1 (r i \u2212 l i ) and r = n \u2212 r dim(C) is then given by the maximal weight of the item [C, len, ", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 160, |
| "text": "(C)", |
| "ref_id": null |
| }, |
| { |
| "start": 247, |
| "end": 255, |
| "text": "[C, len,", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "l 1 , r, n \u2212 len \u2212 l 1 \u2212 r].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PLCFRS Parsing", |
| "sec_num": null |
| }, |
| { |
| "text": "In order to further decrease the space complexity of the computation of the outside estimate, we can simplify the previous estimate by subsuming the two lengths left and right in a single length lr. The items now have the form [X, len, lr, gaps] with X \u2208 N, len", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 245, |
| "text": "[X, len, lr, gaps]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with LR, Gaps, Length", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "+ lr + gaps \u2264 len max , len \u2265 dim(X), gaps \u2265 dim(X) \u2212 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with LR, Gaps, Length", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The computation is given in Figure 19 . Again, we define left A , gaps A , right A and gaps B , right B for a rule X( \u03b1) \u2192 A( \u03b1 A )B( \u03b1 B ) as before. Furthermore, in both Binary-left and Binary-right, we have limited lr in the consequent item to the lr of the antecedent plus the length of the sister (len B , resp. len A ). This results in a further reduction of the number of items while having only little effect on the parsing results.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 28, |
| "end": 37, |
| "text": "Figure 19", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "SX with LR, Gaps, Length", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The SX-estimate for a sentence length n and for some predicate C with a span \u03c1 = l 1 , r 1 , . . . , l dim(C) , r dim(C) where len = \u03a3 dim (C) i=1 (r i \u2212 l i ) and r = n \u2212 r dim(C) is then the maximal weight of [C, len, ", |
| "cite_spans": [ |
| { |
| "start": 139, |
| "end": 142, |
| "text": "(C)", |
| "ref_id": null |
| }, |
| { |
| "start": 211, |
| "end": 219, |
| "text": "[C, len,", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with LR, Gaps, Length", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "l 1 + r, n \u2212 len \u2212 l 1 \u2212 r].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with LR, Gaps, Length", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "A a POS tag Unary:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "POS tags: 0 : [A, 1]", |
| "sec_num": null |
| }, |
| { |
| "text": "in : [B, l] in + log(p) : [A, l] p : A( \u03b1) \u2192 B( \u03b1) \u2208 P Binary: in B : [B, l B ], in C : [C, l C ] in B + in C + log(p) : [A, l B + l C ] where either p : A( \u03b1 A ) \u2192 B( \u03b1 B )C( \u03b1 C ) \u2208 P or p : A( \u03b1 A ) \u2192 C( \u03b1 C )B( \u03b1 B ) \u2208 P.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "POS tags: 0 : [A, 1]", |
| "sec_num": null |
| }, |
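This inside estimate, keyed only by a non-terminal and a total span length, can be realized as a fixed-point (relaxation) computation over lengths. A toy sketch; the grammar encoding below (dicts mapping non-terminal tuples to probabilities) is our own assumption, not the paper's data structures:

```python
import math
from collections import defaultdict

def inside_estimate(pos_tags, unary, binary, len_max):
    """in(X, l): upper bound on the log Viterbi inside score of an X whose
    span covers l terminals in total. `unary` maps (A, B) -> p for a rule
    A(alpha) -> B(alpha); `binary` maps (A, B, C) -> p for A -> B C
    (argument order is irrelevant here since only total lengths are kept)."""
    NEG = float("-inf")
    est = defaultdict(lambda: NEG)
    for a in pos_tags:
        est[(a, 1)] = 0.0                      # POS tags: 0 : [A, 1]
    changed = True
    while changed:                             # relax until a fixed point
        changed = False
        for (a, b, c), p in binary.items():
            for lb in range(1, len_max):
                for lc in range(1, len_max - lb + 1):
                    w = est[(b, lb)] + est[(c, lc)] + math.log(p)
                    if w > est[(a, lb + lc)]:
                        est[(a, lb + lc)] = w
                        changed = True
        for (a, b), p in unary.items():
            for l in range(1, len_max + 1):
                w = est[(b, l)] + math.log(p)
                if w > est[(a, l)]:
                    est[(a, l)] = w
                    changed = True
    return est
```

Relaxation stops once no weight improves; since log probabilities are non-positive, unary cycles cannot raise a weight indefinitely.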
| { |
| "text": "Estimate of the inside score with total span length.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "Axiom : 0 : [S, len, 0, 0] 1 \u2264 len \u2264 len max Unary: out : [X, len, lr, g] out + log(p) : [A, len, lr, g] p : X( \u03b1) \u2192 A( \u03b1) \u2208 P Binary-right: out : [X, len, lr, g] out + in(A, len \u2212 len B ) + log(p) : [B, len B , lr B , g B ] p : X( \u03b1) \u2192 A( \u03b1 A )B( \u03b1 B ) \u2208 P Binary-left: out : [X, len, lr, g] out + in(B, len \u2212 len A ) + log(p) : [A, len A , lr A , g A ] p : X( \u03b1) \u2192 A( \u03b1 A )B( \u03b1 B ) \u2208 P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "Further side conditions for Binary-right:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "a) len + lr + g = len B + lr B + g B b) lr < lr B c) g B \u2265 gaps A", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "Further side conditions for Binary-left: a) len + lr + g = len A + lr A + g A b) if right B = 0 then lr = lr A , else lr < lr A c) g A \u2265 gaps B Figure 19 SX estimate depending on length, LR, gaps.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 144, |
| "end": 153, |
| "text": "Figure 19", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 18", |
| "sec_num": null |
| }, |
| { |
| "text": "We will now present a further simplification of the last estimate that records only the span length and the length of the entire sentence. The items have the form [X, len, slen] with X \u2208 N, dim(X) \u2264 len \u2264 slen. The computation is given in Figure 20 . This last estimate is actually monotonic and allows for true A * parsing.", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 177, |
| "text": "[X, len, slen]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 239, |
| "end": 248, |
| "text": "Figure 20", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "SX with Span and Sentence Length", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "The SX-estimate for a sentence length n and for some predicate C with a span \u03c1 = l 1 , r 1 , . . . , l dim(C) , r dim(C) where len = \u03a3 dim (C) i=1 (r i \u2212 l i ) is then the maximal weight of [C, len, n] .", |
| "cite_spans": [ |
| { |
| "start": 139, |
| "end": 142, |
| "text": "(C)", |
| "ref_id": null |
| }, |
| { |
| "start": 190, |
| "end": 201, |
| "text": "[C, len, n]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Span and Sentence Length", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In order to prove that this estimate allows for monotonic weighted deductive parsing and therefore guarantees that the best parse will be found, let us have a look at the CYK deduction rules when being augmented with the estimate. Only Unary and Binary are relevant because Scan does not have antecedent items. The two rules, augmented with the outside estimate, are shown in Figure 21 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 376, |
| "end": 385, |
| "text": "Figure 21", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "SX with Span and Sentence Length", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We have to show that for every rule, if this rule has an antecedent item with weight w and a consequent item with weight w , then w \u2265 w .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Span and Sentence Length", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Let us start with Unary. To show:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Span and Sentence Length", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "in B + out B \u2265 in B + log(p) + out A .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Span and Sentence Length", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Because of the Unary rule for computing the outside estimate and because of the unary production,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Span and Sentence Length", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Axiom : 0 : [S, len, len] 1 \u2264 len \u2264 len max Unary: out : [X, l X , slen] out + log(p) : [A, l X , slen] p : X( \u03b1) \u2192 A( \u03b1) \u2208 P Binary-right: out : [X, l X , slen] out + in(A, l X \u2212 l B ) + log(p) : [B, l B , slen] p : X( \u03b1) \u2192 A( \u03b1 A )B( \u03b1 B ) \u2208 P Binary-left: out : [X, l X , slen] out + in(B, l X \u2212 l A ) + log(p) : [A, l A , slen] p : X( \u03b1) \u2192 A( \u03b1 A )B( \u03b1 B ) \u2208 P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SX with Span and Sentence Length", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "SX estimate depending on span and sentence length. Unary:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "in B + out B : [B, \u03c1] in B + log(p) + out A : [A, \u03c1] p : A( \u03b1) \u2192 B( \u03b1) \u2208 P Binary: in B + out B : [B, \u03c1 B ], in C + out C : [C, \u03c1 C ] in B + in C + log(p) + out A : [A, \u03c1 A ] p : A( \u03c1 A ) \u2192 B( \u03c1 B )C( \u03c1 C )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "is an instantiated rule (Here, out A , out B , and out C are the respective outside estimates of [A, \u03c1 A ], [B, \u03c1 B ] and [C, \u03c1 C ].)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 20", |
| "sec_num": null |
| }, |
| { |
| "text": "Parsing rules including outside estimate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 21", |
| "sec_num": null |
| }, |
| { |
| "text": "we obtain that, given the outside estimate out A of [A, \u03c1] the outside estimate out B of the item [B, \u03c1] is at least out A + log(p), namely, out B \u2265 log(p) + out A . Now let us consider the rule Binary. We treat only the relation between the weight of the C antecedent item and the consequent. The treatment of the antecedent B is symmetric. To show:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 21", |
| "sec_num": null |
| }, |
| { |
| "text": "in C + out C \u2265 in B + in C + log(p) + out A .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 21", |
| "sec_num": null |
| }, |
| { |
| "text": "Assume that l B is the length of the components of the B item and n is the sentence length. Then, because of the Binary-right rule in the computation of the outside estimate and because of our instantiated rule p : A( \u03c1 A ) \u2192 B( \u03c1 B )C( \u03c1 C ), we have that the outside estimate out C of the C-item is at least out A + in (B, l ", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 326, |
| "text": "(B, l", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 21", |
| "sec_num": null |
| }, |
| { |
| "text": "B ) + log(p). Furthermore, in(B, l B ) \u2265 in B . Consequently out C \u2265 in B + log(p) + out A .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 21", |
| "sec_num": null |
| }, |
| { |
| "text": "Before parsing, the outside estimates of all items up to a certain maximal sentence length len max are precomputed. Then, when performing the weighted deductive parsing as explained in Section 3.2, whenever a new item is stored in the agenda, we add its outside estimate to its weight.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integration into the Parser", |
| "sec_num": "4.5" |
| }, |
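The agenda handling just described can be sketched as a minimal best-first deduction loop in which an item's priority is its weight plus its precomputed outside estimate. The `expand` and `estimate` callbacks are hypothetical placeholders of our own, not the authors' interfaces:

```python
import heapq

def weighted_deduction(axioms, expand, estimate):
    """Agenda-driven weighted deduction. `axioms` is a list of (item, weight)
    pairs, `expand(item, weight, chart)` yields newly deducible
    (item, weight) pairs, and `estimate(item)` is the item's precomputed
    outside estimate. heapq is a min-heap, so priorities are negated to pop
    the highest-scoring item first."""
    chart = {}
    agenda = [(-(w + estimate(item)), w, item) for item, w in axioms]
    heapq.heapify(agenda)
    while agenda:
        _, w, item = heapq.heappop(agenda)
        if item in chart:                  # already popped with a better weight
            continue
        chart[item] = w
        for new_item, new_w in expand(item, w, chart):
            if new_item not in chart:
                heapq.heappush(agenda, (-(new_w + estimate(new_item)), new_w, new_item))
    return chart
```

With a monotonic (admissible and consistent) estimate this is A* search: the first time the goal item is popped, its weight is the Viterbi score.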
| { |
| "text": "Because the outside estimate is always greater than or equal to the actual outside score, given the input, the weight of an item in the agenda is always greater than or equal to the log of the actual product of the inside and outside score of the item. In this sense, the outside estimates given earlier are admissible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integration into the Parser", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "Additionally, as already mentioned, note that the full SX estimate and the SX estimate with span and sentence length are monotonic and allow for A * parsing. The other two estimates, which are both not monotonic, act as FOMs in a best-first parsing context. Consequently, they contribute to speeding up parsing but they decrease the quality of the parsing output. For further evaluation details see Section 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integration into the Parser", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "The algorithm we use for extracting an LCFRS from a constituency treebank with crossing branches has originally been presented in Maier and S\u00f8gaard (2008) . It interprets the treebank trees as LCFRS derivation trees. Consider for instance the tree in Figure 22 . The S node has two daughters, a VMFIN node and a VP node. This yields a rule S \u2192 VP VMFIN. The VP is discontinuous with two components that wrap around the yield of the VMFIN. Consequently, the LCFRS rule is S(XYZ) \u2192 VP(X, Z) VMFIN(Y).", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 154, |
| "text": "Maier and S\u00f8gaard (2008)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 251, |
| "end": 260, |
| "text": "Figure 22", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Grammar Extraction", |
| "sec_num": "5.1" |
| }, |
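The rule read-off for a single node can be sketched from the daughters' terminal-position sets. In this toy sketch (all function names are ours), the variables come out in a different order than the paper's S(XYZ) → VP(X, Z) VMFIN(Y), but the resulting rule is identical up to renaming of variables:

```python
def components(positions):
    """Maximal contiguous blocks of a non-empty set of terminal positions,
    as half-open (start, end) ranges. len(components(p)) is the fan-out."""
    blocks, run = [], []
    for p in sorted(positions):
        if run and p != run[-1] + 1:
            blocks.append((run[0], run[-1] + 1))
            run = []
        run.append(p)
    blocks.append((run[0], run[-1] + 1))
    return blocks

def extract_rule(lhs, daughters):
    """Read off an LCFRS rule for node `lhs` from (label, position-set)
    pairs, assigning one fresh variable per daughter component."""
    var_of, names, rhs_parts = {}, iter("XYZUVW"), []
    for label, pos in daughters:
        vs = []
        for comp in components(pos):
            var_of[comp] = next(names)
            vs.append(var_of[comp])
        rhs_parts.append(f"{label}({','.join(vs)})")
    union = set().union(*(pos for _, pos in daughters))
    # each LHS argument concatenates the daughter components it covers, in order
    args = ["".join(v for (cl, cr), v in sorted(var_of.items())
                    if l <= cl and cr <= r)
            for l, r in components(union)]
    return f"{lhs}({','.join(args)}) -> {' '.join(rhs_parts)}"
```

For the S node of Figure 22 (VP over positions {0, 2, 3}, VMFIN over {1}) this produces `S(XZY) -> VP(X,Y) VMFIN(Z)`, i.e. the rule S(XYZ) → VP(X, Z) VMFIN(Y) up to variable renaming; `len(components(pos))` gives the fan-out used to subscript the non-terminals.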
| { |
| "text": "The extraction of an LCFRS from treebanks with crossing branches is almost immediate, except for the fan-out of the non-terminal categories: In the treebank, we can have the same non-terminal with different fan-outs, for instance a VP without a gap (fan-out 1), a VP with a single gap (fan-out 2), and so on. In the corresponding LCFRS, we have to distinguish these different non-terminals by mapping them to different predicates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar Extraction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The algorithm first creates a so-called lexical clause P(a) \u2192 \u03b5 for each pre-terminal P dominating some terminal a. Then for all other non-terminals A 0 with the children", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar Extraction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "A 1 \u2022 \u2022 \u2022 A m , a clause A 0 \u2192 A 1 \u2022 \u2022 \u2022 A m is created.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar Extraction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "is the number of discontinuous parts in their yields. The components of A 0 are concatenations of variables that describe how the discontinuous parts of the yield of A 0 are obtained from the yields of its daughters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The number of components of the A 1 \u2022 \u2022 \u2022 A m", |
| "sec_num": null |
| }, |
| { |
| "text": "More precisely, the non-terminals in our LCFRS are all A k where A is a non-terminal label in the treebank and k is a possible fan-out for A. For a given treebank tree V, E, r, l where V is the set of nodes, E \u2282 V \u00d7 V the set of immediate dominance edges, r \u2208 V the root node, and l : V \u2192 N \u222a T the labeling function, the algorithm constructs the following rules. Let us assume that w 1 , . . . , w n are the terminal labels of the leaves in V, E, r with a linear precedence relation w i \u227a w j for 1 \u2264 i < j \u2264 n. We introduce a variable X i for every 2 ) ) \u2192 \u03b5 to the rules of the grammar. (We omit the fan-out subscript here because pre-terminals are always of fan-out 1.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 551, |
| "end": 554, |
| "text": "2 )", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The number of components of the A 1 \u2022 \u2022 \u2022 A m", |
| "sec_num": null |
| }, |
| { |
| "text": "w i , 1 \u2264 i \u2264 n. r For every pair of nodes v 1 , v 2 \u2208 V with v 2 , v 2 \u2208 E, l(v 2 ) \u2208 T, we add l(v 1 )(l(v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The number of components of the A 1 \u2022 \u2022 \u2022 A m", |
| "sec_num": null |
| }, |
| { |
| "text": "r For every node v \u2208 V with l(v) = A 0 / \u2208 T such that there are exactly m nodes v 1 , . . . , v m \u2208 V (m \u2265 1) with v, v i \u2208 E and l(v i ) = A i / \u2208 T for all 1 \u2264 i \u2264 m, we now create a rule A 0 (x (0) 1 , . . . , x (0) dim(A 0 ) ) \u2192 A 1 (x (1) 1 , . . . , x (1) dim(A 1 ) ) . . . A m (x (m) 1 , . . . , x (m) dim(A m ) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The number of components of the A 1 \u2022 \u2022 \u2022 A m", |
| "sec_num": null |
| }, |
| { |
| "text": "where for the predicate A i , 0 \u2264 i \u2264 m, the following must hold:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The number of components of the A 1 \u2022 \u2022 \u2022 A m", |
| "sec_num": null |
| }, |
| { |
| "text": "A i , x (i) 1 . . . x (i) dim(A i ) is the concatenation of all X \u2208 {X i | v i , v i \u2208 E * with l(v i ) = w i } such that X i precedes X j if i < j, and 2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The concatenation of all arguments of", |
| "sec_num": "1." |
| }, |
| { |
| "text": "a variable X j with 1 \u2264 j < n is the right boundary of an argument of A i if and only if", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The concatenation of all arguments of", |
| "sec_num": "1." |
| }, |
| { |
| "text": "X j+1 / \u2208 {X i | v i , v i \u2208 E * with l(v i ) = w i },", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The concatenation of all arguments of", |
| "sec_num": "1." |
| }, |
| { |
| "text": "that is, an argument boundary is introduced at each discontinuity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The concatenation of all arguments of", |
| "sec_num": "1." |
| }, |
| { |
| "text": "As a further step, in this new rule, all right-hand side arguments of length > 1 are replaced in both sides of the rule with a single new variable. Finally, all non-terminals A in the rule are equipped with an additional subscript dim(A), which gives us the final non-terminal in our LCFRS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The concatenation of all arguments of", |
| "sec_num": "1." |
| }, |
| { |
| "text": "PROAV(Dar\u00fcber) \u2192 \u03b5 VMFIN(mu\u00df) \u2192 \u03b5 VVPP(nachgedacht) \u2192 \u03b5 VAINF(werden) \u2192 \u03b5 S 1 (X 1 X 2 X 3 ) \u2192 VP 2 (X 1 , X 3 )VMFIN(X 2 ) VP 2 (X 1 , X 2 X 3 ) \u2192 VP 2 (X 1 , X 2 )VAINF(X 3 ) VP 2 (X 1 , X 2 ) \u2192 PROAV(X 1 )VVPP(X 2 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The concatenation of all arguments of", |
| "sec_num": "1." |
| }, |
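To make the mechanics of these rules concrete, the following sketch (our own illustration, not the authors' implementation) represents each non-terminal instance by the tuple of (start, end) ranges of its components and replays the derivation of the example sentence:

```python
# Minimal sketch of how the LCFRS rules in Figure 23 assemble the
# discontinuous spans of "Darueber muss nachgedacht werden".
# Each non-terminal instance is a tuple of (start, end) ranges,
# one per component.

def concat(*spans):
    """Concatenate adjacent spans; the argument of an LCFRS predicate
    must cover one contiguous range."""
    out = spans[0]
    for s in spans[1:]:
        assert out[1] == s[0], "spans must be adjacent"
        out = (out[0], s[1])
    return out

# POS items for the four words (string positions 0..4)
PROAV, VMFIN, VVPP, VAINF = (0, 1), (1, 2), (2, 3), (3, 4)

# VP_2(X1, X2) -> PROAV(X1) VVPP(X2): two components, one gap
vp = (PROAV, VVPP)                      # ((0, 1), (2, 3))

# VP_2(X1, X2 X3) -> VP_2(X1, X2) VAINF(X3): second component grows
vp = (vp[0], concat(vp[1], VAINF))      # ((0, 1), (2, 4))

# S(X1 X2 X3) -> VP_2(X1, X3) VMFIN(X2): the finite verb fills the gap
s = (concat(vp[0], VMFIN, vp[1]),)      # ((0, 4),)

print(s)  # the S item spans the whole sentence
```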
| { |
| "text": "LCFRS rules extracted from the tree in Figure 22 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 39, |
| "end": 48, |
| "text": "Figure 22", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 23", |
| "sec_num": null |
| }, |
| { |
| "text": "For the tree in Figure 22 , the algorithm produces for instance the rules in Figure 23 . As standard for PCFG, the probabilities are computed using Maximum Likelihood Estimation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 16, |
| "end": 25, |
| "text": "Figure 22", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 77, |
| "end": 86, |
| "text": "Figure 23", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 23", |
| "sec_num": null |
| }, |
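The MLE step itself is straightforward; a minimal sketch, with hypothetical counts and rules represented as (lhs, rhs) pairs:

```python
# Sketch of the Maximum Likelihood Estimation mentioned above: the
# probability of a rule is its count divided by the total count of all
# rules with the same left-hand side non-terminal. The counts below
# are invented for illustration.

from collections import Counter

def mle(rule_counts):
    lhs_totals = Counter()
    for (lhs, rhs), n in rule_counts.items():
        lhs_totals[lhs] += n
    return {(lhs, rhs): n / lhs_totals[lhs]
            for (lhs, rhs), n in rule_counts.items()}

counts = {("VP_2", ("PROAV", "VVPP")): 3,
          ("VP_2", ("VP_2", "VAINF")): 1}
probs = mle(counts)
print(probs[("VP_2", ("PROAV", "VVPP"))])  # 0.75
```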
| { |
| "text": "As previously mentioned, in contrast to CFG the order of the right-hand side elements of a rule does not matter for the result of an LCFRS derivation. Therefore, we can reorder the right-hand side of a rule before binarizing it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Outward Binarization", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The following, treebank-specific reordering results in a head-outward binarization where the head is the lowest subtree and it is extended by adding first all sisters to its left and then all sisters to its right. It consists of reordering the right-hand side of the rules extracted from the treebank such that first, all elements to the right of the head are listed in reverse order, then all elements to the left of the head in their original order, and then the head itself. Figure 24 shows the effect this reordering and binarization has on the form of the syntactic trees. In addition to this, we also use a variant of this reordering Rule extracted for the S node: S(XYZU) \u2192 VP(X, U)VMFIN(Y)NN(Z) Reordering for head-outward binarization: S(XYZU) \u2192 NN(Z)VP(X, U)VMFIN(Y) New rules resulting from binarizing this rule:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 478, |
| "end": 487, |
| "text": "Figure 24", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Head-Outward Binarization", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "S(XYZ) \u2192 S bin1 (X, Z)NN(Y) S bin1 (XY, Z) \u2192 VP(X, Z)VMFIN(Y)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Outward Binarization", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Rule extracted for the VP node: VP(X, YZ) \u2192 NN(X)AV(Y)VAINF(Z) New rules resulting from binarizing this rule:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Outward Binarization", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "VP(X, Y) \u2192 NN(X)VP bin1 (Y) VP bin1 (XY) \u2192 AV(X)VAINF(Y)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Outward Binarization", |
| "sec_num": "5.2" |
| }, |
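The reordering described above can be sketched in a few lines (an illustration under our own simplified representation of a right-hand side, not the rparse code):

```python
# Sketch of the head-outward reordering: given the RHS daughters and
# the index of the head, list the daughters to the right of the head
# in reverse order, then the daughters to the left of the head in
# their original order, and finally the head itself.

def head_outward_order(rhs, head_idx):
    right = rhs[head_idx + 1:]
    left = rhs[:head_idx]
    return list(reversed(right)) + left + [rhs[head_idx]]

# Example from Figure 24: S -> VP VMFIN NN with head VMFIN (index 1)
print(head_outward_order(["VP", "VMFIN", "NN"], 1))
# -> ['NN', 'VP', 'VMFIN']
```

Binarization then splits off the leading elements one by one, so the head ends up as the lowest subtree, as stated above.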
| { |
| "text": "Tree after binarization:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Outward Binarization", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "S S bin1 VP VP bin1 NN VMFIN NN AV VAINF", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Head-Outward Binarization", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Sample head-outward binarization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 24", |
| "sec_num": null |
| }, |
| { |
| "text": "where we add first the sisters to the right and then the ones to the left. This is what Klein and Manning (2003b) do. To mark the heads of phrases, we use the head rules that the Stanford parser (Klein and Manning 2003c) uses for NeGra. In all binarizations, there exists the possibility of adding additional unary rules when deriving the head. This allows for a further factorization. In the experiments, however, we do not insert unary rules, neither at the highest nor at the lowest new binarization non-terminal, because this was neither beneficial for parsing times nor for the parsing results.", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 113, |
| "text": "Klein and Manning (2003b)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 195, |
| "end": 220, |
| "text": "(Klein and Manning 2003c)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 24", |
| "sec_num": null |
| }, |
| { |
| "text": "As already mentioned in Section 3.1, a binarization that introduces unique new non-terminals for every single rule that needs to be binarized produces a large amount of non-terminals and fails to capture certain generalizations. For this reason, we introduce markovization (Collins 1999; Klein and Manning 2003b) .", |
| "cite_spans": [ |
| { |
| "start": 273, |
| "end": 287, |
| "text": "(Collins 1999;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 288, |
| "end": 312, |
| "text": "Klein and Manning 2003b)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Markovization.", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "Markovization is achieved by introducing only a single new non-terminal for the new rules introduced during binarization and adding vertical and horizontal context from the original trees to each occurrence of this new non-terminal. As vertical context, we add the first v labels on the path from the root node of the tree that we want to binarize to the root of the entire treebank tree. The vertical context is collected during grammar extraction and then taken into account during binarization of the rules. As horizontal context, during binarization of a rule A( \u03b1) \u2192 A 0 ( \u03b1 0 ) . . . A m ( \u03b1 m ), for the new non-terminal that comprises the right-hand side elements Figure 25 shows an example of a markovization of the tree from Figure 24 with v = 1 and h = 2. Here, the superscript is the vertical context and the subscript the horizontal context of the new non-terminal X. Note that in this example we have disregarded the fan-out of the context categories. The VP, for instance, is actually a VP 2 because it has fan-out 2. For the context symbols, one can either use the categories from the original treebank (without fan-out) or the ones from the LCFRS rules (with fan-out). We chose the latter approach because it delivered better parsing results.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 672, |
| "end": 681, |
| "text": "Figure 25", |
| "ref_id": null |
| }, |
| { |
| "start": 735, |
| "end": 744, |
| "text": "Figure 24", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Markovization.", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "A i . . . A m (for some 1 \u2264 i \u2264 m), we add the first h elements of A i , A i\u22121 , . . . , A 0 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Markovization.", |
| "sec_num": "5.3.1" |
| }, |
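A sketch of how such a markovized label could be assembled (the label format here is illustrative; rparse's internal encoding may differ):

```python
# Sketch of markovized binarization labels: the single new
# non-terminal X carries the first v vertical ancestors as a
# superscript and the first h horizontal daughters as a subscript.

def markov_label(vertical, remaining_rhs, v=1, h=2):
    vert = ",".join(vertical[:v])        # ancestor labels, nearest first
    horiz = ",".join(remaining_rhs[:h])  # A_i, A_{i-1}, ... (first h)
    return "X^{%s}_{%s}" % (vert, horiz)

# New non-terminal covering VMFIN and VP_2 under an S node (v=1, h=2)
print(markov_label(["S"], ["VMFIN", "VP_2"]))
# -> X^{S}_{VMFIN,VP_2}
```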
| { |
| "text": "Grammar annotation (i.e., manual enhancement of annotation information through category splitting) has previously been successfully used in parsing German (Versley 2005) . In order to see if such modifications can have a beneficial effect in PLCFRS parsing as well, we perform different category splits on the (unbinarized) NeGra constituency data.", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 169, |
| "text": "(Versley 2005)", |
| "ref_id": "BIBREF63" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Further Category Splitting.", |
| "sec_num": "5.3.2" |
| }, |
| { |
| "text": "We split the category S (\"sentence\") into SRC (\"relative clause\") and S (all other categories S). Relative clauses mostly occur in a very specific context, namely, as the ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Further Category Splitting.", |
| "sec_num": "5.3.2" |
| }, |
| { |
| "text": "Sample markovization with v = 1, h = 2. right part of an NP or a PP. This splitting should therefore speed up parsing and increase precision. Furthermore, we distinguish NPs by their case. More precisely, to all nodes with categories N, we append the grammatical function label to the category label. We finally experiment with the combination of both splits.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 25", |
| "sec_num": null |
| }, |
| { |
| "text": "Our data source is the NeGra treebank (Skut et al. 1997) . We create two different data sets for constituency parsing. For the first one, we start out with the unmodified NeGra treebank and remove all sentences with a length of more than 30 words. We pre-process the treebank following common practice (K\u00fcbler and Penn 2008) , attaching all nodes which are attached to the virtual root node to nodes within the tree such that, ideally, no new crossing edges are created. In a second pass, we attach punctuation which comes in pairs (parentheses, quotation marks) to the same nodes. For the second data set we create a copy of the pre-processed first data set, in which we apply the usual tree transformations for NeGra PCFG parsing (i.e., moving nodes to higher positions until all crossing branches are resolved). The first 90% of both data sets are used as the training set and the remaining 10% as test set. The first data set is called NeGra LCFRS and the second is called NeGra CFG . Table 1 lists some properties of the training and test (respectively, gold) parts of NeGra LCFRS , namely, the total number of sentences, the average sentence length, the average tree height (the height of a tree being the length of the longest of all paths from the terminals to the root node), and the average number of children per node (excluding terminals). Furthermore, gap degrees (i.e., the number of gaps in the spans of non-terminal nodes) are listed (Maier and Lichte 2011) .", |
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 56, |
| "text": "(Skut et al. 1997)", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 302, |
| "end": 324, |
| "text": "(K\u00fcbler and Penn 2008)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1450, |
| "end": 1473, |
| "text": "(Maier and Lichte 2011)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 989, |
| "end": 996, |
| "text": "Table 1", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "6.1" |
| }, |
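The gap-degree statistic can be computed directly from the terminal yield of each node; a minimal sketch of our own, following the definition above (number of gaps in the span of a node):

```python
# Sketch: the gap degree of a node is the number of gaps in its sorted
# terminal yield, i.e., the number of places where consecutive yield
# positions are not adjacent.

def gap_degree(yield_positions):
    pos = sorted(yield_positions)
    return sum(1 for a, b in zip(pos, pos[1:]) if b > a + 1)

print(gap_degree([0, 2, 3]))  # position 1 missing -> 1 gap
print(gap_degree([0, 1, 2]))  # contiguous yield  -> 0 gaps
```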
| { |
| "text": "Our findings correspond to those of Maier and Lichte except for small differences due to the fact that, unlike us, they removed the punctuation from the trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We have implemented the CYK parser described in the previous section in a system called rparse. The implementation is realized in Java. 3 ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parser Implementation", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "For the evaluation of the constituency parses, we use an EVALB-style metric. For a tree over a string w, a single constituency is represented by a tuple A, \u03c1 with A being a node label and \u03c1 \u2208 (Pos(w) \u00d7 Pos(w)) dim (A) . We compute precision, recall, and F 1 based on these tuples from gold and de-binarized parsed test data from which all category splits have been removed. This metric is equivalent to the corresponding PCFG metric for dim(A) = 1. Despite the shortcomings of such a measure (Rehbein and van Genabith 2007) , it still allows to some extent a comparison to previous work in PCFG parsing (see also Section 7). Note that we provide the parser with gold POS tags in all experiments.", |
| "cite_spans": [ |
| { |
| "start": 192, |
| "end": 199, |
| "text": "(Pos(w)", |
| "ref_id": null |
| }, |
| { |
| "start": 214, |
| "end": 217, |
| "text": "(A)", |
| "ref_id": null |
| }, |
| { |
| "start": 492, |
| "end": 523, |
| "text": "(Rehbein and van Genabith 2007)", |
| "ref_id": "BIBREF53" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "6.3" |
| }, |
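A minimal sketch of this metric (our own illustration, not the authors' evaluation code): constituents are (label, spans) tuples, where spans is the tuple of component ranges, and precision, recall, and F_1 are computed on their multisets:

```python
# EVALB-style scoring for discontinuous constituents: a constituent is
# (label, spans); matches are counted on multisets of such tuples.

from collections import Counter

def prf(gold, parsed):
    g, p = Counter(gold), Counter(parsed)
    match = sum((g & p).values())          # multiset intersection
    prec = match / sum(p.values())
    rec = match / sum(g.values())
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = [("S", ((0, 4),)), ("VP", ((0, 1), (2, 4)))]
parsed = [("S", ((0, 4),)), ("VP", ((0, 1), (2, 3)))]
print(prf(gold, parsed))  # (0.5, 0.5, 0.5)
```

For dim(A) = 1 every spans tuple has a single range, and this reduces to the usual PCFG bracket scoring.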
| { |
| "text": "We use the markovization settings v = 1 and h = 2 for all further experiments. The setting which has been reported to yield the best results for PCFG parsing of NeGra, v = 2 and h = 1 (Rafferty and Manning 2008) , required a parsing time which was too high. 4 Table 2 contains the parsing results for NeGra LCFRS using five different binarizations: Head-driven and KM are the two head-outward binarizations that use a head chosen on linguistic grounds (described in Section 5.2); L-to-R is another variant in which we always choose the rightmost daughter of a node as its head. 5 Optimal reorders the left-hand side such that the fan-out of the binarized rules is optimized (described in Section 3.1.2). Finally, we also try a deterministic binarization (Deterministic) in which we binarize strictly from left to right (i.e., we do not reorder the right-hand sides of productions, and choose unique binarization labels).", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 211, |
| "text": "(Rafferty and Manning 2008)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 258, |
| "end": 259, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 578, |
| "end": 579, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Markovization and Binarization", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "The results of the head-driven binarizations and the optimal binarization lie close together; the results for the deterministic binarization are worse. This indicates that the presence or absence of markovization has more impact on parsing results than the actual binarization order. Furthermore, the non-optimal binarizations did not yield a binarized grammar of a higher fan-out than the optimal binarization: For all five binarizations, the fan-out was 7 (caused by a VP interrupted by punctuation). The different binarizations result in different numbers of items, and therefore allow for different parsing speeds. The respective leftmost graph in Figures 26 and 27 show a visual representation of the number of items produced by all binarizations, and the corresponding parsing times. Note that when choosing the head with head rules the number of items is almost not affected by the choice of adding first the children to the left of the head and then to the right of the head or vice versa. The optimal binarization produces the best results. Therefore we will use it in all further experiments, in spite of its higher parsing time. Table 3 presents the constituency parsing results for NeGra LCFRS and NeGra CFG , both with and without the different category splits. Recall that NeGra LCFRS has crossing branches and consequently leads to a PLCFRS of fan-out > 1 whereas NeGra CFG does not contain crossing branches and consequently leads to a 1-PLCFRS-in other words, ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 652, |
| "end": 669, |
| "text": "Figures 26 and 27", |
| "ref_id": null |
| }, |
| { |
| "start": 1140, |
| "end": 1147, |
| "text": "Table 3", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Markovization and Binarization", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "NeGra LCFRS : Parsing times for PLCFRS parsing (left-to-right): binarizations, baseline and category splits, and estimates (log scale). a PCFG. We evaluate the parser output against the unmodified gold data; that is, before we evaluate the experiments with category splits, we replace all split labels in the parser output with the corresponding original labels. We take a closer look at the properties of the trees in the parser output for NeGra LCFRS . Twenty-nine sentences had no parse, therefore, the parser output has 1,804 sentences. The average tree height is 4.72, and the average number of children per node (excluding terminals) is 2.91. These values are almost identical to the values for the gold data. As for the gap degree, we get 1,401 sentences with no gaps (1,361 in the gold set), 334 with gap degree 1 (387 in the gold set), and 69 with 2 or 3 gaps (85 in the gold set). Even though the difference is only small, one can see that fewer gaps are preferred. This is not surprising, since constituents with many gaps are rare events and therefore end up with a probability which is too low.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 27", |
| "sec_num": null |
| }, |
| { |
| "text": "We see that the quality of the PLCFRS parser output on NeGra LCFRS (which contains more information than the output of a PCFG parser) does not lag far behind the quality of the PCFG parsing results on NeGra CFG . With respect to the category splits, the results show furthermore that category splitting is indeed beneficial for the quality of the PLCFRS parser output. The gains in speed are particularly visible for sentences with a length greater than 20 words (cf. the number of produced items and parsing times in Figures 26 and 27 [middle] ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 518, |
| "end": 544, |
| "text": "Figures 26 and 27 [middle]", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 27", |
| "sec_num": null |
| }, |
| { |
| "text": "We compare the parser performance without estimates (OFF) with its performance with the estimates described in Sections 4.3 (LR) and 4.4 (LN).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluating Outside Estimates", |
| "sec_num": "6.6" |
| }, |
| { |
| "text": "Unfortunately, the full estimates seem to be only of theoretical interest because they were too expensive to compute both in terms of time and space, given the restrictions imposed by our hardware. We could, however, compute the LN and the LR estimate. Unlike the LN estimate, which allows for true A * parsing, the LR estimate lets the quality of the parsing results deteriorate: Compared with the baseline, labeled F 1 drops from 74.90 to 73.76 and unlabeled F 1 drops from 77.91 to 76.89. The respective rightmost graphs in Figures 26 and 27 show the average number of items produced by the parser and the parsing times for different sentence lengths. The results indicate that the estimates have the desired effect of preventing unnecessary items from being produced. This is reflected in a significantly lower parsing time.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 527, |
| "end": 544, |
| "text": "Figures 26 and 27", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluating Outside Estimates", |
| "sec_num": "6.6" |
| }, |
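The effect of an admissible outside estimate can be illustrated with a generic weighted-deduction-style A* loop (a shortest-path analogue of our own, not the authors' parser): items leave the agenda ordered by their cost plus the estimate, so with an admissible estimate the first goal item popped is guaranteed optimal and unpromising items are never expanded:

```python
# Sketch of A* search over a weighted item space: the agenda is a
# priority queue keyed by cost-so-far plus an admissible estimate of
# the remaining cost (the analogue of the outside estimate).

import heapq

def astar(start, goal, neighbors, estimate):
    agenda = [(estimate(start), 0.0, start)]
    best = {start: 0.0}
    while agenda:
        _, cost, item = heapq.heappop(agenda)
        if item == goal:
            return cost            # admissible estimate: optimal
        for nxt, w in neighbors(item):
            c = cost + w
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(agenda, (c + estimate(nxt), c, nxt))
    return None

# Toy item space: "a" -> "c" directly costs 4, via "b" costs 2
graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.0)], "c": []}
trivial = lambda x: 0.0            # zero estimate is always admissible
print(astar("a", "c", lambda n: graph[n], trivial))  # 2.0
```

A non-admissible estimate would still speed up the search but, as with the LR estimate above, could return a suboptimal result.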
| { |
| "text": "The different behavior of the LR and the LN estimate raises the question of the trade-off between maintaining optimality and obtaining a higher parsing speed. In other words, it raises the question of whether techniques such as pruning or coarseto-fine parsing (Charniak et al. 2006) would probably be superior to A * parsing. A first implementation of a coarse-to-fine approach has been presented by van Cranenburgh (2012). He generates a CFG from the treebank PLCFRS, based on the idea of Barth\u00e9lemy et al. (2001) . This grammar, which can be seen as a coarser version of the actual PLCFRS, is then used for pruning of the search space. The problem that van Cranenburgh tackles is specific to PLCFRS: His PCFG stage generalizes over the distinction of labels by their fan-out. The merit of his work is an enormous increase in efficiency: Sentences with a length of up to 40 words can now be parsed in a reasonable time. For a comparison of the results of van Cranenburgh (2012) with our work, the same version of evaluation parameters would have to be used. The applicability and effectiveness of other coarseto-fine approaches (Charniak et al. 2006; Petrov and Klein 2007) on PLCFRS remain to be seen.", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 283, |
| "text": "(Charniak et al. 2006)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 491, |
| "end": 515, |
| "text": "Barth\u00e9lemy et al. (2001)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1130, |
| "end": 1152, |
| "text": "(Charniak et al. 2006;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1153, |
| "end": 1175, |
| "text": "Petrov and Klein 2007)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluating Outside Estimates", |
| "sec_num": "6.6" |
| }, |
| { |
| "text": "Comparing our results with results from the literature is a difficult endeavor, because PLCFRS parsing of NeGra is an entirely new task that has no direct equivalent in previous work. In particular, it is a harder task than PCFG parsing. What we can provide in this section is a comparison of the performance of our parser on NeGra CFG to the performance of previously presented PCFG parsers on the same data set and an overview on previous work on parsing which aims at reconstructing crossing branches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to Other Approaches", |
| "sec_num": "7." |
| }, |
| { |
| "text": "For the comparison of the performance of our parser on NeGra CFG , we have performed experiments with Helmut Schmid's LoPar (Schmid 2000) and with the Stanford Parser (Klein and Manning 2003c) on NeGra CFG . 6 For the experiments both parsers were provided with gold POS tags. Recall that our parser produced labeled precision, recall, and F 1 of 76.32, 76.46, and 76.34, respectively. The plain PCFG provided by LoPar delivers lower results (LP 72.86, LR 74.43 , and LF 1 73.63). The Stanford Parser results (markovization setting v = 2, h = 1 [Rafferty and Manning 2008] , otherwise default parameters) lie in the vicinity of the results of our parser (LP 74.27, LR 76.19 , LF 1 75.45). Although the results for LoPar are no surprise, given the similarity of the models implemented by our parser and the Stanford parser, it remains to be investigated why the lexicalization component of the Stanford parser does not lead to better results. In any case the comparison shows that on a data set without crossing branches, our parser obtains the results one would expect. A further data set to which we can provide a comparison is the PaGe workshop experimental data (K\u00fcbler and Penn 2008). 7 Table 4 lists the results of some of the papers in K\u00fcbler and Penn (2008) on TIGER, namely, for Petrov and Klein (2008) Hall and Nivre (2008) (H&N) , who use a dependency-based approach (see next paragraph). The comparison again shows that our system produces good results. Again the performance gap between the Stanford parser and our parser warrants further investigation. As for the work that aims to create crossing branches, Plaehn (2004) obtains 73.16 Labeled F 1 using Probabilistic Discontinuous Phrase Structure Grammar (DPSG), albeit only on sentences with a length of up to 15 words. On those sentences, we obtain 83.97. The crucial difference between DPSG rules and LCFRS rules is that the former explicitly specify the material that can occur in gaps whereas LCFRS does not. 
Levy (2005) , like us, proposes to use LCFRS but does not provide any evaluation results of his work. Very recently, Evang and Kallmeyer (2011) followed up on our work. They transform the Penn Treebank such that the trace nodes and co-indexations are converted into crossing branches and parse them with the parser presented in this article, obtaining promising results. Furthermore, van Cranenburgh, Scha, and Sangati (2011) and van Cranenburgh (2012) have also followed up on our work, introducing an integration of our approach with Data-Oriented Parsing (DOP). The former article introduces an LCFRS adaption of Goodman's PCFG-DOP (Goodman 2003) . For their evaluation, the authors use the same data as we do in Maier (2010) , and obtain an improvement of roughly 1.5 points F-measure. They are also confronted with the same efficiency issues, however, and encounter a bottleneck in terms of parsing time. In van Cranenburgh (2012), a coarseto-fine approach is presented (see Section 6.6). With this approach much faster parsing is made possible and sentences with a length of up to 40 words can be parsed. The cost of the speed, however, is that the results lie well below the baseline results for standard PLCFRS parsing.", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 137, |
| "text": "(Schmid 2000)", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 167, |
| "end": 192, |
| "text": "(Klein and Manning 2003c)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 442, |
| "end": 452, |
| "text": "(LP 72.86,", |
| "ref_id": null |
| }, |
| { |
| "start": 453, |
| "end": 461, |
| "text": "LR 74.43", |
| "ref_id": null |
| }, |
| { |
| "start": 545, |
| "end": 572, |
| "text": "[Rafferty and Manning 2008]", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 654, |
| "end": 664, |
| "text": "(LP 74.27,", |
| "ref_id": null |
| }, |
| { |
| "start": 665, |
| "end": 673, |
| "text": "LR 76.19", |
| "ref_id": null |
| }, |
| { |
| "start": 1165, |
| "end": 1190, |
| "text": "(K\u00fcbler and Penn 2008). 7", |
| "ref_id": null |
| }, |
| { |
| "start": 1242, |
| "end": 1264, |
| "text": "K\u00fcbler and Penn (2008)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1287, |
| "end": 1310, |
| "text": "Petrov and Klein (2008)", |
| "ref_id": null |
| }, |
| { |
| "start": 1311, |
| "end": 1332, |
| "text": "Hall and Nivre (2008)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1621, |
| "end": 1634, |
| "text": "Plaehn (2004)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 1979, |
| "end": 1990, |
| "text": "Levy (2005)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 2614, |
| "end": 2628, |
| "text": "(Goodman 2003)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 2695, |
| "end": 2707, |
| "text": "Maier (2010)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1191, |
| "end": 1198, |
| "text": "Table 4", |
| "ref_id": "TABREF11" |
| }, |
| { |
| "start": 1333, |
| "end": 1338, |
| "text": "(H&N)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison to Other Approaches", |
| "sec_num": "7." |
| }, |
| { |
| "text": "A comparison with non-projective dependency parsers (McDonald et al. 2005; Nivre et al. 2007 ) might be interesting as well, given that non-projectivity is the dependency-counterpart to discontinuity in constituency parsing. A meaningful comparison is difficult to do for the following reasons, however. Firstly, dependency parsing deals with relations between words, whereas in our case words are not considered in the parsing task. Our grammars take POS tags for a given and construct syntactic trees. Also, dependency conversion algorithms generally depend on the correct identification of linguistic head words (Lin 1995) . We cannot rely on grammatical function labels, such as, for example, Boyd and Meurers (2008) . Therefore we would have to use heuristics for the dependency conversion of the parser output. This would introduce additional noise. Secondly, the resources one obtains from our PLCFRS parser and from dependency parsers (the probabilistic LCFRS and the trained dependency parser) are quite different because the former contains non-lexicalized internal phrase structure identifying meaningful syntactic categories such as VP or NP while the latter is only concerned with relations between lexical items. A comparison would concentrate only on relations between lexical items and the rich phrase structure provided by a constituency parser would not be taken into account. To achieve some comparison, one could of course transform the discontinuous constituency trees into dependency trees with dependencies between heads and with edge labels that encode enough of the syntactic structure to retrieve the original constituency tree (Hall and Nivre 2008) . The result could then be used for a dependency evaluation. It is not clear what is to gain by this evaluation because the head-to-head dependencies one would obtain are not necessarily the predicateargument dependencies one would aim at when doing direct dependency parsing (Rambow 2010 ). 8", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 74, |
| "text": "(McDonald et al. 2005;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 75, |
| "end": 92, |
| "text": "Nivre et al. 2007", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 615, |
| "end": 625, |
| "text": "(Lin 1995)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 697, |
| "end": 720, |
| "text": "Boyd and Meurers (2008)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1654, |
| "end": 1675, |
| "text": "(Hall and Nivre 2008)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1952, |
| "end": 1964, |
| "text": "(Rambow 2010", |
| "ref_id": "BIBREF52" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison to Other Approaches", |
| "sec_num": "7." |
| }, |
| { |
| "text": "We have presented the first efficient implementation of a weighted deductive CYK parser for Probabilistic Linear Context-Free Rewriting Systems (PLCFRS), showing that LCFRS indeed allows for data-driven parsing while modeling discontinuities in a straightforward way. To speed up parsing, we have introduced different contextsummary estimates of parse items, some acting as figures-of-merit, others allowing for A * parsing. We have implemented the parser and we have evaluated it with grammars extracted from the German NeGra treebank. Our experiments show that data-driven LCFRS parsing is feasible and yields output of competitive quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "There are three main directions for future work on this subject.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "r On the symbolic side, LCFRS seems to offer more power than necessary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "By removing symbolic expressivity, a lower parsing complexity can be achieved. One possibility is to disallow the use of so-called ill-nested LCFRS rules. These are rules where, roughly, the spans of two right-hand side non-terminals interleave in a cross-serial way. See the parsing algorithm in G\u00f3mez-Rodr\u00edguez, Kuhlmann, and Satta (2010) . Nevertheless, this seems to be too restrictive for linguistic modeling (Chen-Main and Joshi 2010; Maier and Lichte 2011). Our goal for future work is therefore to define reduced forms of ill-nested rules with which we get a lower parsing complexity. Another possibility is to reduce the fan-out of the extracted grammar. We have pursued the question whether the fan-out of the trees in the treebank can be reduced in a linguistically meaningful way in Maier, Kaeshammer, and Kallmeyer (2012) .", |
| "cite_spans": [ |
| { |
| "start": 314, |
| "end": 340, |
| "text": "Kuhlmann, and Satta (2010)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 795, |
| "end": 834, |
| "text": "Maier, Kaeshammer, and Kallmeyer (2012)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "r On the side of the probabilistic model, there are certain independence assumptions made in our model that are too strong. The main problem in respect is that, due to the definition of LCFRS, we have to distinguish between occurrences of the same category with different fan-outs. For instance, VP 1 (no gaps), VP 2 (one gap), and so on, are different non-terminals. Consequently, the way they expand are considered independent from each other. This is of course not true, however. Furthermore, some of these non-terminals are rather rare; we therefore have a sparse data problem here. This leads to the idea to separate the development of a category (independent from its fan-out) and the fan-out and position of gaps. We plan to integrate this into our probabilistic model in future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "r Last, it is clear that a more informative evaluation of the parser output is still necessary, particularly with respect to its performance at the task of finding long distance dependencies and with respect to its behavior when not provided with gold POS tags.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8." |
| }, |
| { |
| "text": "Note that just asKlein and Manning (2003a), we use the terms inside score and outside score to denote the Viterbi inside and outside scores. They are not to be confused with the actual inside or outside probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "rparse is available under the GNU General Public License 2.0 at http://www.phil.hhu.de/rparse.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Older versions of rparse contained a bug that kept the priority queue from being updated correctly (i.e., during an update, the corresponding node in the priority queue was not moved to its top, and therefore the best parse was not guaranteed to be found); however, higher parsing speeds were achieved. The current version of rparse implements the update operation correctly, using a Fibonacci queue to ensure efficiency(Cormen et al. 2003). Thanks to Andreas van Cranenburgh for pointing this out. 5 The term head is not used in its proper linguistic sense here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We have obtained the former parser from http://www.ims.uni-stuttgart.de/tcl/SOFTWARE/ LoPar.html and the latter (Version 2.0.1) from http://nlp.stanford.edu/software/lex-parser.shtml. 7 Thanks to Sandra K\u00fcbler for providing us with the experimental data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A way to overcome this difference in the content of the dependency annotation would be to use an evaluation along the lines ofTsarfaty, Nivre, and Andersson (2011); this is not available yet for annotations with crossing branches, however.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We are particularly grateful to Giorgio Satta for extensive discussions of the details of the probabilistic treebank model presented in this paper. Furthermore, we owe a debt to Kilian Evang who participated in the implementation of the parser. Thanks to Andreas van Cranenburgh for helpful feedback on the parser implementation. Finally, we are grateful to our three anonymous reviewers for many valuable and helpful comments and suggestions. A part of the work on this paper was funded by the German Research Foundation DFG (Deutsche Forschungsgemeinschaft) in the form of an Emmy Noether Grant and a subsequent DFG research project.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Guided parsing of range concatenation languages", |
| "authors": [ |
| { |
| "first": "Fran\u00e7ois", |
| "middle": [], |
| "last": "Barth\u00e9lemy", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Boullier", |
| "suffix": "" |
| }, |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Deschamp", |
| "suffix": "" |
| }, |
| { |
| "first": "Clergerie", |
| "middle": [], |
| "last": "And\u00e9ric Villemonte De La", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "42--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barth\u00e9lemy, Fran\u00e7ois, Pierre Boullier, Philippe Deschamp, and\u00c9ric Villemonte de la Clergerie. 2001. Guided parsing of range concatenation languages. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pages 42-49, Toulouse.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Long-distance scrambling and tree-adjoining grammars", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Becker", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [ |
| "K" |
| ], |
| "last": "Tilman", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the Fifth Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "21--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Becker, Tilman, Aravind K. Joshi, and Owen Rambow. 1991. Long-distance scrambling and tree-adjoining grammars. In Proceedings of the Fifth Conference of the European Chapter of the Association for Computational Linguistics, pages 21-26, Berlin.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A generalization of mildly context-sensitive formalisms", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Boullier", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the Fourth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+4)", |
| "volume": "", |
| "issue": "", |
| "pages": "17--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boullier, Pierre. 1998a. A generalization of mildly context-sensitive formalisms. In Proceedings of the Fourth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+4), pages 17-20, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A proposal for a natural language processing syntactic backbone", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Boullier", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "INRIA", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boullier, Pierre. 1998b. A proposal for a natural language processing syntactic backbone. Technical Report 3342, INRIA, Roquencourt.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Range concatenation grammars", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Boullier", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the Sixth International Workshop on Parsing Technologies (IWPT2000)", |
| "volume": "", |
| "issue": "", |
| "pages": "53--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boullier, Pierre. 2000. Range concatenation grammars. In Proceedings of the Sixth International Workshop on Parsing Technologies (IWPT2000), pages 53-64, Trento.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Discontinuity revisited: An improved conversion to context-free representations", |
| "authors": [ |
| { |
| "first": "Adriane", |
| "middle": [], |
| "last": "Boyd", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "the Linguistic Annotation Workshop at ACL 2007", |
| "volume": "", |
| "issue": "", |
| "pages": "41--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boyd, Adriane. 2007. Discontinuity revisited: An improved conversion to context-free representations. In the Linguistic Annotation Workshop at ACL 2007, pages 41-44, Prague.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Revisiting the impact of different annotation schemes on PCFG parsing: A grammatical dependency evaluation", |
| "authors": [ |
| { |
| "first": "Adriane", |
| "middle": [], |
| "last": "Boyd", |
| "suffix": "" |
| }, |
| { |
| "first": "Detmar", |
| "middle": [], |
| "last": "Meurers", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Workshop on Parsing German at ACL 2008", |
| "volume": "", |
| "issue": "", |
| "pages": "24--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boyd, Adriane and Detmar Meurers. 2008. Revisiting the impact of different annotation schemes on PCFG parsing: A grammatical dependency evaluation. In Proceedings of the Workshop on Parsing German at ACL 2008, pages 24-32, Columbus, OH.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The TIGER Treebank", |
| "authors": [ |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Brants", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Dipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvia", |
| "middle": [], |
| "last": "Hansen", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Lezius", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 1st Workshop on Treebanks and Linguistic Theories", |
| "volume": "", |
| "issue": "", |
| "pages": "24--42", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brants, Sabine, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER Treebank. In Proceedings of the 1st Workshop on Treebanks and Linguistic Theories, pages 24-42, Sozopol.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Formal tools for describing and processing discontinuous constituency structure", |
| "authors": [ |
| { |
| "first": "Harry", |
| "middle": [], |
| "last": "Bunt", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "of Natural Language Processing", |
| "volume": "6", |
| "issue": "", |
| "pages": "63--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bunt, Harry. 1996. Formal tools for describing and processing discontinuous constituency structure. In Harry Bunt and Arthur van Horck, editors, Discontinuous Constituency, volume 6 of Natural Language Processing. Mouton de Gruyter, Berlin, pages 63-83.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "New figures of merit for best-first probabilistic chart parsing", |
| "authors": [ |
| { |
| "first": "Sharon", |
| "middle": [ |
| "A" |
| ], |
| "last": "Caraballo", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "24", |
| "issue": "", |
| "pages": "275--298", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cai, Shu, David Chiang, and Yoav Goldberg. 2011. Language-independent parsing with empty elements. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 212-216, Portland, OR. Candito, Marie and Djam\u00e9 Seddah. 2010. Parsing word clusters. In Proceedings of the First Workshop on Statistical Parsing of Morphologically-Rich Languages at NAACL HLT 2010, pages 76-84, Los Angeles, CA. Caraballo, Sharon A. and Eugene Charniak. 1998. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24(2):275-298.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Multilevel coarse-to-fine PCFG parsing", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Austerweil", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Ellis", |
| "suffix": "" |
| }, |
| { |
| "first": "Isaac", |
| "middle": [], |
| "last": "Haxton", |
| "suffix": "" |
| }, |
| { |
| "first": "Catherine", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Shrivaths", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeremy", |
| "middle": [], |
| "last": "Moore", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Pozar", |
| "suffix": "" |
| }, |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Vu", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "168--175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charniak, Eugene, Mark Johnson, Micha Elsner, Joseph Austerweil, David Ellis, Isaac Haxton, Catherine Hill, R. Shrivaths, Jeremy Moore, Michael Pozar, and Theresa Vu. 2006. Multilevel coarse-to-fine PCFG parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 168-175, New York, NY.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Unavoidable ill-nestedness in natural language and the adequacy of tree local-MCTAG induced dependency structures", |
| "authors": [ |
| { |
| "first": "Joan", |
| "middle": [], |
| "last": "Chen-Main", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Tenth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+10)", |
| "volume": "", |
| "issue": "", |
| "pages": "119--126", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen-Main, Joan and Aravind Joshi. 2010. Unavoidable ill-nestedness in natural language and the adequacy of tree local-MCTAG induced dependency structures. In Proceedings of the Tenth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+10), pages 119-126, New Haven, CT.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Statistical parsing with an automatically extracted tree adjoining grammar", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Data-Oriented Parsing", |
| "volume": "", |
| "issue": "", |
| "pages": "299--316", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chiang, David. 2003. Statistical parsing with an automatically extracted tree adjoining grammar. In Rens Bod, Remko Scha, and Khalil Sima'an, editors, Data-Oriented Parsing, CSLI Studies in Computational Linguistics. CSLI Publications, Stanford, CA, pages 299-316.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Head-Driven Statistical Models for Natural Language Parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collins, Michael. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Introduction to Algorithms", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "H" |
| ], |
| "last": "Cormen", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "E" |
| ], |
| "last": "Leiserson", |
| "suffix": "" |
| }, |
| { |
| "first": "Ronald", |
| "middle": [ |
| "L" |
| ], |
| "last": "Rivest", |
| "suffix": "" |
| }, |
| { |
| "first": "Clifford", |
| "middle": [], |
| "last": "Stein", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cormen, Thomas H., Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. 2003. Introduction to Algorithms. MIT Press, Cambridge, 2nd edition.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Statistical Parsing with Non-local Dependencies", |
| "authors": [ |
| { |
| "first": "P\u00e9ter", |
| "middle": [], |
| "last": "Dienes", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dienes, P\u00e9ter. 2003. Statistical Parsing with Non-local Dependencies. Ph.D. thesis, Saarland University.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Optimal reduction of rule length in linear context-free rewriting systems", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "G\u00f3mez-Rodr\u00edguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Weir", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "539--547", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Evang, Kilian and Laura Kallmeyer. 2011. PLCFRS parsing of English discontinuous constituents. In Proceedings of the 12th International Conference on Parsing Technologies, pages 104-116, Dublin. G\u00f3mez-Rodr\u00edguez, Carlos, Marco Kuhlmann, and Giorgio Satta. 2010. Efficient parsing of well-nested linear context-free rewriting systems. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 276-284, Los Angeles, CA. G\u00f3mez-Rodr\u00edguez, Carlos, Marco Kuhlmann, Giorgio Satta, and David Weir. 2009. Optimal reduction of rule length in linear context-free rewriting systems. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 539-547, Boulder, CO.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Efficient parsing of DOP with PCFG-reductions", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Data-Oriented Parsing", |
| "volume": "", |
| "issue": "", |
| "pages": "125--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Goodman, Joshua. 2003. Efficient parsing of DOP with PCFG-reductions. In Rens Bod, Remko Scha, and Khalil Sima'an, editors, Data-Oriented Parsing, CSLI Studies in Computational Linguistics. CSLI Publications, Stanford, CA, pages 125-146.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A dependency-driven parser for German dependency and constituency representations", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Workshop on Parsing German at ACL 2008", |
| "volume": "", |
| "issue": "", |
| "pages": "47--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hall, Johan and Joakim Nivre. 2008. A dependency-driven parser for German dependency and constituency representations. In Proceedings of the Workshop on Parsing German at ACL 2008, pages 47-54, Columbus, OH.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Bracketing guidelines for Penn Korean TreeBank", |
| "authors": [ |
| { |
| "first": "Chung-Hye", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Na-Rae", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Eon-Suk", |
| "middle": [], |
| "last": "Ko", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Han, Chung-hye, Na-Rae Han, and Eon-Suk Ko. 2001. Bracketing guidelines for Penn Korean TreeBank. Technical Report 01-10, IRCS, University of Pennsylvania, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Data and models for Statistical Parsing with Combinatory Categorial Grammar", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hockenmaier, Julia. 2003. Data and models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Integrating \"free\" word order syntax and information structure", |
| "authors": [ |
| { |
| "first": "Beryl", |
| "middle": [], |
| "last": "Hoffman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Seventh Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "245--251", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hoffman, Beryl. 1995. Integrating \"free\" word order syntax and information structure. In Seventh Conference of the European Chapter of the Association for Computational Linguistics, pages 245-251, Dublin.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Der Begriff \"Mittelfeld\"-Anmerkungen\u00fcber die Theorie der topologischen Felder", |
| "authors": [ |
| { |
| "first": "Tilman", |
| "middle": [], |
| "last": "H\u00f6hle", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Akten des Siebten Internationalen Germanistenkongresses", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H\u00f6hle, Tilman. 1986. Der Begriff \"Mittelfeld\"-Anmerkungen\u00fcber die Theorie der topologischen Felder. In Akten des Siebten Internationalen Germanistenkongresses 1985, G\u00f6ttingen, Germany.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "A simple pattern-matching algorithm for recovering empty nodes and their antecedents", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "136--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johnson, Mark. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 136-143, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Data-driven parsing with probabilistic linear context-free rewriting systems", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Kallmeyer", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Springer", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Berlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Kallmeyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Eighth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+8)", |
| "volume": "", |
| "issue": "", |
| "pages": "57--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kallmeyer, Laura. 2010. Parsing Beyond Context-Free Grammars. Springer, Berlin. Kallmeyer, Laura and Wolfgang Maier. 2010. Data-driven parsing with probabilistic linear context-free rewriting systems. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 537-545, Beijing. Kato, Yuki, Hiroyuki Seki, and Tadao Kasami. 2006. Stochastic multiple context-free grammar for RNA pseudoknot modeling. In Proceedings of the Eighth International Workshop on Tree Adjoining Grammar and Related Formalisms (TAG+8), pages 57-64, Sydney.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "A* Parsing: Fast exact viterbi parse selection", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "40--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Klein, Dan and Christopher D. Manning. 2003a. A* Parsing: Fast exact viterbi parse selection. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 40-47, Edmonton.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Accurate unlexicalized parsing", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "423--430", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Klein, Dan and Christopher D. Manning. 2003b. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423-430, Sapporo.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Fast exact inference with a factored model for natural language parsing", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Advances in Neural Information Processing Systems 15 (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "3--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Klein, Dan and Christopher D. Manning. 2003c. Fast exact inference with a factored model for natural language parsing. In Advances in Neural Information Processing Systems 15 (NIPS), pages 3-10, Vancouver. Kracht, Marcus. 2003. The Mathematics of Language. Number 63 in Studies in Generative Grammar. Mouton de Gruyter, Berlin.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "How do treebank annotation schemes influence parsing results? Or how not to compare apples and oranges", |
| "authors": [ |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Recent Advances in Natural Language Processing 2005 (RANLP 2005)", |
| "volume": "", |
| "issue": "", |
| "pages": "293--300", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K\u00fcbler, Sandra. 2005. How do treebank annotation schemes influence parsing results? Or how not to compare apples and oranges. In Recent Advances in Natural Language Processing 2005 (RANLP 2005), pages 293-300, Borovets.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Proceedings of the Workshop on Parsing German at ACL", |
| "authors": [ |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerald", |
| "middle": [], |
| "last": "Penn", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K\u00fcbler, Sandra and Gerald Penn, editors. 2008. Proceedings of the Workshop on Parsing German at ACL 2008. Association for Computational Linguistics, Columbus, OH.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Treebank grammar techniques for non-projective dependency parsing", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "478--486", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kuhlmann, Marco and Giorgio Satta. 2009. Treebank grammar techniques for non-projective dependency parsing. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 478-486, Athens.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Probabilistic Models of Word Order and Syntactic Discontinuity", |
| "authors": [ |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Levy, Roger. 2005. Probabilistic Models of Word Order and Syntactic Discontinuity. Ph.D. thesis, Stanford University.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Deep dependencies from context-free statistical parsers: Correcting the surface dependency approximation", |
| "authors": [ |
| { |
| "first": "Roger", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", |
| "volume": "", |
| "issue": "", |
| "pages": "328--335", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Levy, Roger and Christopher D. Manning. 2004. Deep dependencies from context-free statistical parsers: Correcting the surface dependency approximation. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 328-335, Barcelona.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A dependency-based method for evaluating broad-coverage parsers", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI 95)", |
| "volume": "", |
| "issue": "", |
| "pages": "1420--1427", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Dekang. 1995. A dependency-based method for evaluating broad-coverage parsers. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI 95), pages 1420-1427, Montreal.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Direct parsing of discontinuous constituents in German", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the First Workshop on Statistical Parsing of Morphologically-Rich Languages at NAACL HLT 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "58--66", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maier, Wolfgang. 2010. Direct parsing of discontinuous constituents in German. In Proceedings of the First Workshop on Statistical Parsing of Morphologically-Rich Languages at NAACL HLT 2010, pages 58-66, Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Data-driven PLCFRS parsing revisited: Restricting the fan-out to two", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| }, |
| { |
| "first": "Miriam", |
| "middle": [], |
| "last": "Kaeshammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Kallmeyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eleventh International Conference on Tree Adjoining Grammars and Related Formalisms (TAG+11)", |
| "volume": "", |
| "issue": "", |
| "pages": "126--134", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maier, Wolfgang, Miriam Kaeshammer, and Laura Kallmeyer. 2012. Data-driven PLCFRS parsing revisited: Restricting the fan-out to two. In Proceedings of the Eleventh International Conference on Tree Adjoining Grammars and Related Formalisms (TAG+11), pages 126-134, Paris.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Discontinuity and non-projectivity: Using mildly context-sensitive formalisms for data-driven parsing", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| }, |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Kallmeyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the Tenth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+10)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maier, Wolfgang and Laura Kallmeyer. 2010. Discontinuity and non-projectivity: Using mildly context-sensitive formalisms for data-driven parsing. In Proceedings of the Tenth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+10), New Haven, CT.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Characterizing discontinuity in constituent treebanks", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| }, |
| { |
| "first": "Timm", |
| "middle": [], |
| "last": "Lichte", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Formal Grammar. 14th International Conference", |
| "volume": "5591", |
| "issue": "", |
| "pages": "167--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maier, Wolfgang and Timm Lichte. 2011. Characterizing discontinuity in constituent treebanks. In Formal Grammar. 14th International Conference, FG 2009. Bordeaux, France, July 25-26, 2009. Revised Selected Papers, volume 5591 of Lecture Notes in Artificial Intelligence, pages 167-182, Springer-Verlag, Berlin/Heidelberg/ New York.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Treebanks and mild context-sensitivity", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 13th Conference on Formal Grammar (FG-2008)", |
| "volume": "", |
| "issue": "", |
| "pages": "61--76", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maier, Wolfgang and Anders S\u00f8gaard. 2008. Treebanks and mild context-sensitivity. In Proceedings of the 13th Conference on Formal Grammar (FG-2008), pages 61-76, Hamburg.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "The Penn Treebank: Annotating predicate argument structure", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Grace", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Macintyre", |
| "suffix": "" |
| }, |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Bies", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Ferguson", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Katz", |
| "suffix": "" |
| }, |
| { |
| "first": "Britta", |
| "middle": [], |
| "last": "Schasberger", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the Human Language Technology Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "114--119", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcus, Mitchell, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The Penn Treebank: Annotating predicate argument structure. In Proceedings of the Human Language Technology Conference, pages 114-119.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Non-projective dependency parsing using spanning tree algorithms", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiril", |
| "middle": [], |
| "last": "Ribarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "523--530", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McDonald, Ryan, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 523-530, Vancouver.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "On Formal Properties of Minimalist Grammars", |
| "authors": [ |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Michaelis", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michaelis, Jens. 2001. On Formal Properties of Minimalist Grammars. Ph.D. thesis, Universit\u00e4t Potsdam.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Free word order, morphological case, and sympathy theory", |
| "authors": [ |
| { |
| "first": "Gereon", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Resolving Conflicts in Grammars: Optimality Theory in Syntax, Morphology, and Phonology", |
| "volume": "", |
| "issue": "", |
| "pages": "265--397", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M\u00fcller, Gereon. 2002. Free word order, morphological case, and sympathy theory. In Gisbert Fanselow and Caroline Fery, editors, Resolving Conflicts in Grammars: Optimality Theory in Syntax, Morphology, and Phonology. Buske Verlag, Hamburg, pages 265-397.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Continuous or discontinuous constituents? Research on Language & Computation", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "209--257", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M\u00fcller, Stefan. 2004. Continuous or discontinuous constituents? Research on Language & Computation, 2(2):209-257.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Weighted deductive parsing and knuth's algorithm", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [ |
| "-" |
| ], |
| "last": "Nederhof", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics", |
| "volume": "29", |
| "issue": "1", |
| "pages": "135--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nederhof, Mark-Jan. 2003. Weighted deductive parsing and knuth's algorithm. Computational Linguistics, 29(1):135-143.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "MaltParser: A language-independent system for data-driven dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Atanas", |
| "middle": [], |
| "last": "Chanev", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00fclsen", |
| "middle": [], |
| "last": "Eryigit", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Svetoslav", |
| "middle": [], |
| "last": "Marinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Erwin", |
| "middle": [], |
| "last": "Marsi", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Natural Language Engineering", |
| "volume": "13", |
| "issue": "2", |
| "pages": "95--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, Joakim, Johan Hall, Jens Nilsson, Atanas Chanev, G\u00fclsen Eryigit, Sandra K\u00fcbler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "BTB-TR05: BulTreebank Stylebook", |
| "authors": [ |
| { |
| "first": "Petya", |
| "middle": [], |
| "last": "Osenova", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiril", |
| "middle": [], |
| "last": "Simov", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Osenova, Petya and Kiril Simov. 2004. BTB-TR05: BulTreebank Stylebook. Technical Report 05, BulTreeBank Project, Sofia, Bulgaria.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Parsing as deduction", |
| "authors": [ |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Warren", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "137--144", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pereira, Fernando C. N. and David Warren. 1983. Parsing as deduction. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 137-144, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Improved inference for unlexicalized parsing", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Petrov, Slav and Dan Klein. 2007. Improved inference for unlexicalized parsing.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Parsing German with latent variable grammars", |
| "authors": [], |
| "year": 2008, |
| "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "24--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404-411, Rochester, NY. Petrov, Slav and Dan Klein. 2008. Parsing German with latent variable grammars. In Proceedings of the Workshop on Parsing German at ACL 2008, pages 24-32, Columbus, OH.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Computing the most probable parse for a discontinuous phrase-structure grammar", |
| "authors": [ |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Plaehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "New Developments in Parsing Technology", |
| "volume": "23", |
| "issue": "", |
| "pages": "91--106", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Plaehn, Oliver. 2004. Computing the most probable parse for a discontinuous phrase-structure grammar. In Harry Bunt, John Carroll, and Giorgio Satta, editors, New Developments in Parsing Technology, volume 23 of Text, Speech And Language Technology. Kluwer, Dordrecht, pages 91-106.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Parsing three German treebanks: Lexicalized and unlexicalized baselines", |
| "authors": [ |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Rafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Workshop on Parsing German at ACL 2008", |
| "volume": "", |
| "issue": "", |
| "pages": "40--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rafferty, Anna and Christopher D. Manning. 2008. Parsing three German treebanks: Lexicalized and unlexicalized baselines. In Proceedings of the Workshop on Parsing German at ACL 2008, pages 40-46, Columbus, OH.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "The simple truth about dependency and phrase structure representations: An opinion piece", |
| "authors": [ |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "337--340", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rambow, Owen. 2010. The simple truth about dependency and phrase structure representations: An opinion piece. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 337-340, Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Evaluating evaluation measures", |
| "authors": [ |
| { |
| "first": "Ines", |
| "middle": [], |
| "last": "Rehbein", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 16th Nordic Conference of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "372--379", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rehbein, Ines and Josef van Genabith. 2007. Evaluating evaluation measures. In Proceedings of the 16th Nordic Conference of Computational Linguistics, pages 372-379, Tartu.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "LoPar: Design and implementation", |
| "authors": [ |
| { |
| "first": "Helmut", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Arbeitspapiere des Sonderforschungsbereiches", |
| "volume": "340", |
| "issue": "149", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schmid, Helmut. 2000. LoPar: Design and implementation. Arbeitspapiere des Sonderforschungsbereiches 340 149, IMS, University of Stuttgart, Stuttgart, Germany.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Proceedings of the First Workshop on Statistical Parsing of Morphologically-Rich Languages at NAACL HLT 2010", |
| "authors": [ |
| { |
| "first": "Djame", |
| "middle": [], |
| "last": "Seddah", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Reut", |
| "middle": [], |
| "last": "Tsarfaty", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seddah, Djame, Sandra K\u00fcbler, and Reut Tsarfaty, editors. 2010. Proceedings of the First Workshop on Statistical Parsing of Morphologically-Rich Languages at NAACL HLT 2010. Association for Computational Linguistics, Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "On multiple context-free grammars", |
| "authors": [ |
| { |
| "first": "Hiroyuki", |
| "middle": [], |
| "last": "Seki", |
| "suffix": "" |
| }, |
| { |
| "first": "Takahashi", |
| "middle": [], |
| "last": "Matsumura", |
| "suffix": "" |
| }, |
| { |
| "first": "Mamoru", |
| "middle": [], |
| "last": "Fujii", |
| "suffix": "" |
| }, |
| { |
| "first": "Tadao", |
| "middle": [], |
| "last": "Kasami", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Theoretical Computer Science", |
| "volume": "88", |
| "issue": "2", |
| "pages": "191--229", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seki, Hiroyuki, Takahashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On multiple context-free grammars. Theoretical Computer Science, 88(2):191-229.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Principles and implementation of deductive parsing", |
| "authors": [ |
| { |
| "first": "Stuart", |
| "middle": [ |
| "M" |
| ], |
| "last": "Shieber", |
| "suffix": "" |
| }, |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Schabes", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [ |
| "C N" |
| ], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Journal of Logic Programming", |
| "volume": "24", |
| "issue": "1-2", |
| "pages": "3--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shieber, Stuart M., Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24(1-2):3-36.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Parsing Schemata. Texts in Theoretical Computer Science", |
| "authors": [ |
| { |
| "first": "Klaas", |
| "middle": [], |
| "last": "Sikkel", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sikkel, Klaas. 1997. Parsing Schemata. Texts in Theoretical Computer Science. Springer, Berlin, Heidelberg, New York.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "An annotation scheme for free word order languages", |
| "authors": [ |
| { |
| "first": "Wojciech", |
| "middle": [], |
| "last": "Skut", |
| "suffix": "" |
| }, |
| { |
| "first": "Brigitte", |
| "middle": [], |
| "last": "Krenn", |
| "suffix": "" |
| }, |
| { |
| "first": "Thorten", |
| "middle": [], |
| "last": "Brants", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing (ANLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "88--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Skut, Wojciech, Brigitte Krenn, Thorten Brants, and Hans Uszkoreit. 1997. An annotation scheme for free word order languages. In Proceedings of the Fifth Conference on Applied Natural Language Processing (ANLP), pages 88-95, Washington, DC.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Stylebook for the T\u00fcbingen Treebank of Written German (T\u00fcBa-D/Z)", |
| "authors": [ |
| { |
| "first": "Heike", |
| "middle": [], |
| "last": "Telljohann", |
| "suffix": "" |
| }, |
| { |
| "first": "Erhard", |
| "middle": [ |
| "W" |
| ], |
| "last": "Hinrichs", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Heike", |
| "middle": [], |
| "last": "Zinsmeister", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathrin", |
| "middle": [], |
| "last": "Beck", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Seminar f\u00fcr Sprachwissenschaft", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Telljohann, Heike, Erhard W. Hinrichs, Sandra K\u00fcbler, Heike Zinsmeister, and Kathrin Beck. 2012. Stylebook for the T\u00fcbingen Treebank of Written German (T\u00fcBa-D/Z). Technical report, Seminar f\u00fcr Sprachwissenschaft, Universit\u00e4t T\u00fcbingen, T\u00fcbingen, Germany. http://www.sfs. uni.tuebingen.de/resources/tuebadz- stylebook-1201.pdf.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Evaluating dependency parsing: Robust and heuristics-free cross-annotation evaluation", |
| "authors": [ |
| { |
| "first": "Reut", |
| "middle": [], |
| "last": "Tsarfaty", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Evelina", |
| "middle": [], |
| "last": "Andersson", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "385--396", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tsarfaty, Reut, Joakim Nivre, and Evelina Andersson. 2011. Evaluating dependency parsing: Robust and heuristics-free cross-annotation evaluation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 385-396, Edinburgh.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Discontinuous data-oriented parsing: A mildly context-sensitive all-fragments grammar", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Van Cranenburgh", |
| "suffix": "" |
| }, |
| { |
| "first": "Remko", |
| "middle": [], |
| "last": "Scha", |
| "suffix": "" |
| }, |
| { |
| "first": "Federico", |
| "middle": [], |
| "last": "Sangati", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Second Workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL 2011)", |
| "volume": "", |
| "issue": "", |
| "pages": "34--44", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Uszkoreit, Hans. 1986. Linear precedence in discontinuous constituents: Complex fronting in German. CSLI report CSLI-86-47, Center for the Study of Language and Information, Stanford University, Stanford, CA. van Cranenburgh, Andreas. 2012. Efficient parsing with linear context-free rewriting systems. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 460-470, Avignon. van Cranenburgh, Andreas, Remko Scha, and Federico Sangati. 2011. Discontinuous data-oriented parsing: A mildly context-sensitive all-fragments grammar. In Proceedings of the Second Workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL 2011), pages 34-44, Dublin.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "Characterizing structural descriptions produced by various grammatical formalisms", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Vijay-Shanker", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "J" |
| ], |
| "last": "Weir", |
| "suffix": "" |
| }, |
| { |
| "first": "Aravind", |
| "middle": [ |
| "K" |
| ], |
| "last": "Joshi", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "104--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Versley, Yannick. 2005. Parser evaluation across text types. In Proceedings of the Fourth Workshop on Treebanks and Linguistic Theories, pages 209-220, Barcelona, Spain. Vijay-Shanker, K., David J. Weir, and Aravind K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 104-111, Stanford, CA.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "that, the regulation would only constantly produce new old cases.\" \". . . whether one could build on their premises the type of parking facility, which . . . \"", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "Figures 1 and 2.Figure 1shows the NeGra annotation of Example (2a-i) (left), and an", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "discontinuous constituent. Original NeGra annotation (left) and a T\u00fcBa-D/Z-style annotation (right). -movement. Original PTB annotation (left) and NeGra-style annotation (right).", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "text": "Figure 5 Sample PLCFRS.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "num": null, |
| "text": "Sample binarization of an LCFRS.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "num": null, |
| "text": "SX estimate depending on length, left, right, gaps.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF7": { |
| "num": null, |
| "text": "sample tree from NeGra.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF8": { |
| "num": null, |
| "text": "LCFRS : Items for PLCFRS parsing (left-to-right): binarizations, baseline and category splits, and estimates.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "text": ") der kleine Junge nom schickt den Brief acc seiner Schwester dat (ii) seiner Schwester dat schickt der kleine Junge nom den Brief acc (iii) den Brief acc schickt der kleine Junge nom seiner Schwester dat", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td/><td/><td>).</td><td/><td/><td/><td/></tr><tr><td>(1)</td><td>a. der</td><td>kleine</td><td>Junge nom</td><td>schickt</td><td>seiner</td><td>Schwester dat</td><td>den</td><td>Brief acc</td></tr><tr><td/><td>the</td><td>little</td><td>boy</td><td>sends</td><td>his</td><td>sister</td><td>the</td><td>letter</td></tr><tr><td/><td colspan=\"5\">b. Other possible word orders:</td><td/><td/><td/></tr><tr><td/><td>(i</td><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
| "html": null |
| }, |
| "TABREF1": { |
| "text": "It is the roof of the house he repairs.\"", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Na kyshtata</td><td>toi</td><td colspan=\"2\">popravi</td><td>pokriva.</td></tr><tr><td>Of house-DET</td><td>he</td><td colspan=\"2\">repaired</td><td>roof.</td></tr><tr><td>\"b. Gwon.han-\u0217l</td><td colspan=\"2\">nu.ga</td><td colspan=\"2\">ka.ji.go</td><td>iss.ji?</td></tr><tr><td>Authority-OBJ</td><td colspan=\"2\">who</td><td>has</td><td>not?</td></tr><tr><td colspan=\"4\">\"Who has no authority?\"</td></tr></table>", |
| "html": null |
| }, |
| "TABREF3": { |
| "text": "|w|} such that for all x, y adjacent in one of the elements of \u03b1, f (x) \u2022 f (y) must be defined; we then define f (xy)", |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF7": { |
| "text": "NeGra: Properties of the data with crossing branches.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>training</td><td>test</td></tr><tr><td>number of sentences</td><td>16,502</td><td>1,833</td></tr><tr><td>average sentence length</td><td>14.56</td><td>14.62</td></tr><tr><td>average tree height</td><td>4.62</td><td>4.72</td></tr><tr><td>average children per node</td><td>2.96</td><td>2.94</td></tr><tr><td>sentences without gaps</td><td colspan=\"2\">12,481 (75.63%) 1,361 (74.25%)</td></tr><tr><td>sentences with one gap</td><td>3,320 (20.12%)</td><td>387 (21.11%)</td></tr><tr><td>sentences with \u2265 2 gaps</td><td>701 (4.25%)</td><td>85 (4.64%)</td></tr><tr><td>maximum gap degree</td><td>6</td><td>5</td></tr></table>", |
| "html": null |
| }, |
| "TABREF8": { |
| "text": "NeGra LCFRS : PLCFRS parsing results for different binarizations.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>Head-driven</td><td>KM</td><td>L-to-R</td><td>Optimal</td><td>Deterministic</td></tr><tr><td>LP</td><td>74.00</td><td>74.00</td><td>75.08</td><td>74.92</td><td>72.40</td></tr><tr><td>LR</td><td>74.24</td><td>74.13</td><td>74.69</td><td>74.88</td><td>71.80</td></tr><tr><td>LF 1</td><td>74.12</td><td>74.07</td><td>74.88</td><td>74.90</td><td>72.10</td></tr><tr><td>UP</td><td>77.09</td><td>77.20</td><td>77.95</td><td>77.77</td><td>75.67</td></tr><tr><td>UR</td><td>77.34</td><td>77.33</td><td>77.54</td><td>77.73</td><td>75.04</td></tr><tr><td>UF 1</td><td>77.22</td><td>77.26</td><td>77.75</td><td>77.75</td><td>75.35</td></tr></table>", |
| "html": null |
| }, |
| "TABREF9": { |
| "text": "NeGra LCFRS and NeGra CFG : baseline and category splits.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td colspan=\"3\">w/ category splits</td><td/><td colspan=\"3\">w/ category splits</td></tr><tr><td/><td>NeGra LCFRS</td><td>NP</td><td>S</td><td>NP \u2022 S</td><td>NeGra CFG</td><td>NP</td><td>S</td><td>NP \u2022 S</td></tr><tr><td>LP</td><td>74.92</td><td>75.21</td><td>75.81</td><td>75.93</td><td>76.32</td><td>76.79</td><td>77.39</td><td>77.58</td></tr><tr><td>LR</td><td>74.88</td><td>74.95</td><td>75.65</td><td>75.57</td><td>76.36</td><td>77.23</td><td>77.35</td><td>77.99</td></tr><tr><td>LF 1</td><td>74.90</td><td>75.08</td><td>75.73</td><td>75.75</td><td>76.34</td><td>77.01</td><td>77.37</td><td>77.79</td></tr><tr><td>UP</td><td>77.77</td><td>78.16</td><td>78.31</td><td>78.60</td><td>79.12</td><td>79.62</td><td>79.84</td><td>80.09</td></tr><tr><td>UR</td><td>77.73</td><td>77.88</td><td>78.15</td><td>78.22</td><td>79.17</td><td>80.08</td><td>79.80</td><td>80.52</td></tr><tr><td>UF 1</td><td>77.75</td><td>78.02</td><td>78.23</td><td>78.41</td><td>79.14</td><td>79.85</td><td>79.82</td><td>80.30</td></tr></table>", |
| "html": null |
| }, |
| "TABREF10": { |
| "text": "(P&K), who use the Berkeley Parser (Petrov and Klein 2007); Rafferty and Manning (2008) (R&M), who use the Stanford parser (see above); and", |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF11": { |
| "text": "PaGe workshop data.", |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>here</td><td>P&K</td><td>R&M</td><td>H&N</td></tr><tr><td>LP</td><td>66.93</td><td>69.23</td><td>58.52</td><td>67.06</td></tr><tr><td>LR</td><td>60.79</td><td>70.41</td><td>57.63</td><td>58.07</td></tr><tr><td>LF 1</td><td>63.71</td><td>69.81</td><td>58.07</td><td>65.18</td></tr></table>", |
| "html": null |
| } |
| } |
| } |
| } |