{
"paper_id": "Q19-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:09:43.878036Z"
},
"title": "Unlexicalized Transition-based Discontinuous Constituency Parsing",
"authors": [
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"country": "ILCC"
}
},
"email": "mcoavoux@inf.ed.ac.uk"
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Crabb\u00e9",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Sorbonne Paris Cit\u00e9",
"location": {
"country": "LLF"
}
},
"email": "bcrabbe@linguist.univ-paris-diderot.fr"
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"country": "ILCC"
}
},
"email": "scohen@inf.ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Lexicalized parsing models are based on the assumptions that (i) constituents are organized around a lexical head and (ii) bilexical statistics are crucial to solve ambiguities. In this paper, we introduce an unlexicalized transition-based parser for discontinuous constituency structures, based on a structure-label transition system and a bi-LSTM scoring system. We compare it with lexicalized parsing models in order to address the question of lexicalization in the context of discontinuous constituency parsing. Our experiments show that unlexicalized models systematically achieve higher results than lexicalized models, and provide additional empirical evidence that lexicalization is not necessary to achieve strong parsing results. Our best unlexicalized model sets a new state of the art on English and German discontinuous constituency treebanks. We further provide a per-phenomenon analysis of its errors on discontinuous constituents.",
"pdf_parse": {
"paper_id": "Q19-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Lexicalized parsing models are based on the assumptions that (i) constituents are organized around a lexical head and (ii) bilexical statistics are crucial to solve ambiguities. In this paper, we introduce an unlexicalized transition-based parser for discontinuous constituency structures, based on a structure-label transition system and a bi-LSTM scoring system. We compare it with lexicalized parsing models in order to address the question of lexicalization in the context of discontinuous constituency parsing. Our experiments show that unlexicalized models systematically achieve higher results than lexicalized models, and provide additional empirical evidence that lexicalization is not necessary to achieve strong parsing results. Our best unlexicalized model sets a new state of the art on English and German discontinuous constituency treebanks. We further provide a per-phenomenon analysis of its errors on discontinuous constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper introduces an unlexicalized parsing model and addresses the question of lexicalization, as a parser design choice, in the context of transition-based discontinuous constituency parsing. Discontinuous constituency trees are constituency trees where crossing arcs are allowed in order to represent long-distance dependencies, and in general phenomena related to word order variations (e.g., the left dislocation in Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 424,
"end": 432,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lexicalized parsing models (Collins, 1997; Charniak, 1997) are based on the assumptions that (i) constituents are organized around a lexical * Work partly done at Universit\u00e9 Paris Diderot.",
"cite_spans": [
{
"start": 27,
"end": 42,
"text": "(Collins, 1997;",
"ref_id": "BIBREF8"
},
{
"start": 43,
"end": 58,
"text": "Charniak, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "head and (ii) bilexical statistics are crucial to solve ambiguities. In a lexicalized Probabilistic Context-Free Grammar (PCFG), grammar rules involve nonterminals annotated with a terminal element that represents their lexical head, for example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "VP[saw] \u2212\u2192 VP[saw] PP[telescope].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The probability of such a rule models the likelihood that telescope is a suitable modifier for saw.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast, unlexicalized parsing models renounce modeling bilexical statistics, based on the assumptions that they are too sparse to be estimated reliably. Indeed, Gildea (2001) observed that removing bilexical statistics from Collins' (1997) model lead to at most a 0.5 drop in F-score. Furthermore, Bikel (2004) showed that bilexical statistics were in fact rarely used during decoding, and that when used, they were close to that of backoff distributions used for unknown word pairs.",
"cite_spans": [
{
"start": 166,
"end": 179,
"text": "Gildea (2001)",
"ref_id": "BIBREF23"
},
{
"start": 229,
"end": 244,
"text": "Collins' (1997)",
"ref_id": "BIBREF8"
},
{
"start": 290,
"end": 315,
"text": "Furthermore, Bikel (2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Instead, unlexicalized models may rely on grammar rule refinements to alleviate the strong independence assumptions of PCFGs (Klein and Manning, 2003; Matsuzaki et al., 2005; Petrov et al., 2006; Narayan and Cohen, 2016) . They sometimes rely on structural information, such as the boundaries of constituents (Hall et al., 2014; Durrett and Klein, 2015; Cross and Huang, 2016b; Stern et al., 2017; Kitaev and Klein, 2018) .",
"cite_spans": [
{
"start": 125,
"end": 150,
"text": "(Klein and Manning, 2003;",
"ref_id": "BIBREF30"
},
{
"start": 151,
"end": 174,
"text": "Matsuzaki et al., 2005;",
"ref_id": "BIBREF39"
},
{
"start": 175,
"end": 195,
"text": "Petrov et al., 2006;",
"ref_id": "BIBREF44"
},
{
"start": 196,
"end": 220,
"text": "Narayan and Cohen, 2016)",
"ref_id": "BIBREF41"
},
{
"start": 309,
"end": 328,
"text": "(Hall et al., 2014;",
"ref_id": "BIBREF25"
},
{
"start": 329,
"end": 353,
"text": "Durrett and Klein, 2015;",
"ref_id": "BIBREF16"
},
{
"start": 354,
"end": 377,
"text": "Cross and Huang, 2016b;",
"ref_id": "BIBREF14"
},
{
"start": 378,
"end": 397,
"text": "Stern et al., 2017;",
"ref_id": "BIBREF52"
},
{
"start": 398,
"end": 421,
"text": "Kitaev and Klein, 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although initially coined for chart parsers, the notion of lexicalization naturally transfers to transition-based parsers. We take lexicalized to denote a model that (i) assigns a lexical head to each constituent and (ii) uses heads of constituents as features to score parsing actions. Head assignment is typically performed with REDUCE-RIGHT and REDUCE-LEFT actions. Most proposals in transition-based constituency parsing since Sagae and Lavie (2005) have used a lexicalized transition system, and features involving heads to score (Evang and Kallmeyer, 2011) .",
"cite_spans": [
{
"start": 431,
"end": 453,
"text": "Sagae and Lavie (2005)",
"ref_id": "BIBREF47"
},
{
"start": 535,
"end": 562,
"text": "(Evang and Kallmeyer, 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "actions (Zhu et al., 2013; Clark, 2011, 2009; Crabb\u00e9, 2014; Wang et al., 2015, among others) , including proposals for discontinuous constituency parsing (Versley, 2014a; Maier, 2015; Coavoux and Crabb\u00e9, 2017a) . A few recent proposals use an unlexicalized model (Watanabe and Sumita, 2015; Cross and Huang, 2016b; Dyer et al., 2016) . Interestingly, these latter models all use recurrent neural networks (RNN) to compute constituent representations.",
"cite_spans": [
{
"start": 8,
"end": 26,
"text": "(Zhu et al., 2013;",
"ref_id": "BIBREF60"
},
{
"start": 27,
"end": 45,
"text": "Clark, 2011, 2009;",
"ref_id": null
},
{
"start": 46,
"end": 59,
"text": "Crabb\u00e9, 2014;",
"ref_id": "BIBREF10"
},
{
"start": 60,
"end": 92,
"text": "Wang et al., 2015, among others)",
"ref_id": null
},
{
"start": 154,
"end": 170,
"text": "(Versley, 2014a;",
"ref_id": "BIBREF53"
},
{
"start": 171,
"end": 183,
"text": "Maier, 2015;",
"ref_id": "BIBREF35"
},
{
"start": 184,
"end": 210,
"text": "Coavoux and Crabb\u00e9, 2017a)",
"ref_id": "BIBREF6"
},
{
"start": 263,
"end": 290,
"text": "(Watanabe and Sumita, 2015;",
"ref_id": "BIBREF57"
},
{
"start": 291,
"end": 314,
"text": "Cross and Huang, 2016b;",
"ref_id": "BIBREF14"
},
{
"start": 315,
"end": 333,
"text": "Dyer et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are the following. We introduce an unlexicalized discontinuous parsing model, as well as its lexicalized counterpart. We evaluate them in identical experimental conditions. Our main finding is that, in our experiments, unlexicalized models consistently outperform lexicalized models. We assess the robustness of this result by performing the comparison of unlexicalized and lexicalized models with a second pair of transition systems. We further analyze the empirical properties of the systems in order to better understand the reasons for this performance difference. We find that the unlexicalized system oracle produces shorter, more incremental derivations. Finally, we provide a per-phenomenon error analysis of our best model and identify which types of discontinuous constituents are hard to predict.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several approaches to discontinuous constituency parsing have been proposed. Hall and Nivre (2008) reduces the problem to non-projective dependency parsing, via a reversible transformation, a strategy developed by Fern\u00e1ndez-Gonz\u00e1lez and Martins (2015) and Corro et al. (2017) . Chart parsers are based on probabilistic Linear Context-Free Rewriting Systems (LCFRS) (Evang and Kallmeyer, 2011; Kallmeyer and Maier, 2010) , the Data-Oriented Parsing (DOP) framework (van Cranenburgh and Bod, 2013; van Cranenburgh et al., 2016) , or pseudo-projective parsing (Versley, 2016) . Some transition-based discontinuous constituency parsers use the swap action, adapted from dependency parsing (Nivre, 2009) either with an easy-first strategy (Versley, 2014a,b) or with a shift-reduce strategy (Maier, 2015; Maier and Lichte, 2016; Stanojevi\u0107 and Garrido Alhama, 2017) . Nevertheless, the swap strategy tends to produce long derivations (in number of actions) to construct discontinuous constituents; as a result, the choice of an oracle that minimizes the number of swap actions has a substantial positive effect in accuracy (Maier and Lichte, 2016; Stanojevi\u0107 and Garrido Alhama, 2017) .",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "Hall and Nivre (2008)",
"ref_id": "BIBREF26"
},
{
"start": 214,
"end": 251,
"text": "Fern\u00e1ndez-Gonz\u00e1lez and Martins (2015)",
"ref_id": "BIBREF20"
},
{
"start": 256,
"end": 275,
"text": "Corro et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 365,
"end": 392,
"text": "(Evang and Kallmeyer, 2011;",
"ref_id": "BIBREF19"
},
{
"start": 393,
"end": 419,
"text": "Kallmeyer and Maier, 2010)",
"ref_id": "BIBREF27"
},
{
"start": 464,
"end": 495,
"text": "(van Cranenburgh and Bod, 2013;",
"ref_id": "BIBREF11"
},
{
"start": 496,
"end": 525,
"text": "van Cranenburgh et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 557,
"end": 572,
"text": "(Versley, 2016)",
"ref_id": "BIBREF55"
},
{
"start": 685,
"end": 698,
"text": "(Nivre, 2009)",
"ref_id": "BIBREF43"
},
{
"start": 734,
"end": 752,
"text": "(Versley, 2014a,b)",
"ref_id": null
},
{
"start": 785,
"end": 798,
"text": "(Maier, 2015;",
"ref_id": "BIBREF35"
},
{
"start": 799,
"end": 822,
"text": "Maier and Lichte, 2016;",
"ref_id": "BIBREF36"
},
{
"start": 823,
"end": 859,
"text": "Stanojevi\u0107 and Garrido Alhama, 2017)",
"ref_id": "BIBREF51"
},
{
"start": 1117,
"end": 1141,
"text": "(Maier and Lichte, 2016;",
"ref_id": "BIBREF36"
},
{
"start": 1142,
"end": 1178,
"text": "Stanojevi\u0107 and Garrido Alhama, 2017)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In contrast, Coavoux and Crabb\u00e9 (2017a) extended a shift-reduce transition system to handle discontinuous constituents. Their system allows binary reductions to apply to the top element in the stack, and any other element in the stack (instead of the two top elements in standard shift-reduce parsing). The second constituent for a reduction is chosen dynamically, with an action called GAP that gives access to older elements in the stack and can be performed several times before a reduction. In practice, they made the following modifications over a standard shift-reduce system: 1. The stack, that stores subtrees being constructed, is split into two parts S and D;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "2. reductions are applied to the respective tops of S and D;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3. the GAP action pops an element from S and adds it to D, making the next element of S available for a reduction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Their parser outperforms swap-based systems. However, they only experiment with a linear classifier, and assume access to gold part-of-speech (POS) tags for most of their experiments. All these proposals use a lexicalized model, as defined in the introduction: they assign heads to new constituents and use them as features to inform parsing decisions. Previous work on unlexicalized transition-based parsing models only focused on projective constituency trees (Dyer et al., 2016; Liu and Zhang, 2017) . In particular, Cross and Huang (2016b) introduced a system that does not require explicit binarization. Their system decouples the construction of a tree and the labeling of its nodes by assigning types (structure or label) to each action, and alternating between a structural action for even steps and labeling action for odd steps. This distinction arguably makes each decision simpler.",
"cite_spans": [
{
"start": 462,
"end": 481,
"text": "(Dyer et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 482,
"end": 502,
"text": "Liu and Zhang, 2017)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This section introduces an unlexicalized transition system able to construct discontinuous constituency trees (Section 3.1), its lexicalized counterpart (Section 3.2), and corresponding oracles (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition Systems for Discontinuous Parsing",
"sec_num": "3"
},
{
"text": "The Merge-Label-Gap transition system (henceforth ML-GAP) combines the distinction between structural and labeling actions from Cross and Huang (2016b) and the SHIFT-REDUCE-GAP (SR-GAP) strategy with a split stack from Coavoux and Crabb\u00e9 (2017a) . Like the SR-GAP transition system, ML-GAP is based on three data structures: a stack S, a doubleended queue (dequeue) D, and a buffer B. We define a parsing configuration as a quadruple S, D, i, C , where S and D are sequences of index sets, i is the index of the last shifted token, and C is a set of instantiated discontinuous constituents. We adopt a representation of instantiated constituents as pairs (X, I), where X is a nonterminal label, and I is a set of token indexes. For example, the discontinuous VP in Figure 1 is the pair (VP, {1, 2, 3, 4, 6}), because it spans tokens 1 through 4 and token 6.",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "Cross and Huang (2016b)",
"ref_id": "BIBREF14"
},
{
"start": 219,
"end": 245,
"text": "Coavoux and Crabb\u00e9 (2017a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 765,
"end": 773,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "The ML-GAP transition system is defined as a deduction system in Table 1 . The available actions are the following:",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "\u2022 The SHIFT action pushes the singleton {i + 1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "onto D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "\u2022 The MERGE action removes I s 0 and I d 0 from the top of S and D, computes their union",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "I = I s 0 \u222a I d 0 , transfers the content of D to S L S SHIFT|MERGE GAP GAP MERGE LABEL-X|NO-LABEL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "Figure 2: Action sequences allowed in ML-GAP. Any derivation must be recognized by the automaton.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "S and pushes I onto D. It is meant to construct incrementally subsets of tokens that are constituents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "\u2022 The GAP action removes the top element from S and pushes it at the beginning of D. This action gives the system the ability to construct discontinuous trees, by making older elements in S accessible to a subsequent merge operation with I d 0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "\u2022 LABEL-X creates a new constituent labeled X whose yield is the set I d 0 at the top of D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "\u2022 NO-LABEL has no effect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "Actions are subcategorized into structural actions (SHIFT, MERGE, GAP) and labeling actions (NO-LABEL, LABEL-X). This distinction is meant to make each single decision easier. The current state of the parser determines the type of action to be predicted next, as illustrated by the automaton in Figure 2 . When it predicts a projective tree, the parser alternates structural and labeling actions (states S and L). However, it must be able to perform several GAP actions in a row to predict some discontinuous constituents. Since the semantics of the GAP action, a structural action, ",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 303,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "\u21d2 NO-LABEL \u21d2 SH \u21d2 {An} s 0 {excellent} d 0 environment actor he is \u21d2 NO-LABEL \u21d2 MERGE \u21d2 {An excellent} d 0 environment actor he is \u21d2 NO-LABEL \u21d2 SH \u21d2 {An excellent} s 0 {environment} d 0 actor he is \u21d2 NO-LABEL \u21d2 MERGE \u21d2 {An excellent environment} d 0 actor he is \u21d2 NO-LABEL \u21d2 SH \u21d2 {An excellent environment} s 0 {actor} d 0 he is \u21d2 NO-LABEL \u21d2 MERGE \u21d2 {An excellent environment actor} d 0 he is \u21d2 LABEL-NP \u21d2 SH \u21d2 {An excellent environment actor} s 0 {he} d 0 is \u21d2 LABEL-NP \u21d2 SH \u21d2 {An excellent environment actor} s 1 {he} s 0 {is} d 0 \u21d2 NO-LABEL \u21d2 GAP \u21d2 {An excellent environment actor} s 0 {he} d 1 {is} d 0 \u21d2 MERGE \u21d2 {he} s 0 {An excellent environment actor is} d 0 \u21d2 LABEL-VP \u21d2 MERGE \u21d2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "{he An excellent environment actor is} d 0 \u21d2 LABEL-S Table 2 : Example derivation for the tree in Figure 1 with the ML-GAP transition system.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 2",
"ref_id": null
},
{
"start": 98,
"end": 106,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "is not to modify the top of D, but to make an index set in S available for a MERGE, it must not be followed by a labeling action. Each GAP action must be followed by either another GAP or a MERGE action (state S in the automaton). We illustrate the transition system with a full derivation in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 300,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Merge-Label-Gap Transition System",
"sec_num": "3.1"
},
{
"text": "In order to assess the role of lexicalization in parsing, we introduce a second transition system, ML-GAP-LEX, which is designed (i) to be lexicalized and (ii) to differ minimally from ML-GAP. We define an instantiated lexicalized discontinuous constituent as a triple (X, I, h) where X is a nonterminal label, I is the set of terminals that are in the yield of the constituent, and h \u2208 I is the lexical head of the constituent. In ML-GAP-LEX, the dequeue and the stack contains pairs (I, h), where I is a set of indices and h \u2208 I is a distinguished element of I.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalized Transition System",
"sec_num": "3.2"
},
{
"text": "The main difference of ML-GAP-LEX with ML-GAP is that there are two MERGE actions, MERGE-LEFT and MERGE-RIGHT, and that each of them assigns the head of the new set of indexes (and implicitly creates a new directed dependency arc):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalized Transition System",
"sec_num": "3.2"
},
{
"text": "\u2022 MERGE-LEFT: S|(I s 0 , h s 0 ), D|(I d 0 , h d 0 ), i, C \u21d2 S|D, (I s 0 \u222a I d 0 , h s 0 ), i, C ; \u2022 MERGE-RIGHT: S|(I s 0 , h s 0 ), D|(I d 0 , h d 0 ), i, C \u21d2 S|D, (I s 0 \u222a I d 0 , h d 0 ), i, C} .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicalized Transition System",
"sec_num": "3.2"
},
{
"text": "In this work, we use deterministic static oracles. We briefly describe an oracle that builds constituents from their head outwards (head-driven oracle) and an oracle that performs merges as soon as possible (eager oracle). The latter can only be used by an unlexicalized system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracles",
"sec_num": "3.3"
},
{
"text": "Head-driven Oracle The head-driven oracle can be straightforwardly derived from the oracle for SR-GAP presented by Coavoux and Crabb\u00e9 (2017a) . A derivation in ML-GAP-LEX can be computed from a derivation in SR-GAP by",
"cite_spans": [
{
"start": 115,
"end": 141,
"text": "Coavoux and Crabb\u00e9 (2017a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oracles",
"sec_num": "3.3"
},
{
"text": "(i) replacing REDUCE-LEFT-X (resp. REDUCE-RIGHT-X) actions by a MERGE-LEFT (resp. MERGE-RIGHT), (ii)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracles",
"sec_num": "3.3"
},
{
"text": "replacing REDUCE-UNARY-X actions by LABEL-X, and (iii) inserting LABEL-X and NO-LABEL actions as required. This oracle attaches the left dependents of a head first. In practice, other oracle strategies are possible as long as constituents are constructed from their head outward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracles",
"sec_num": "3.3"
},
{
"text": "Eager Oracle For the ML-GAP system, we use an oracle that builds every n-ary constituent in a leftto-right fashion, as illustrated by the derivation in Table 2 . 1 This implicitly corresponds to a left-branching binarization.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Oracles",
"sec_num": "3.3"
},
{
"text": "The statistical model we used is based on a Long Short-Term Memory network (bi-LSTM) transducer that builds context-aware representations for each token in the sentence (Kiperwasser and Goldberg, 2016; Cross and Huang, 2016a ",
"cite_spans": [
{
"start": 169,
"end": 201,
"text": "(Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF28"
},
{
"start": 202,
"end": 224,
"text": "Cross and Huang, 2016a",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Architecture",
"sec_num": "4"
},
{
"text": "Figure 3: Bi-LSTM part of the neural architecture. Each word is represented by the concatenation of a standard word-embedding w and the output of a character bi-LSTM c. The concatenation is fed to a two-layer bi-LSTM transducer that produces contextual word representations. The first layer serves as input to the tagger (Section 4.2), whereas the second layer is used by the parser to instantiate feature templates for each parsing step (Section 4.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Architecture",
"sec_num": "4"
},
{
"text": "actions. 2 The whole architecture is trained end-toend. We illustrate the bi-LSTM and the tagging components in Figure 3 . In the following paragraphs, we describe the architecture that builds shared representations (Section 4.1), the tagging component (Section 4.2), the parsing component (Section 4.3), and the objective function (Section 4.4).",
"cite_spans": [
{
"start": 9,
"end": 10,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Architecture",
"sec_num": "4"
},
{
"text": "We use a hierarchical bi-LSTM (Plank et al., 2016) to construct context-aware vector representations for each token. A lexical entry x is represented by the concatenation",
"cite_spans": [
{
"start": 30,
"end": 50,
"text": "(Plank et al., 2016)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building Context-aware Token Representations",
"sec_num": "4.1"
},
{
"text": "h x = [w x ; c x ],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Context-aware Token Representations",
"sec_num": "4.1"
},
{
"text": "where w x is a standard word embedding and c x = bi-LSTM(x) is the output of a character bi-LSTM encoder, i.e., the concatenation of its last forward and backward states. We run a sentence-level bi-LSTM transducer over the sequence of local embeddings 2 A more involved strategy would be to rely on Recurrent Neural Network Grammars (Dyer et al., 2016; Kuncoro et al., 2017) . However, the adaptation of this model to discontinuous parsing is not straightforward and we leave it to future work.",
"cite_spans": [
{
"start": 333,
"end": 352,
"text": "(Dyer et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 353,
"end": 374,
"text": "Kuncoro et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building Context-aware Token Representations",
"sec_num": "4.1"
},
{
"text": "(h x 1 , h x 2 , . . . , h x n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Context-aware Token Representations",
"sec_num": "4.1"
},
{
"text": ", to obtain vector representations that depend on the whole sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Context-aware Token Representations",
"sec_num": "4.1"
},
{
"text": "(h (1) , . . . , h (n) ) = bi-LSTM(h x 1 , h x 2 , . . . , h x n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Context-aware Token Representations",
"sec_num": "4.1"
},
{
"text": "In practice, we use a two-layer bi-LSTM in order to supervise parsing and tagging at different layers, following results by . In what follows, we denote the i th state of the j th layer with h (j,i) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Context-aware Token Representations",
"sec_num": "4.1"
},
{
"text": "We use the context-aware representations as input to a softmax classifier to output a probability distribution over part-of-speech (POS) tags for each token:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagger",
"sec_num": "4.2"
},
{
"text": "P (t i = \u2022|x n 1 ; \u03b8 t ) = Softmax(W (t) \u2022 h (1,i) + b (t) ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagger",
"sec_num": "4.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagger",
"sec_num": "4.2"
},
{
"text": "W (t) , b (t) \u2208 \u03b8 t are parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagger",
"sec_num": "4.2"
},
{
"text": "In addition to predicting POS tags, we also predict other morphosyntactic attributes when they are available (i.e., for the Tiger corpus) such as the case, tense, mood, person, and gender, since the POS tagset does not necessarily contain this information. Finally, we predict the syntactic functions of tokens, since this auxiliary task has been shown to be beneficial for constituency parsing (Coavoux and Crabb\u00e9, 2017b) .",
"cite_spans": [
{
"start": 395,
"end": 422,
"text": "(Coavoux and Crabb\u00e9, 2017b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tagger",
"sec_num": "4.2"
},
{
"text": "For each type of label l, we use a separate softmax classifier, with its own parameters W (l) and b (l) :",
"cite_spans": [
{
"start": 90,
"end": 93,
"text": "(l)",
"ref_id": null
},
{
"start": 100,
"end": 103,
"text": "(l)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tagger",
"sec_num": "4.2"
},
{
"text": "P (l i = \u2022|x n 1 ; \u03b8 t ) = Softmax(W (l) \u2022 h (1,i) + b (l) )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagger",
"sec_num": "4.2"
},
{
"text": ". For a given token, the number and types of morphosyntactic attributes depend on its POS tag. For example, a German noun has a gender and number but no tense nor mood. We use a default value ('undef') to make sure that every token has the same number of labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tagger",
"sec_num": "4.2"
},
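This per-attribute classifier design can be sketched as follows; it is an illustrative NumPy reconstruction, not the authors' code, and the attribute inventories, dimensions, and the `predict_attributes` helper are hypothetical stand-ins. Each attribute type l gets its own softmax layer (W^(l), b^(l)) over a shared context-aware token state, with 'undef' padding attributes a token lacks.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical attribute inventories; 'undef' pads attributes a token lacks,
# so every token receives the same number of labels.
ATTRIBUTES = {
    "pos": ["NOUN", "VERB", "ADJ"],
    "case": ["undef", "nom", "acc", "dat", "gen"],
    "tense": ["undef", "pres", "past"],
}

HIDDEN = 16  # size of the shared bi-LSTM state h^(1,i) (illustrative)

# One independent softmax classifier (W^(l), b^(l)) per attribute type l.
params = {
    l: (rng.normal(scale=0.1, size=(len(vals), HIDDEN)), np.zeros(len(vals)))
    for l, vals in ATTRIBUTES.items()
}

def predict_attributes(h):
    """Given the context-aware state h of one token, return one
    probability distribution per morphosyntactic attribute."""
    return {l: softmax(W @ h + b) for l, (W, b) in params.items()}

h = rng.normal(size=HIDDEN)   # stands in for the bi-LSTM output for one token
dists = predict_attributes(h)
```

Because the classifiers are independent given the shared state, adding an attribute type (e.g., mood for Tiger) only adds one small softmax layer.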
{
"text": "We decompose the probability of a sequence of actions a m 1 = (a 1 , a 2 , . . . , a m ) for a sentence x n 1 as the product of probability of individual actions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
{
"text": "P (a m 1 |x n 1 ; \u03b8 p ) = m i=1 P (a i |a i\u22121 1 , x n 1 ; \u03b8 p ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
{
"text": "The probability of an action given a parsing configuration is computed with a feedforward network with two hidden layers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
{
"text": "o (1) = g(W (1) \u2022 \u03a6 f (a i\u22121 1 , x n 1 ) + b (1) ), o (2) = g(W (2) \u2022 o (1) + b (2) ), P (a i |a i\u22121 1 , x n 1 ) = Softmax(W (3) \u2022 o (2) + b (3) ), where \u2022 g is an activation function (rectifier); \u2022 W (i) , b (i) \u2208 \u03b8 p are parameters;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
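The two-hidden-layer scorer can be sketched as below. This is a minimal NumPy illustration under assumed sizes; the dimensions, the randomly initialized parameters, and the `action_distribution` name are stand-ins, with the rectifier as the activation g.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Illustrative sizes: feature vector, two hidden layers, action inventory.
FEAT_DIM, H1, H2, N_ACTIONS = 24, 32, 32, 5

W1 = rng.normal(scale=0.1, size=(H1, FEAT_DIM)); b1 = np.zeros(H1)
W2 = rng.normal(scale=0.1, size=(H2, H1));       b2 = np.zeros(H2)
W3 = rng.normal(scale=0.1, size=(N_ACTIONS, H2)); b3 = np.zeros(N_ACTIONS)

def action_distribution(phi):
    """phi: the concatenated feature vector Φ_f(a_1^{i-1}, x_1^n)."""
    o1 = relu(W1 @ phi + b1)        # o^(1)
    o2 = relu(W2 @ o1 + b2)         # o^(2)
    return softmax(W3 @ o2 + b3)    # P(a_i | a_1^{i-1}, x_1^n)

p = action_distribution(rng.normal(size=FEAT_DIM))
```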
{
"text": "\u2022 \u03a6 f is a function, parameterized by a feature template list f , that outputs the concatenation of instantiated features, for the configuration obtained after performing the sequence of action a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
{
"text": "(i\u22121) 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
{
"text": "to the input sentence x n 1 . Feature templates describe a list of positions in a configuration. Features are instantiated by the context-aware representation of the token occupying the position. For example, token i will yield vector h (2,i) , the output of the sentence-level bi-LSTM transducer at position i. If a position contains no token, the feature is instantiated by a special trained embedding.",
"cite_spans": [
{
"start": 237,
"end": 242,
"text": "(2,i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
{
"text": "Feature Templates The two feature template sets we used are presented in Table 3 . The BASE templates form a minimal set that extracts Configuration: 7 indexes from a configuration, relying only on constituent boundaries. The +LEX feature set adds information about the heads of constituents at the top of S and D, and can only be used together with a lexicalized transition system.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
{
"text": "S|(I s 1 , h s 1 )|(I s 0 , h s 0 ), D|(I d 1 , h d 1 )|(I d 0 , h d 0 ), i, C Template set Token indexes BASE max(I s 1 ), min(I s 0 ), max(I s 0 ), max(I d 1 ), min(I d 0 ), max(I d 0 ), i +LEX BASE+ h d 0 , h d 1 , h s 0 , h s 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser",
"sec_num": "4.3"
},
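Instantiating the BASE templates can be sketched as follows, assuming each stack item carries the set of token indexes it covers; `base_features` and the padding value are hypothetical names, not the paper's implementation.

```python
# The 7 BASE token indexes extracted from a configuration. I_s0/I_s1 are the
# token-index sets of the two topmost items of S, I_d0/I_d1 of D, and i is
# the buffer front. A discontinuous item has a non-contiguous index set.
PAD = -1  # placeholder index when a position holds no token

def base_features(I_s1, I_s0, I_d1, I_d0, i):
    def mx(s):
        return max(s) if s else PAD
    def mn(s):
        return min(s) if s else PAD
    # Templates: max(I_s1), min(I_s0), max(I_s0),
    #            max(I_d1), min(I_d0), max(I_d0), i
    return [mx(I_s1), mn(I_s0), mx(I_s0), mx(I_d1), mn(I_d0), mx(I_d0), i]

# Example: a discontinuous item covering tokens {2, 5} on top of D,
# items {0, 1} and {3, 4} on S, empty second position of D, buffer at 6.
feats = base_features({0, 1}, {3, 4}, set(), {2, 5}, 6)
```

Each extracted index is then replaced by the bi-LSTM state of that token (or a trained "empty" embedding for PAD) and the vectors are concatenated into Φ_f.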
{
"text": "The objective function for a single sentence x n 1 decomposes in a tagging objective and a parsing objective. The tagging objective is the negative log-likelihood of gold labels for each token:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "4.4"
},
{
"text": "L t (x n 1 ; \u03b8 t ) = \u2212 n i=1 k j=1 log P (t i,j |x n 1 ; \u03b8 t ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "4.4"
},
{
"text": "where k is the number of types of labels to predict. The parsing objective is the negative loglikelihood of the gold derivation, as computed by the oracle:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "4.4"
},
{
"text": "L p (x n 1 ; \u03b8 p ) = \u2212 m i=1 log P (a i |a i\u22121 1 , x n 1 ; \u03b8 p ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "4.4"
},
{
"text": "We train the model by minimizing L t + L p over the whole corpus. We do so by repeatedly sampling a sentence, performing one optimization step for L t followed by one optimization step for L p . Some parameters are shared across the parser and the tagger, namely the word and character embeddings, the parameters for the character bi-LSTM, and those for the first layer of the sentence bi-LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Objective Function",
"sec_num": "4.4"
},
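The alternating optimization scheme can be sketched as below; `step_t` and `step_p` are hypothetical stand-ins for one optimizer update on the tagging loss L_t and the parsing loss L_p respectively, and the corpus is a toy placeholder.

```python
import random

random.seed(0)

def train(corpus, epochs, step_t, step_p):
    """Minimal sketch of the alternating scheme: repeatedly sample a
    sentence, take one step on L_t, then one step on L_p."""
    for _ in range(epochs):
        for _ in range(len(corpus)):
            sent = random.choice(corpus)
            step_t(sent)   # one optimization step for the tagging loss
            step_p(sent)   # one optimization step for the parsing loss

# Record the update order instead of actually updating parameters.
log = []
train(["s1", "s2", "s3"], epochs=2,
      step_t=lambda s: log.append(("t", s)),
      step_p=lambda s: log.append(("p", s)))
```

In the real model, the shared parameters (embeddings, character bi-LSTM, first sentence bi-LSTM layer) receive gradients from both steps, which is what makes the two tasks interact.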
{
"text": "The experiments we performed aim at assessing the role of lexicalization in transition-based constituency parsing. We describe the data (Section 5.1) and the optimization protocol (Section 5.2). Then, we discuss empirical runtime efficiency (Section 5.3), before presenting the results of our experiments (Section 5.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "To evaluate our models, we used the Negra corpus (Skut et al., 1997) , the Tiger corpus (Brants et al., 2002) , and the discontinuous version of the Penn Treebank (Evang and Kallmeyer, 2011; Marcus et al., 1993) .",
"cite_spans": [
{
"start": 49,
"end": 68,
"text": "(Skut et al., 1997)",
"ref_id": "BIBREF49"
},
{
"start": 88,
"end": 109,
"text": "(Brants et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 163,
"end": 190,
"text": "(Evang and Kallmeyer, 2011;",
"ref_id": "BIBREF19"
},
{
"start": 191,
"end": 211,
"text": "Marcus et al., 1993)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "For the Tiger corpus, we use the Statistical Parsing of Morphologically Rich Languages (SPMRL) split (Seddah et al., 2013) . We obtained the dependency labels and the morphological information for each token from the dependency treebank versions of the SPMRL release. We converted the Negra corpus to labeled dependency trees with the DEPSY tool 3 in order to annotate each token with a dependency label. We do not predict morphological attributes for the Negra corpus (only POS tags) since only a small section is annotated with a full morphological analysis. We use the standard split (Dubey and Keller, 2003) for this corpus, and no limit on sentence length. For the Penn Treebank, we use the standard split (sections 2-21 for training, 22 for development and 23 for test). We retrieved the dependency labels from the dependency version of the Penn Treebank (PTB), obtained by the Stanford Parser (de Marneffe et al., 2006) .",
"cite_spans": [
{
"start": 101,
"end": 122,
"text": "(Seddah et al., 2013)",
"ref_id": "BIBREF48"
},
{
"start": 587,
"end": 611,
"text": "(Dubey and Keller, 2003)",
"ref_id": "BIBREF15"
},
{
"start": 900,
"end": 926,
"text": "(de Marneffe et al., 2006)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We used the relevant module of discodop 4 (van Cranenburgh et al., 2016) for evaluation. It provides an F1 measure on labeled constituents, as well as an F1 score computed only on discontinuous constituents (Disc. F1). Following standard practice, we used the evaluation parameters included in discodop release (proper.prm). These parameters ignore punctuation and root symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We optimize the loss with the Averaged Stochastic Gradient Descent algorithm (Polyak and Juditsky, 1992; Bottou, 2010) using the following dimensions for embeddings and hidden layers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization and Hyperparameters",
"sec_num": "5.2"
},
{
"text": "\u2022 Feedforward network: 2 layers of 128 units with rectifiers as activation function;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization and Hyperparameters",
"sec_num": "5.2"
},
{
"text": "\u2022 The character bi-LSTM has 1 layer, with states of size 32 (in each direction);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization and Hyperparameters",
"sec_num": "5.2"
},
{
"text": "\u2022 The sentence bi-LSTM has 2 layers, with states of size 128 (in each direction);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization and Hyperparameters",
"sec_num": "5.2"
},
{
"text": "\u2022 Character embedding size: 32;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization and Hyperparameters",
"sec_num": "5.2"
},
{
"text": "\u2022 Word-embedding size: 32.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization and Hyperparameters",
"sec_num": "5.2"
},
{
"text": "We tune the learning rate ({0.01, 0.02}) and the number of iterations ({4, 8, 12, . . . , 28, 30} ) on the development sets of each corpus. All parameters, including embeddings, are randomly initialized. We use no pretrained word embeddings nor any other external data. 5 Finally, following the method of Kiperwasser and Goldberg (2016) to handle unknown words, each time we sample a sentence from the training set, we stochastically replace each word by an UNKNOWN pseudoword with a probability p w = \u03b1 #{w}+\u03b1 , where #{w} is the raw number of occurrences of w in the training set and \u03b1 is a hyperparameter set to 0.8375, as suggested by Cross and Huang (2016b) .",
"cite_spans": [
{
"start": 270,
"end": 271,
"text": "5",
"ref_id": null
},
{
"start": 305,
"end": 336,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF28"
},
{
"start": 639,
"end": 662,
"text": "Cross and Huang (2016b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 70,
"end": 97,
"text": "({4, 8, 12, . . . , 28, 30}",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Optimization and Hyperparameters",
"sec_num": "5.2"
},
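The stochastic unknown-word replacement can be sketched as follows; `stochastic_unk` is a hypothetical helper, with α = 0.8375 as in the text, so rarer training words are replaced by the UNKNOWN pseudoword more often.

```python
import random
from collections import Counter

random.seed(0)
ALPHA = 0.8375  # value suggested by Cross and Huang (2016b), as in the text

def unk_probability(word, counts, alpha=ALPHA):
    # p_w = alpha / (#{w} + alpha): the rarer the word, the higher p_w.
    return alpha / (counts[word] + alpha)

def stochastic_unk(sentence, counts):
    """Replace each word by '<UNK>' with probability p_w."""
    return ["<UNK>" if random.random() < unk_probability(w, counts) else w
            for w in sentence]

# Toy training counts: 'the' occurs 3 times, 'parser' and 'gaps' once each.
train_tokens = ["the", "the", "the", "parser", "gaps"]
counts = Counter(train_tokens)
```

A word unseen in training has count 0, hence p_w = 1, so at training time the model regularly sees the UNKNOWN embedding in contexts where rare words occur.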
{
"text": "For each experiment, we performed both training and parsing on a single CPU core. Training a single model on the Tiger corpus (i.e., the largest training corpus) took approximately a week. Parsing the 5,000 sentences of the development section of the Tiger corpus takes 53 seconds (1,454 tokens per second) for the ML-GAP model and 40 seconds (1,934 tokens per second) for the SR-GAP-UNLEX model, excluding model initialization and input-output times (Table 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 460,
"text": "(Table 5)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Runtime efficiency",
"sec_num": "5.3"
},
{
"text": "Although indicative, these runtimes compare well with other neural discontinuous parsers, e.g., Corro et al. (2017) , or to transition-based parsers using a linear classifier (Maier, 2015; Coavoux and Crabb\u00e9, 2017a) .",
"cite_spans": [
{
"start": 96,
"end": 115,
"text": "Corro et al. (2017)",
"ref_id": "BIBREF9"
},
{
"start": 175,
"end": 188,
"text": "(Maier, 2015;",
"ref_id": "BIBREF35"
},
{
"start": 189,
"end": 215,
"text": "Coavoux and Crabb\u00e9, 2017a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime efficiency",
"sec_num": "5.3"
},
{
"text": "First, we compare the results of our proposed models on the development sets, focusing on the effect of lexicalization (Section 5.4.1). Then, we present morphological analysis results (Section 5.4.2). Finally, we compare our best model to other published results on the test sets (Section 5.4.3). 2017\u22487.3 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.4"
},
{
"text": "Lexicalized vs. Unlexicalized Models We first compare the unlexicalized ML-GAP system with the ML-GAP-LEX system (Table 4 ). The former consistently obtains higher results. The F-score difference is small on English (0.1 to 0.3) but substantial on the German treebanks (more than 1.0 absolute point) and in general on discontinuous constituents (Disc. F). In order to assess the robustness of the advantage of unlexicalized models, we also compare our implementation of SR-GAP (Coavoux and Crabb\u00e9, 2017a) 6 with an unlexicalized variant (SR-GAP-UNLEX) that uses a single type of reduction (REDUCE) instead of the traditional REDUCE-RIGHT and REDUCE-LEFT actions. This second comparison exhibits the same pattern in favor of unlexicalized models.",
"cite_spans": [
{
"start": 477,
"end": 504,
"text": "(Coavoux and Crabb\u00e9, 2017a)",
"ref_id": "BIBREF6"
},
{
"start": 505,
"end": 506,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 113,
"end": 121,
"text": "(Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of Lexicalization",
"sec_num": "5.4.1"
},
{
"text": "These results suggest that lexicalization is not necessary to achieve very strong discontinuous parsing results. A possible interpretation is that the bi-LSTM transducer may implicitly learn latent lexicalization, as suggested by Kuncoro et al. (2017) , which is consistent with recent analyses of other types of syntactic information captured by LSTMs in parsing models (Gaddy et al., 2018) or language models (Linzen et al., 2016) .",
"cite_spans": [
{
"start": 230,
"end": 251,
"text": "Kuncoro et al. (2017)",
"ref_id": "BIBREF32"
},
{
"start": 371,
"end": 391,
"text": "(Gaddy et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 411,
"end": 432,
"text": "(Linzen et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Lexicalization",
"sec_num": "5.4.1"
},
{
"text": "For lexicalized models, information about the head of constituents (+LEX) have a mixed effect and brings an improvement in only half the cases. It is even slightly detrimental on English (ML-GAP-LEX).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Lexical Features",
"sec_num": null
},
{
"text": "Controlling for the Oracle Choice The advantage of unlexicalized systems could be due to the properties of its eager oracle, in particular its higher incrementality (see Section 6 for an analysis). In order to isolate the effect of the oracle, we trained ML-GAP with the head-driven oracle, i.e., the oracle used by the ML-GAP-LEX system. We observe a small drop in F-measure on English (\u22120.1) and on the Tiger corpus (\u22120.4) but no effect on the Negra corpus. However, the resulting parser still outperforms ML-GAP-LEX, with the exception of English. These results suggest Tiger (Bj\u00f6rkelund et al., 2013; POS 98.1 Complete match 91.8 Table 6 : Morphological analysis results on development sets.",
"cite_spans": [
{
"start": 579,
"end": 604,
"text": "(Bj\u00f6rkelund et al., 2013;",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 634,
"end": 641,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of Lexical Features",
"sec_num": null
},
{
"text": "that the oracle choice definitely plays a role in the advantage of ML-GAP over ML-GAP-LEX, but is not sufficient to explain the performance difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Lexical Features",
"sec_num": null
},
{
"text": "Discussion Overall, our experiments provide empirical arguments in favor of unlexicalized discontinuous parsing systems. Unlexicalized systems are arguably simpler than their lexicalized counterparts-since they have no directional (left or right) actions-and obtain better results. We further hypothesize that derivations produced by the eager oracle, which cannot be used by lexicalized systems, are easier to learn. We provide a quantitative and comparative analysis of derivations from both transition systems in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Lexical Features",
"sec_num": null
},
{
"text": "We report results for morphological analysis with the selected models (ML-GAP with BASE features for the Penn Treebank and Tiger, SR-GAP-UNLEX with BASE features for Negra) in Table 6 . For each morphological attribute, we report an accuracy score computed over every token. However, most morphological attributes are only relevant for specific part-of-speech tags. For instance, TENSE is only a feature of verbs. The accuracy metric is somewhat misleading, since the fact that the tagger predicts correctly that a token does not have an attribute is considered a correct answer. Therefore, if only 5% of tokens bore a specific morphological attribute, a 95% accuracy is a most frequent baseline score. For this reason, we also report a coverage metric (Cov.) that indicates the proportion of tokens in the corpus that possess an attribute, and an F1 measure.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 183,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tagging and Morphological Analysis",
"sec_num": "5.4.2"
},
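The point about accuracy versus F1 on sparse attributes can be made concrete with a toy computation; the helper names are hypothetical, and the numbers are chosen to match the 5%/95% example in the text.

```python
def accuracy(gold, pred):
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def f1_on_attribute(gold, pred, undef="undef"):
    """F1 computed only on tokens that actually bear the attribute,
    so predicting 'undef' everywhere earns no credit."""
    tp = sum(g == p != undef for g, p in zip(gold, pred))
    fp = sum(p != undef and g != p for g, p in zip(gold, pred))
    fn = sum(g != undef and g != p for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

gold = ["undef"] * 95 + ["dat"] * 5   # the attribute has 5% coverage
pred = ["undef"] * 100                 # most-frequent baseline

acc = accuracy(gold, pred)        # high despite finding nothing
f1 = f1_on_attribute(gold, pred)  # reveals the baseline's uselessness
```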
{
"text": "The tagger achieves close to state-of-the-art results on all three corpora. On the Tiger corpus, it slightly outperforms previous results published by Bj\u00f6rkelund et al. (2013) who used the MARMOT tagger . Morphological attributes are also very well predicted, with F1 scores above 98%, except for case and gender.",
"cite_spans": [
{
"start": 151,
"end": 175,
"text": "Bj\u00f6rkelund et al. (2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tagging and Morphological Analysis",
"sec_num": "5.4.2"
},
{
"text": "The two best performing models on the development sets are the ML-GAP (DPTB, Tiger) and the SR-GAP-UNLEX (Negra) models with BASE features. We report their results on the test sets in Table 7 . They are compared with other published results: transition-based parsers using a SWAP action (Maier, 2015; Stanojevi\u0107 and Garrido Alhama, 2017) or a GAP action (Coavoux and Crabb\u00e9, 2017a) , the pseudo-projective parser of Versley (2016) , parsers based on non-projective dependency parsing (Fern\u00e1ndez-Gonz\u00e1lez and Martins, 2015; Corro et al., 2017) , and finally chart parsers based on probabilistic LCFRS (Evang and Kallmeyer, 2011; Gebhardt, 2018) or dataoriented parsing (van Cranenburgh et al., 2016) . Note that some of these publications report results in a gold POS-tag scenario, a much easier experimental setup that is not comparable to ours (bottom part of the table). In Table 7 , we also indicate models that use a neural scoring system with a ' * '.",
"cite_spans": [
{
"start": 287,
"end": 300,
"text": "(Maier, 2015;",
"ref_id": "BIBREF35"
},
{
"start": 301,
"end": 337,
"text": "Stanojevi\u0107 and Garrido Alhama, 2017)",
"ref_id": "BIBREF51"
},
{
"start": 354,
"end": 381,
"text": "(Coavoux and Crabb\u00e9, 2017a)",
"ref_id": "BIBREF6"
},
{
"start": 416,
"end": 430,
"text": "Versley (2016)",
"ref_id": "BIBREF55"
},
{
"start": 484,
"end": 522,
"text": "(Fern\u00e1ndez-Gonz\u00e1lez and Martins, 2015;",
"ref_id": "BIBREF20"
},
{
"start": 523,
"end": 542,
"text": "Corro et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 600,
"end": 627,
"text": "(Evang and Kallmeyer, 2011;",
"ref_id": "BIBREF19"
},
{
"start": 628,
"end": 643,
"text": "Gebhardt, 2018)",
"ref_id": "BIBREF22"
},
{
"start": 668,
"end": 698,
"text": "(van Cranenburgh et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 7",
"ref_id": "TABREF9"
},
{
"start": 876,
"end": 883,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "External Comparisons",
"sec_num": "5.4.3"
},
{
"text": "Our models obtain state-of-the-art results and outperform every other system, including the LSTM-based parser of Stanojevi\u0107 and Garrido Alhama (2017) that uses a SWAP action to predict discontinuities. This observation confirms in another setting the results of Coavoux and Crabb\u00e9 (2017a) , namely that GAP transition systems have more desirable properties than SWAP transition systems.",
"cite_spans": [
{
"start": 262,
"end": 288,
"text": "Coavoux and Crabb\u00e9 (2017a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "External Comparisons",
"sec_num": "5.4.3"
},
{
"text": "In this section, we investigate empirical properties of the transition systems evaluated in the previous section. A key difference between lexicalized and unlexicalized systems is that the latter are arguably simpler: they do not have to assign heads to new constituents. As a result, they need fewer types of distinct transitions, and they have simpler decisions to make. Furthermore, they do not run the risk of error propagation from wrong head assignments. We argue that an important consequence of the simplicity of unlexicalized systems is that their derivations are easier to learn. In particular, ML-GAP derivations have a better incrementality than those of ML-GAP-LEX (Section 6.1) and are more economical in terms of number of GAP actions needed to derive discontinuous trees (Section 6.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Analysis",
"sec_num": "6"
},
{
"text": "We adopt the definition of incrementality of Nivre (2004) : an incremental algorithm minimizes the number of connected components in the stack during parsing. An unlexicalized system can construct a new constituent by incorporating each new component immediately, whereas a lexicalized system waits until it has shifted the head of a constituent before starting to build system can construct partial structures as soon as there are two elements with the same parent node in the stack. 8 We report the average number of connected components in the stack during a derivation computed by an oracle for each transition system in Table 8 . The unlexicalized transition system ML-GAP has a better incrementality. On average, it maintains a smaller stack. This is an advantage since parsing decisions rely on information extracted from the stack and smaller localized stacks are easier to represent.",
"cite_spans": [
{
"start": 45,
"end": 57,
"text": "Nivre (2004)",
"ref_id": "BIBREF42"
},
{
"start": 485,
"end": 486,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 625,
"end": 632,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Incrementality",
"sec_num": "6.1"
},
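The incrementality measure can be illustrated with a toy replay of derivations; the SHIFT/MERGE action names mirror the transition systems above, but the two example derivations are simplified stand-ins for the eager and head-driven oracles, not the paper's actual oracle output.

```python
def mean_stack_size(derivation):
    """Replay a derivation and average the number of connected
    components on the stack after each action."""
    stack, sizes = 0, []
    for action in derivation:
        if action == "SHIFT":
            stack += 1   # a new single-token component enters the stack
        elif action == "MERGE":
            stack -= 1   # two components fuse into one constituent
        sizes.append(stack)
    return sum(sizes) / len(sizes)

# An eager order merges each new component immediately...
eager = ["SHIFT", "SHIFT", "MERGE", "SHIFT", "MERGE", "SHIFT", "MERGE"]
# ...while a head-driven order delays merges until the head is shifted.
head_driven = ["SHIFT", "SHIFT", "SHIFT", "SHIFT", "MERGE", "MERGE", "MERGE"]
```

Replaying both sequences shows the eager order keeps strictly fewer components on the stack on average, which is the sense in which the unlexicalized system is more incremental.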
{
"text": "The GAP actions are supposedly the most difficult to predict, because they involve long distance information. They also increase the length of a derivation and make the parser more prone to error propagation. We expect that a transition system that is able to predict a discontinuous tree more efficiently, in terms of number of GAP actions, to be a better choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of GAP Actions",
"sec_num": "6.2"
},
{
"text": "We report in Table 9 the number of GAP actions necessary to derive the discontinuous trees for several corpora and for several transition systems (using oracles). We also report the average and maximum number of consecutive GAP actions in each case. For English and German, the unlexicalized transition system ML-GAP needs much fewer GAP actions to derive discontinuous trees (approximately 45% fewer). The average number of consecutive GAP actions is also smaller (as well as the maximum for German corpora). On average, the elements in the stack (S) that need to combine with the top of D are closer to the top of S with the ML-GAP transition system 8 SH, SH, M(ERGE), SH, M, SH, M, SH, M, LABEL-NP. than with lexicalized systems. This observation is not surprising; since ML-GAP can start constructing constituents before having access to their lexical head, it can construct larger structures before having to GAP them.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 9",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Number of GAP Actions",
"sec_num": "6.2"
},
{
"text": "In this section, we provide an error analysis focused on the predictions of the ML-GAP model on the discontinuous constituents of the discontinuous PTB. It is aimed at understanding which types of long-distance dependencies are easy or hard to predict and providing insights for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "We manually compared the gold and predicted trees from the development set that contained at least one discontinuous constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "7.1"
},
{
"text": "Out of 278 sentences in the development set containing a discontinuity (excluding those in which the discontinuity is only due to punctuation attachment), 165 were exact matches for discontinuous constituents and 113 contained at least one error. Following Evang (2011), we classified errors according to the phenomena producing a discontinuity. We used the following typology, 9 illustrated by examples where the main discontinuous constituent is highlighted in bold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "7.1"
},
{
"text": "\u2022 Wh-extractions: What should I do?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "7.1"
},
{
"text": "\u2022 Fronted quotations: ''Absolutely'', he said.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "7.1"
},
{
"text": "\u2022 Extraposed dependent: In April 1987, evidence surfaced that commissions were paid. \u2022 Circumpositioned quotations: In general, they say, avoid takeover stocks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "7.1"
},
{
"text": "\u2022 It-extrapositions: ''It's better to wait.''",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "7.1"
},
{
"text": "\u2022 Subject-verb inversion: Said the spokeswoman: ''The whole structure has changed.''",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "7.1"
},
{
"text": "For each phenomenon occurrence, we manually classified the output of the parser in one of the following categories: (i) perfect match, (ii) partial match, and (iii) false negative. Partial matches are cases where the parser identified the phenomenon involved but made a mistake regarding the labelling of a discontinuous constituent (e.g., S instead of SBAR) or its scope. The latter case includes, e.g., occurrences where the parser found an extraction, but failed to find the correct extraction site. Finally, we also report false positives for each phenomenon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "7.1"
},
{
"text": "First of all, the parser tends to be conservative when predicting discontinuities: there are in general few false positives. The 72.0 discontinuous F1 (Table 4) indeed decomposes in a precision of 78.4 and a recall of 66.6. This does not seem to be a property of our parser, as other authors also report systematically higher precisions than recalls (Maier, 2015; Stanojevi\u0107 and Garrido Alhama, 2017) . Instead, the scarcity of discontinuities in the data might be a determining factor: only 20% of sentences in the Discontinuous Penn Treebank contain at least one discontinuity and 30% of sentences in the Negra and Tiger corpus. Analysis results are presented in Table 10 . For wh-extractions, there are two main causes of errors. The first one is an ambiguity on the extraction site. For example, in the relative clause which many clients didn't know about, instead of predicting a discontinuous PP, where which is the complement of about, the parser attached which as a complement of know. Another source of error (both for false positives and false negatives) is the ambiguity of that-clauses, that can be either completive clauses 10 or relative clauses. 11 Phenomena related to quotations are rather well identified probably due to the fact that they are frequent in newspaper data and exhibit regular patterns (quotation marks, speech verbs). However, a difficulty in identifying circumpositioned quotations arises when there are no quotation marks, to determine what the precise scope of the quotation is.",
"cite_spans": [
{
"start": 350,
"end": 363,
"text": "(Maier, 2015;",
"ref_id": "BIBREF35"
},
{
"start": 364,
"end": 400,
"text": "Stanojevi\u0107 and Garrido Alhama, 2017)",
"ref_id": "BIBREF51"
},
{
"start": 1161,
"end": 1163,
"text": "11",
"ref_id": null
}
],
"ref_spans": [
{
"start": 151,
"end": 160,
"text": "(Table 4)",
"ref_id": null
},
{
"start": 665,
"end": 673,
"text": "Table 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7.2"
},
{
"text": "Finally, the hardest types of discontinuity for the parser are extrapositions. Contrary to previously discussed phenomena, there is usually no lexical trigger (wh-word, speech verb) that makes these discontinuities easy to spot. Most cases involve modifier attachment ambiguities, which are known to be hard to solve (Kummerfeld et al., 2012) and often require some world knowledge.",
"cite_spans": [
{
"start": 317,
"end": 342,
"text": "(Kummerfeld et al., 2012)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7.2"
},
{
"text": "We have introduced an unlexicalized transitionbased discontinuous constituency parsing model. 12 We have compared it, in identical experimental settings, with its lexicalized counterpart in order to provide insights on the effect of lexicalization as a parser design choice.",
"cite_spans": [
{
"start": 94,
"end": 96,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "We found that lexicalization is not necessary to achieve very high parsing results in discontinuous constituency parsing, a result consistent with previous studies on lexicalization in projective constituency parsing (Klein and Manning, 2003; Cross and Huang, 2016b) . A study of empirical properties of our transition systems suggested explanations for the performance difference, by showing that the unlexicalized system produces shorter derivations and has a better incrementality. Finally, we presented a qualitative analysis of our parser's errors on discontinuous constituents.",
"cite_spans": [
{
"start": 217,
"end": 242,
"text": "(Klein and Manning, 2003;",
"ref_id": "BIBREF30"
},
{
"start": 243,
"end": 266,
"text": "Cross and Huang, 2016b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The systems exhibit spurious ambiguity for constructing n-ary (n > 2) constituents. We leave the exploration of nondeterministic oracles to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nats-www.informatik.uni-hamburg. de/pub/CDG/DownloadPage/cdg-2006-06-21.tar.gz We modified DEPSY to keep the same tokenization as the original corpus.4 https://github.com/andreasvc/disco-dop",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We leave to future work the investigation of the effect of pretrained word embeddings and semi-supervised learning methods, such as tritraining, that have been shown to be effective in recent work on projective constituency parsing (Choe andCharniak, 2016;Kitaev and Klein, 2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is not the same model as Coavoux and Crabb\u00e9 (2017a) since our experiments use the statistical model presented in Section 4, with joint morphological analysis, whereas they use a structured perceptron and require a POS-tagged input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "SH(IFT), SH, SH, SH, SH, M(ERGE)-R(IGHT), M-R, M-R, M-R, LABEL-NP. (NO-LABEL actions are omitted.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These categories cover all cases in the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "(NP the consensus . . . (SBAR that the Namibian guerrillas were above all else the victims of suppression by neighboring South Africa.)) 11 (NP the place (SBAR that world opinion has been celebrating over))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Kilian Evang and Laura Kallmeyer for providing us with the Discontinuous Penn Treebank. We thank Caio Corro, Sacha Beniamine, TACL reviewers, and action editor Stephen Clark for feedback that helped improve the paper. Our implementation makes use of the Eigen C++ library (Guennebaud and Jacob, 2010) , treetools, 13 and discodop. 14 MC and SC gratefully acknowledge the support of Huawei Technologies.",
"cite_spans": [
{
"start": 281,
"end": 309,
"text": "(Guennebaud and Jacob, 2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A distributional analysis of a lexicalized statistical parsing model",
"authors": [
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "182--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel M. Bikel. 2004. A distributional analysis of a lexicalized statistical parsing model. In Proceedings of EMNLP 2004, pages 182-189. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Re)ranking meets morphosyntax: Stateof-the-art results from the SPMRL 2013 shared 12 The source code of the parser is released with pretrained models at",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Cetinoglu",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Seeker",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages",
"volume": "",
"issue": "",
"pages": "135--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Bj\u00f6rkelund, Ozlem Cetinoglu, Rich\u00e1rd Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (Re)ranking meets morphosyntax: State- of-the-art results from the SPMRL 2013 shared 12 The source code of the parser is released with pretrained models at https://github.com/mcoavoux/mtg_ TACL. 13 https://github.com/wmaier/treetools 14 https://github.com/andreasvc/disco-dop task. In Proceedings of the Fourth Work- shop on Statistical Parsing of Morphologically- Rich Languages, pages 135-145, Seattle, Washington, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Large-scale machine learning with stochastic gradient descent",
"authors": [
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT'2010)",
"volume": "",
"issue": "",
"pages": "177--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L\u00e9on Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In Proceed- ings of the 19th International Conference on Computational Statistics (COMPSTAT'2010), pages 177-187, Paris, France. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Tiger treebank",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. Tiger treebank. In Proceedings of the Work- shop on Treebanks and Linguistic Theories, September 20-21 (TLT02). Sozopol, Bulgaria.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Statistical parsing with a context-free grammar and word statistics",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence, AAAI'97/IAAI'97",
"volume": "",
"issue": "",
"pages": "598--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the Fourteenth National Conference on Artificial Intelligence and Ninth Conference on Innovative Applications of Artificial Intelligence, AAAI'97/IAAI'97, pages 598-603. AAAI Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parsing as language modeling",
"authors": [
{
"first": "Do Kook",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2331--2336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331-2336, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incremental discontinuous phrase structure parsing with the gap transition",
"authors": [
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Crabb\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1259--1270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximin Coavoux and Benoit Crabb\u00e9. 2017a. Incremental discontinuous phrase structure parsing with the gap transition. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1259-1270, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multilingual lexicalized constituency parsing with word-level auxiliary tasks",
"authors": [
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Crabb\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "331--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximin Coavoux and Benoit Crabb\u00e9. 2017b. Multilingual lexicalized constituency parsing with word-level auxiliary tasks. In Proceed- ings of the 15th Conference of the European Chapter of the Association for Computa- tional Linguistics: Volume 2, Short Papers, pages 331-336, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 16-23, Madrid, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient discontinuous phrasestructure parsing via the generalized maximum spanning arborescence",
"authors": [
{
"first": "Caio",
"middle": [],
"last": "Corro",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Lacroix",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1645--1655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caio Corro, Joseph Le Roux, and Mathieu Lacroix. 2017. Efficient discontinuous phrase- structure parsing via the generalized maximum spanning arborescence. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 1645-1655, Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An LR-inspired generalized lexicalized phrase structure parser",
"authors": [
{
"first": "Benoit",
"middle": [],
"last": "Crabb\u00e9",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "541--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benoit Crabb\u00e9. 2014. An LR-inspired generalized lexicalized phrase structure parser. In Proceed- ings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 541-552, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Discontinuous parsing with an efficient and accurate DOP model",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Van Cranenburgh",
"suffix": ""
},
{
"first": "Rens",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of IWPT",
"volume": "",
"issue": "",
"pages": "7--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas van Cranenburgh and Rens Bod. 2013. Discontinuous parsing with an efficient and accurate DOP model. In Proceedings of IWPT, pages 7-16.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Data-oriented parsing with discontinuous constituents and function tags",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Van Cranenburgh",
"suffix": ""
},
{
"first": "Remko",
"middle": [],
"last": "Scha",
"suffix": ""
},
{
"first": "Rens",
"middle": [],
"last": "Bod",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Language Modelling",
"volume": "4",
"issue": "1",
"pages": "57--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas van Cranenburgh, Remko Scha, and Rens Bod. 2016. Data-oriented parsing with discontinuous constituents and function tags. Journal of Language Modelling, 4(1):57-111.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Incremental parsing with minimal features using bidirectional LSTM",
"authors": [
{
"first": "James",
"middle": [],
"last": "Cross",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "32--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Cross and Liang Huang. 2016a. Incremen- tal parsing with minimal features using bi- directional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 32-37, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Spanbased constituency parsing with a structurelabel system and provably optimal dynamic oracles",
"authors": [
{
"first": "James",
"middle": [],
"last": "Cross",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Cross and Liang Huang. 2016b. Span- based constituency parsing with a structure- label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1-11, Austin, Texas. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Probabilistic parsing for german using sisterhead dependencies",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Dubey",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "96--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Dubey and Frank Keller. 2003. Probabilistic parsing for german using sister- head dependencies. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 96-103, Sapporo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural CRF parsing",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "302--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2015. Neural CRF parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 302-312, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Recurrent neural network grammars",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "199--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Re- current neural network grammars. In Pro- ceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Parsing discontinuous constituents in English",
"authors": [
{
"first": "Kilian",
"middle": [],
"last": "Evang",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilian Evang. 2011. Parsing discontinuous con- stituents in English. Ph.D. thesis, Masters thesis, University of T\u00fcbingen.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "PLCFRS parsing of english discontinuous constituents",
"authors": [
{
"first": "Kilian",
"middle": [],
"last": "Evang",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Kallmeyer",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 12th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "104--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilian Evang and Laura Kallmeyer. 2011. PLCFRS parsing of english discontinuous constituents. In Proceedings of the 12th Inter- national Conference on Parsing Technologies, pages 104-116, Dublin, Ireland. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Parsing as reduction",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1523--1533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Andr\u00e9 F. T. Martins. 2015. Parsing as reduction. In Pro- ceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1523-1533, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "What's going on in neural constituency parsers? an analysis",
"authors": [
{
"first": "David",
"middle": [],
"last": "Gaddy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "999--1010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Gaddy, Mitchell Stern, and Dan Klein. 2018. What's going on in neural constituency parsers? an analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 999-1010, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generic refinement of expressive grammar formalisms with an application to discontinuous constituent parsing",
"authors": [
{
"first": "Kilian",
"middle": [],
"last": "Gebhardt",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3049--3063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilian Gebhardt. 2018. Generic refinement of ex- pressive grammar formalisms with an appli- cation to discontinuous constituent parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3049-3063, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Corpus variation and parser performance",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "167--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea. 2001. Corpus variation and parser performance. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 167-202.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Eigen v3",
"authors": [
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Guennebaud",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Jacob",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ga\u00ebl Guennebaud and Beno\u00eet Jacob. 2010. Eigen v3. http://eigen.tuxfamily.org.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Less grammar, more features",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "228--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hall, Greg Durrett, and Dan Klein. 2014. Less grammar, more features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 228-237, Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Parsing Discontinuous Phrase Structure with Grammatical Functions",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Hall and Joakim Nivre. 2008. Parsing Discontinuous Phrase Structure with Gram- matical Functions, Springer Berlin Heidelberg, Berlin, Heidelberg.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Data-driven parsing with probabilistic linear context-free rewriting systems",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Kallmeyer",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Maier",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "537--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Kallmeyer and Wolfgang Maier. 2010. Data-driven parsing with probabilistic linear context-free rewriting systems. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 537-545, Beijing, China. Coling 2010 Organizing Committee.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature represen- tations. Transactions of the Association for Computational Linguistics, 4:313-327.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Constituency parsing with a self-attentive encoder",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2676--2686",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Pro- ceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423-430, Sapporo, Japan. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Parser showdown at the wall street corral: An empirical investigation of error types in parser output",
"authors": [
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1048--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan K. Kummerfeld, David Hall, James R. Curran, and Dan Klein. 2012. Parser showdown at the wall street corral: An empirical investigation of error types in parser output. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1048-1059, Jeju Island, Korea. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "What do recurrent neural network grammars learn about syntax?",
"authors": [
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "1249--1258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1249-1258, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Assessing the ability of lstms to learn syntax-sensitive dependencies",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Linzen",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Dupoux",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "521--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntax-sensitive dependencies. Trans- actions of the Association of Computational Linguistics, 4:521-535.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "In-order transition-based constituent parsing",
"authors": [
{
"first": "Jiangming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "413--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiangming Liu and Yue Zhang. 2017. In-order transition-based constituent parsing. Transac- tions of the Association for Computational Linguistics, 5:413-424.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Discontinuous incremental shift-reduce parsing",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Maier",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1202--1212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Maier. 2015. Discontinuous incremen- tal shift-reduce parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1202-1212, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Discontinuous parsing with continuous trees",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Maier",
"suffix": ""
},
{
"first": "Timm",
"middle": [],
"last": "Lichte",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Discontinuous Structures in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "47--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Maier and Timm Lichte. 2016. Dis- continuous parsing with continuous trees. In Proceedings of the Workshop on Discontinuous Structures in Natural Language Processing, pages 47-57, San Diego, California. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Building a large annotated corpus of English: The Penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, Volume 19, Number 2, June 1993, Special Issue on Using Large Corpora: II.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06). European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Fifth Interna- tional Conference on Language Resources and Evaluation (LREC'06). European Language Resources Association (ELRA).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Probabilistic CFG with latent annotations",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "75--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 75-82, Ann Arbor, Michigan. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Efficient higher-order CRFs for morphological tagging",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "322--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Mueller, Helmut Schmid, and Hinrich Sch\u00fctze. 2013. Efficient higher-order CRFs for morphological tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322-332, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Optimizing spectral learning for parsing",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1546--1556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan and Shay B. Cohen. 2016. Optimizing spectral learning for parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1546-1556, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Incrementality in deterministic dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL Workshop Incremental Parsing: Bringing Engineering and Cognition Together",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the ACL Workshop Incremental Parsing: Bringing Engineering and Cognition Together, pages 50-57, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Non-projective dependency parsing in expected linear time",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "351--359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351-359. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "433--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433-440. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "412--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412-418, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Acceleration of stochastic approximation by averaging",
"authors": [
{
"first": "Boris",
"middle": [
"T"
],
"last": "Polyak",
"suffix": ""
},
{
"first": "Anatoli",
"middle": [
"B"
],
"last": "Juditsky",
"suffix": ""
}
],
"year": 1992,
"venue": "SIAM Journal on Control and Optimization",
"volume": "30",
"issue": "4",
"pages": "838--855",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boris T. Polyak and Anatoli B. Juditsky. 1992. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A classifier-based parser with linear run-time complexity",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125-132, Vancouver, British Columbia. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages",
"authors": [
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
},
{
"first": "Reut",
"middle": [],
"last": "Tsarfaty",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Candito",
"suffix": ""
},
{
"first": "Jinho",
"middle": [
"D"
],
"last": "Choi",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Iakes",
"middle": [],
"last": "Goenaga",
"suffix": ""
},
{
"first": "Koldo",
"middle": [],
"last": "Gojenola Galletebeitia",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Maier",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Przepi\u00f3rkowski",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Seeker",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Woli\u0144ski",
"suffix": ""
},
{
"first": "Alina",
"middle": [],
"last": "Wr\u00f3blewska",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Villemonte de la Clergerie",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages",
"volume": "",
"issue": "",
"pages": "146--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Djam\u00e9 Seddah, Reut Tsarfaty, Sandra K\u00fcbler, Marie Candito, Jinho D. Choi, Rich\u00e1rd Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi\u00f3rkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli\u0144ski, Alina Wr\u00f3blewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146-182, Seattle, Washington, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "An annotation scheme for free word order languages",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "Skut",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Krenn",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wojciech Skut, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1997. An annotation scheme for free word order languages. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 88-95, Washington, DC, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Deep multi-task learning with low level tasks supervised at lower layers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "231--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231-235, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Neural discontinuous constituency parsing",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Garrido Alhama",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1667--1677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Stanojevi\u0107 and Raquel Garrido Alhama. 2017. Neural discontinuous constituency parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1667-1677, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "A minimal span-based neural constituency parser",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "818--827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818-827, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Experiments with easy-first nonprojective constituent parsing",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages",
"volume": "",
"issue": "",
"pages": "39--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Versley. 2014a. Experiments with easy-first nonprojective constituent parsing. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 39-53, Dublin, Ireland. Dublin City University.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Incorporating semi-supervised features into discontinuous easy-first constituent parsing",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Versley. 2014b. Incorporating semi-supervised features into discontinuous easy-first constituent parsing. CoRR, abs/1409.3813v1.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Discontinuity (re)\u00b2-visited: A minimalist approach to pseudoprojective constituent parsing",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Discontinuous Structures in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "58--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Versley. 2016. Discontinuity (re)\u00b2-visited: A minimalist approach to pseudoprojective constituent parsing. In Proceedings of the Workshop on Discontinuous Structures in Natural Language Processing, pages 58-69, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Feature optimization for constituent parsing via neural networks",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1138--1147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Haitao Mi, and Nianwen Xue. 2015. Feature optimization for constituent parsing via neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1138-1147, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Transition-based neural constituent parsing",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1169--1179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe and Eiichiro Sumita. 2015. Transition-based neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1169-1179, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Transition-based parsing of the Chinese treebank using a global discriminative model",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09)",
"volume": "",
"issue": "",
"pages": "162--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09), pages 162-171, Paris, France. Association for Computational Linguistics.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Syntactic processing using the generalized perceptron and beam search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "1",
"pages": "105--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105-151.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Fast and accurate shift-reduce constituent parsing",
"authors": [
{
"first": "Muhua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "434--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 434-443, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Transactions of the Association for Computational Linguistics, vol. 7, pp. 73-89, 2019. Action Editor: Stephen Clark. Submission batch: 9/2018; Revision batch: 11/2018; Published 4/2019. c 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Tree from the Discontinuous Penn Treebank",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td/><td/><td/><td>S</td><td/><td/><td/></tr><tr><td/><td/><td colspan=\"5\">\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510</td></tr><tr><td/><td/><td>VP</td><td/><td>\u2502</td><td/><td>\u2502</td></tr><tr><td/><td colspan=\"5\">\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \u2502 \u2500\u2500\u2510</td><td>\u2502</td></tr><tr><td/><td>NP</td><td/><td/><td colspan=\"2\">NP \u2502</td><td>\u2502</td></tr><tr><td colspan=\"4\">\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510</td><td>\u2502</td><td>\u2502</td><td>\u2502</td></tr><tr><td>DT</td><td>JJ</td><td>JJ</td><td colspan=\"4\">NN PRP VBZ .</td></tr><tr><td>\u2502</td><td>\u2502</td><td>\u2502</td><td>\u2502</td><td>\u2502</td><td>\u2502</td><td>\u2502</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "An excellent environmental actor he is ."
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "The ML-GAP transition system, an unlexicalized transition system for discontinuous constituency parsing."
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Feature template set descriptions."
},
"TABREF6": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Running times on development sets of the Tiger and the DPTB, reported in tokens per second (tok/s) and sentences per second (sent/s). Runtimes are only indicative; they are not comparable with those reported by other authors, since they use different hardware."
},
"TABREF9": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Discontinuous parsing results on the test sets."
},
"TABREF11": {
"content": "<table><tr><td>the constituent. For example, to construct the</td></tr><tr><td>following head-final NP,</td></tr><tr><td>NP[actor]</td></tr><tr><td>An excellent environmental actor</td></tr><tr><td>a lexicalized system must shift every token before</td></tr><tr><td>starting reductions in order to be able to predict</td></tr><tr><td>the dependency arcs between the head actor and</td></tr><tr><td>its three dependents. 7 In contrast, an unlexicalized</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "Incrementality measured by the average size of the stack during derivations. The average is calculated across all configurations (not across all sentences)."
},
"TABREF12": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "GAP action statistics in training sets."
},
"TABREF14": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Evaluation statistics per phenomenon. G: gold occurrences; PfM: perfect match; PaM: partial match; FN: false negatives; FP: false positives."
}
}
}
}