| { |
| "paper_id": "J16-3001", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:01:48.372548Z" |
| }, |
| "title": "Transition-Based Parsing for Deep Dependency Structures", |
| "authors": [ |
| { |
| "first": "Xun", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Peking University", |
| "location": {} |
| }, |
| "email": "zhangxunah@pku.edu" |
| }, |
| { |
| "first": "Yantao", |
| "middle": [], |
| "last": "Du", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "duyantao@pku.edu" |
| }, |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "wanxiaojun@pku.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "Derivations under different grammar formalisms allow extraction of various dependency structures. Particularly, bilexical deep dependency structures beyond surface tree representation can be derived from linguistic analysis grounded by CCG, LFG, and HPSG. Traditionally, these dependency structures are obtained as a by-product of grammar-guided parsers. In this article, we study the alternative data-driven, transition-based approach, which has achieved great success for tree parsing, to build general dependency graphs. We integrate existing tree parsing techniques and present two new transition systems that can generate arbitrary directed graphs in an incremental manner. Statistical parsers that are competitive in both accuracy and efficiency can be built upon these transition systems. Furthermore, the heterogeneous design of transition systems yields diversity of the corresponding parsing models and thus greatly benefits parser ensemble. Concerning the disambiguation problem, we introduce two new techniques, namely, transition combination and tree approximation, to improve parsing quality. Transition combination makes every action performed by a parser significantly change configurations. Therefore, more distinct features can be extracted for statistical disambiguation. With the same goal of extracting informative features, tree approximation induces tree backbones from dependency graphs and re-uses tree parsing techniques to produce tree-related features. We conduct experiments on CCG-grounded functor-argument analysis, LFG-grounded grammatical relation analysis, and HPSG-grounded semantic dependency analysis for English and Chinese. Experiments demonstrate that data-driven models with appropriate transition systems can produce high-quality deep dependency analysis, comparable to more complex grammar-driven models.",
| "pdf_parse": { |
| "paper_id": "J16-3001", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "Derivations under different grammar formalisms allow extraction of various dependency structures. Particularly, bilexical deep dependency structures beyond surface tree representation can be derived from linguistic analysis grounded by CCG, LFG, and HPSG. Traditionally, these dependency structures are obtained as a by-product of grammar-guided parsers. In this article, we study the alternative data-driven, transition-based approach, which has achieved great success for tree parsing, to build general dependency graphs. We integrate existing tree parsing techniques and present two new transition systems that can generate arbitrary directed graphs in an incremental manner. Statistical parsers that are competitive in both accuracy and efficiency can be built upon these transition systems. Furthermore, the heterogeneous design of transition systems yields diversity of the corresponding parsing models and thus greatly benefits parser ensemble. Concerning the disambiguation problem, we introduce two new techniques, namely, transition combination and tree approximation, to improve parsing quality. Transition combination makes every action performed by a parser significantly change configurations. Therefore, more distinct features can be extracted for statistical disambiguation. With the same goal of extracting informative features, tree approximation induces tree backbones from dependency graphs and re-uses tree parsing techniques to produce tree-related features. We conduct experiments on CCG-grounded functor-argument analysis, LFG-grounded grammatical relation analysis, and HPSG-grounded semantic dependency analysis for English and Chinese. Experiments demonstrate that data-driven models with appropriate transition systems can produce high-quality deep dependency analysis, comparable to more complex grammar-driven models.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The derivations licensed by a grammar under deep grammar formalisms, for example, combinatory categorial grammar (CCG; Steedman 2000), lexical-functional grammar (LFG; Bresnan and Kaplan 1982) and head-driven phrase structure grammar (HPSG; Pollard and Sag 1994) , are able to produce rich linguistic information encoded as bilexical dependencies. Under CCG, this is done by relating the lexical heads of functor categories and their arguments (Clark, Hockenmaier, and Steedman 2002) . Under LFG, bilexical grammatical relations can be easily derived as the backbone of F-structures (Sun et al. 2014) . Under HPSG, predicate-argument structures (Miyao, Ninomiya, and ichi Tsujii 2004) or reduction of minimal recursion semantics (Ivanova et al. 2012 ) can be extracted from typed feature structures corresponding to whole sentences. Dependency analysis grounded in deep grammar formalisms is usually beyond tree representations and well-suited for producing meaning representations. Figure 1 is an example from CCGBank. The deep dependency graph conveniently represents more semantically motivated information than the surface tree. For instance, it directly captures the Agent-Predicate relations between word \"people\" and conjuncts \"fight,\" \"eat,\" as well as \"drink.\"", |
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 192, |
| "text": "Bresnan and Kaplan 1982)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 241, |
| "end": 262, |
| "text": "Pollard and Sag 1994)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 444, |
| "end": 483, |
| "text": "(Clark, Hockenmaier, and Steedman 2002)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 583, |
| "end": 600, |
| "text": "(Sun et al. 2014)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 645, |
| "end": 684, |
| "text": "(Miyao, Ninomiya, and ichi Tsujii 2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 729, |
| "end": 749, |
| "text": "(Ivanova et al. 2012", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 983, |
| "end": 991, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "Automatically building deep dependency structures is desirable for many practical NLP applications, for example, information extraction and question answering (Reddy, Lapata, and Steedman 2014) . Traditionally, deep dependency graphs are generated as a by-product of grammar-guided parsers. The challenge is that a deep-grammar-guided parsing model usually cannot produce full coverage and the time complexity of the corresponding parsing algorithms is very high. Previous work on data-driven dependency parsing mainly focused on tree-shaped representations. Nevertheless, recent work has shown that a data-driven approach is also applicable to generate more general linguistic graphs. Sagae and Tsujii (2008) present an initial study on applying transition-based methods to generate HPSG-style predicate-argument structures, and have obtained competitive results. Furthermore, Titov et al. (2009) and Henderson et al. (2013) have shown that more general graphs than planar ones can be produced by augmenting existing transition systems.",
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 193, |
| "text": "(Reddy, Lapata, and Steedman 2014)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 686, |
| "end": 709, |
| "text": "Sagae and Tsujii (2008)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 878, |
| "end": 897, |
| "text": "Titov et al. (2009)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 902, |
| "end": 925, |
| "text": "Henderson et al. (2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "This work follows early encouraging research and studies transition-based approaches to construct deep dependency graphs. The computational challenge to incremental graph spanning is the existence of a large number of crossing arcs in deep dependency analysis. To tackle this problem, we integrate insightful ideas, especially the ones illustrated in Nivre (2009) and G\u00f3mez-Rodr\u00edguez and Nivre (2010) , developed in the tree spanning scenario, and design two new transition systems, both of which are able to produce arbitrary directed graphs. In particular, we explore two techniques to localize transition actions to maximize the effect of a greedy search procedure. In this way, the corresponding parsers for generating linguistically motivated bilexical graphs can process sentences in close to linear time with respect to the number of input words. This efficiency advantage allows deep linguistic processing for very-large-scale text data.", |
| "cite_spans": [ |
| { |
| "start": 351, |
| "end": 363, |
| "text": "Nivre (2009)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 368, |
| "end": 400, |
| "text": "G\u00f3mez-Rodr\u00edguez and Nivre (2010)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "For syntactic parsing, ensembled methods have been shown to be very helpful in boosting accuracy (Sagae and Lavie 2006; Zhang et al. 2009; McDonald and Nivre 2011) . In particular, Surdeanu and Manning (2010) presented a nice comparative study on various ensemble models for dependency tree parsing. They found that the diversity of base parsers is more important than complex ensemble models for learning. Motivated by this observation, the authors proposed a hybrid transition-based parser that achieved state-of-the-art performance by combining complementary prediction powers of different transition systems. One advantage of their architecture is the linear-time decoding complexity, given that all base models run in linear-time. Another concern of our work is about the model diversity obtained by the heterogeneous design of transition systems for general graph spanning. Empirical evaluation indicates that statistical parsers built upon our new transition systems as well as the existing best transition system-namely, Titov et al. (2009) 's system (THMM, hereafter)-exhibit complementary parsing strengths, which benefit system combination. In order to take advantage of this model diversity, we propose a simple yet effective ensemble model to build a better hybrid system.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 119, |
| "text": "(Sagae and Lavie 2006;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 120, |
| "end": 138, |
| "text": "Zhang et al. 2009;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 139, |
| "end": 163, |
| "text": "McDonald and Nivre 2011)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 181, |
| "end": 208, |
| "text": "Surdeanu and Manning (2010)", |
| "ref_id": null |
| }, |
| { |
| "start": 1029, |
| "end": 1048, |
| "text": "Titov et al. (2009)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We implement statistical parsers using the structured perceptron algorithm (Collins 2002) for transition classification and use a beam decoder for global inference. Concerning the disambiguation problem, we introduce two new techniques, namely, transition combination and tree approximation, to improve parsing quality. To increase system coverage, the ARC transitions designed by the THMM as well as our systems do not change the nodes in the stack nor buffer in a configuration: Only the nodes linked to the top of the stack or buffer are modified. Therefore, features derived from the configurations before and after an ARC transition are not distinct enough to train a good classifier. To deal with this problem, we propose the transition combination technique and three algorithms to derive oracles for modified transition systems. When we apply our models to semantics-oriented deep dependency structures, for example, CCG-grounded functor-argument analysis and HPSG-grounded reduced minimal recursion semantics (MRS; Copestake et al. 2005 ) analysis, we find that syntactic trees can provide very helpful features. In case the syntactic information is not available, we introduce a tree approximation technique to induce tree backbones from deep dependency graphs. Such tree backbones can be utilized to train a tree parser which provides pseudo tree features.", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 89, |
| "text": "(Collins 2002)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1024, |
| "end": 1045, |
| "text": "Copestake et al. 2005", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "To evaluate transition-based models for deep dependency parsing, we conduct experiments on CCG-grounded functor-argument analysis (Hockenmaier and Steedman 2007; Tse and Curran 2010), LFG-grounded grammatical relation analysis (Sun et al. 2014) , and HPSG-grounded semantic dependency analysis (Miyao, Ninomiya, and ichi Tsujii 2004; Ivanova et al. 2012) for English and Chinese. Empirical evaluation indicates some non-obvious facts:", |
| "cite_spans": [ |
| { |
| "start": 227, |
| "end": 244, |
| "text": "(Sun et al. 2014)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 294, |
| "end": 333, |
| "text": "(Miyao, Ninomiya, and ichi Tsujii 2004;", |
| "ref_id": null |
| }, |
| { |
| "start": 334, |
| "end": 354, |
| "text": "Ivanova et al. 2012)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Data-driven models with appropriate transition systems and disambiguation techniques can produce high-quality deep dependency analysis, comparable to more complex grammar-driven models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Parsers built upon heterogeneous transition systems and decoding orders have complementary prediction strengths, and the parsing quality can be significantly improved by system combination; compared to the best individual system, system combination gets an absolute labeled F-score improvement of 1.21 on average.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "Transition combination significantly improves parsing accuracy on a wide range of conditions, resulting in an absolute labeled F-score improvement of 0.74 on average.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.", |
| "sec_num": null |
| }, |
| { |
| "text": "Pseudo trees contribute to semantic dependency parsing (SDP) equally well to syntactic trees, and result in an absolute labeled F-score improvement of 1.27 on average.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
"text": "We compare our parser with representative state-of-the-art parsers Auli and Lopez 2011b; Martins and Almeida 2014; Xu, Clark, and Zhang 2014; Du, Sun, and Wan 2015) with respect to different architectures. To evaluate the impact of grammatical knowledge, we compare our parser with parsers guided by treebank-induced HPSG and CCG grammars. Both of our individual and ensembled parsers achieve equivalent accuracy to HPSG and CCG chart parsers Auli and Lopez 2011b) , and outperform a shift-reduce CCG parser (Xu, Clark, and Zhang 2014) . It is worth noting that our parsers exclude all syntactic and grammatical information. In other words, strictly less information is used. This result demonstrates the effectiveness of data-driven approaches to the deep linguistic processing problem. Compared to other types of data-driven parsers, our individual parser achieves equivalent performance to and our hybrid parser obtains slightly better results than factorization parsers based on dual decomposition (Martins and Almeida 2014; Du, Sun, and Wan 2015) . This result highlights the effectiveness of the lightweight, transition-based approach.",
| "cite_spans": [ |
| { |
| "start": 67, |
| "end": 88, |
| "text": "Auli and Lopez 2011b;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 89, |
| "end": 114, |
| "text": "Martins and Almeida 2014;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 115, |
| "end": 141, |
| "text": "Xu, Clark, and Zhang 2014;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 142, |
| "end": 164, |
| "text": "Du, Sun, and Wan 2015)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 443, |
| "end": 464, |
| "text": "Auli and Lopez 2011b)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 508, |
| "end": 535, |
| "text": "(Xu, Clark, and Zhang 2014)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 1002, |
| "end": 1028, |
| "text": "(Martins and Almeida 2014;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1029, |
| "end": 1051, |
| "text": "Du, Sun, and Wan 2015)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
"text": "Parsers based on the two new transition systems have been utilized as base components for parser ensemble (Du et al. 2014) for SemEval 2014 Task 8 (Oepen et al. 2014 ). Our hybrid system obtained the best overall performance of the closed track of this shared task. In this article, we re-implement all models, calibrate features more carefully, and thus obtain improved accuracy. The idea to extract a tree-shaped backbone from a deep dependency graph has also been used to design other types of parsing models in our early work (Du et al. 2014 , 2015; Du, Sun, and Wan 2015) . Nevertheless, the idea to train a pseudo tree parser to serve a transition-based graph parser is new.",
"cite_spans": [
{
"start": 106,
"end": 122,
"text": "(Du et al. 2014)",
"ref_id": "BIBREF37"
},
{
"start": 147,
"end": 165,
"text": "(Oepen et al. 2014",
"ref_id": null
},
{
"start": 530,
"end": 545,
"text": "(Du et al. 2014",
"ref_id": "BIBREF37"
},
{
"start": 554,
"end": 576,
"text": "Du, Sun, and Wan 2015)",
"ref_id": "BIBREF12"
}
],
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
"text": "The implementation of our parser is available at http://www.icst.pku.edu.cn/lcwm/grass.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4.", |
| "sec_num": null |
| }, |
| { |
| "text": "A dependency graph G = (V, A) is a labeled directed graph, such that for sentence x = w 1 , . . . , w n the following holds:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Notations", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "1. V = {0, 1, 2, . . . , n},\n2. A \u2286 V \u00d7 R \u00d7 V.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Notations", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The vertex set V consists of n + 1 nodes, each of which is represented by a single integer. In particular, 0 represents a virtual root node w 0 , and all others correspond to words in x. The arc set A represents the labeled dependency relations of the particular analysis G. Specifically, an arc (i, r, j) \u2208 A represents a dependency relation r from head w i to dependent w j . A dependency graph G is thus a set of labeled dependency relations between the root and the words of x. To simplify the description in this section, we mainly consider unlabeled parsing and assume the relation set R is a singleton. Or, taking it another way, we assume A \u2286 V \u00d7 V. It is straightforward to adapt the discussions in this article for labeled parsing. To do so, we can parameterize transitions with possible dependency relations. For empirical evaluation as discussed in Section 5, we will test both labeled and unlabeled parsing models. Following Nivre (2008) , we define a transition system for dependency parsing as a quadruple", |
| "cite_spans": [ |
| { |
| "start": 938, |
| "end": 950, |
| "text": "Nivre (2008)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background Notations", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "S = (C, T, c s , C t ), where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background Notations",
"sec_num": "2.1"
},
{
"text": "1. C is a set of configurations, each of which contains a buffer \u03b2 of (remaining) words and a set A of dependency arcs,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background Notations",
"sec_num": "2.1"
| }, |
| { |
"text": "T is a set of transitions, each of which is a (partial) function t : C \u2192 C,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
| }, |
| { |
"text": "3. c s is an initialization function, mapping a sentence x to a configuration with \u03b2 = [1, . . . , n],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "4. C t \u2286 C is a set of terminal configurations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
| }, |
| { |
"text": "Given a sentence x = w 1 , . . . , w n and a graph G = (V, A) on it, if there is a sequence of transitions t 1 , . . . , t m and a sequence of configurations c 0 , . . . , c m such that c 0 = c s (x), t i (c i\u22121 ) = c i (i = 1, . . . , m), c m \u2208 C t , and A c m = A, we say the sequence of transitions is an oracle sequence. And we define \u0100 c i = A \u2212 A c i for the arcs to be built of c i . We could denote a transition sequence as either t 1,m or c 0,m .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
| }, |
| { |
| "text": "In a typical transition-based parsing process, the input words are put into a queue and partially built structures are organized by a stack. A set of SHIFT/REDUCE actions are performed sequentially to consume words from the queue and update the partial parsing results organized by the stack. Our new systems designed for deep parsing differ with respect to their information structures to define a configuration and the behaviors of transition actions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.", |
| "sec_num": null |
| }, |
| { |
| "text": "For every two nodes, a simple graph-spanning strategy is to check if they can be directly connected by an arc. Accordingly, a \"naive\" spanning algorithm can be implemented by exploring a left-to-right checking order, as introduced by Covington (2001) and modified by Nivre (2008) .", |
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 250, |
| "text": "Covington (2001)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 267, |
| "end": 279, |
| "text": "Nivre (2008)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Spanning and Locality", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "PARSE(x = (w 1 , . . . , w n ))\nfor j = 1..n\n  for k = j \u2212 1..1\n    Link(j, k)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Spanning and Locality", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "The operation Link chooses between 1) adding the arc (j, k) or (k, j) and 2) adding no arc at all. In this way, the algorithm builds a graph by incrementally trying to link every pair of words.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Naive Spanning and Locality", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "LEFT-ARC (\u03c3|i, j|\u03b2) \u21d2 (\u03c3|i, j|\u03b2)\nRIGHT-ARC (\u03c3|i, j|\u03b2) \u21d2 (\u03c3|i, j|\u03b2)\nSHIFT (\u03c3, j|\u03b2) \u21d2 (\u03c3|j, \u03b2)\nPOP (\u03c3|i, \u03b2) \u21d2 (\u03c3, \u03b2)\nSWAP (\u03c3|i|j, \u03b2) \u21d2 (\u03c3|j, i|\u03b2)\nSWAP T (\u03c3|i|j, \u03b2) \u21d2 (\u03c3|j|i, \u03b2)\nFigure 2",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LEFT-ARC", |
| "sec_num": null |
| }, |
| { |
| "text": "Transitions of the online re-ordering approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LEFT-ARC", |
| "sec_num": null |
| }, |
| { |
"text": "The complexity of naive spanning is \u0398(n^2), because it does nothing to explore the topological properties of a linguistic structure. In other words, the naive graph-spanning idea does not fully take advantage of the greedy search of the transition-based parsing architecture. On the contrary, a well-designed transition system for (projective) tree parsing can decode in linear time by exploiting locality among subtrees. Take the arc-eager system presented in Nivre (2008) , for example: Only the nodes at the top of the stack and the buffer are allowed to be linked. Such limitation is the key to implementing a linear-time decoder. In the following, we introduce two ideas to localize a transition action, that is, to allow a transition to manipulate only the frontier items in the data structures of a configuration. By this means, we can decrease the number of possible transitions for each configuration and thus minimize the total decoding time.",
"cite_spans": [
{
"start": 461,
"end": 473,
"text": "Nivre (2008)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LEFT-ARC",
"sec_num": null
| }, |
| { |
"text": "The online re-ordering approach that we explore is to provide the system with the ability to re-order the nodes during parsing in an online fashion. The key idea, as introduced in Titov et al. (2009) and Nivre (2009) , is to allow a SWAP transition that switches the position of the two topmost nodes on the stack. By changing the linear order of words, the system is able to build crossing arcs for graph spanning. We refer to this approach as online re-ordering. We introduce a stack-based transition system with online re-ordering for deep dependency parsing. The obtained oracle parser is complete with respect to the class of all directed graphs without self-loop.",
"cite_spans": [
{
"start": 180,
"end": 199,
"text": "Titov et al. (2009)",
"ref_id": "BIBREF40"
},
{
"start": 204,
"end": 216,
"text": "Nivre (2009)",
"ref_id": "BIBREF29"
}
],
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System 1: Online Re-ordering", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "The System. We define a transition system S S = (C, T, c s , C t ), where a configuration c = (\u03c3, \u03b2, A) \u2208 C contains a stack \u03c3 of nodes, besides \u03b2 and A. We set the initial configuration for a sentence x = w 1 , . . . , w n to be c s (x) = ([], [1, . . ., n], {}), and take C t to be the set of all configurations of the form c t = (\u03c3, [], A) (for any \u03c3 and any A). These transitions are shown in Figure 2 and explained as follows.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 405,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "2.3.1",
"sec_num": null
| }, |
| { |
"text": "SHIFT (sh) removes the front from the buffer and pushes it onto the stack.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.3.1", |
| "sec_num": null |
| }, |
| { |
"text": "LEFT/RIGHT-ARC (la/ra) updates a configuration by adding (j, i)/(i, j) to A where i is the top of the stack, and j is the front of the buffer.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.3.1", |
| "sec_num": null |
| }, |
| { |
"text": "POP (pop) updates a configuration by popping the top of the stack.\nSWAP (sw) updates a configuration with stack \u03c3|i|j by moving i back to the buffer.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.3.1", |
| "sec_num": null |
| }, |
| { |
"text": "A variation of transition SWAP is SWAP T , which updates the configuration by swapping i and j. However, the system of this variation is not complete with respect to directed graphs because the power of transition SWAP T is limited, and counterexamples to completeness can be found. For more theoretical discussion about this system (i.e., THMM), see Titov et al. (2009) . We also denote Titov et al. (2009) 's system as S T .",
| "cite_spans": [ |
| { |
| "start": 351, |
| "end": 370, |
| "text": "Titov et al. (2009)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 388, |
| "end": 407, |
| "text": "Titov et al. (2009)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.3.1", |
| "sec_num": null |
| }, |
| { |
"text": "The soundness of S S is trivial. To demonstrate the completeness of the system, we give a constructive proof that can derive oracle transitions for any arbitrary graph. To simplify the description, the labels attached to transitions are not considered. The idea is inspired by Titov et al. (2009) . Given a sentence x = w 1 , . . . , w n and a graph G = (V, A) on it, we start with the initial configuration c 0 = c s (x) and compute the oracle transitions step by step. On the i-th step, let p be the top of \u03c3 c i\u22121 , b be the front of \u03b2 c i\u22121 ; let L(j) be the ordered list of nodes connected to j in \u0100 c i\u22121 for any node j \u2208 \u03c3 c i\u22121 ; let L(\u03c3 c i\u22121 ) = [L(j 0 ), . . . , L(j l )] if \u03c3 c i\u22121 = [j l , . . . , j 0 ].",
"cite_spans": [
{
"start": 277,
"end": 296,
"text": "Titov et al. (2009)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Theoretical Analysis.",
"sec_num": "2.3.2"
| }, |
| { |
| "text": "The oracle transition for each configuration is derived as follows. If there is no arc linked to p in \u0100 c i\u22121 , then we set t i to pop; if there exists a \u2208 \u0100 c i\u22121 linking p and b, then we set t i to la or ra accordingly. When only sh and sw are left, we check whether there is any node q under the top of \u03c3 c i\u22121 such that L(q) precedes L(p) in the lexicographical order. If so, we set t i to sw; otherwise we set t i to sh. An example of when to do sw is shown in Figure 3 . Let c i = t i (c i\u22121 ); we continue to compute t i+1 until \u03b2 c i is empty.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 463, |
| "end": 471, |
| "text": "Figure 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Theoretical Analysis.", |
| "sec_num": "2.3.2" |
| }, |
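The oracle decision rule above (pop when the top has no pending arc, la/ra when the top and the buffer front are linked, sw when a deeper node's neighbor list lexicographically precedes the top's, sh otherwise) can be sketched in code. This is an illustrative sketch, not the authors' implementation: the function names, the `(head, dependent)` arc encoding, and the guard excluding arc-less deeper nodes are assumptions.

```python
# Sketch (assumed names, not the authors' code) of the oracle decision
# for the one-stack system with SWAP; `arcs` holds the remaining gold
# arcs as (head, dependent) pairs.
def neighbors(node, arcs):
    """L(node): the ordered list of nodes still connected to `node`."""
    linked = {d for (h, d) in arcs if h == node} | {h for (h, d) in arcs if d == node}
    return sorted(linked)

def oracle_step(stack, buffer, arcs):
    """Choose the next transition for configuration (stack, buffer, arcs)."""
    if not stack:
        return "sh"
    p = stack[-1]
    if not neighbors(p, arcs):
        return "pop"                      # p has no pending arc
    b = buffer[0]
    if (b, p) in arcs:
        return "la"                       # arc from buffer front to p
    if (p, b) in arcs:
        return "ra"                       # arc from p to buffer front
    # swap when a deeper node's pending-neighbor list lexicographically
    # precedes that of the top (it must be uncovered first)
    if any(neighbors(q, arcs) and neighbors(q, arcs) < neighbors(p, arcs)
           for q in stack[:-1]):
        return "sw"
    return "sh"
```

For the crossing arcs {(1, 3), (2, 4)} with stack [1, 2] and buffer [3, 4], the deeper node 1 must reach 3 first, so the sketch chooses sw, matching the Figure 3 scenario.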
| { |
| "text": "If t i is sh, L(\u03c3 c i\u22121 ) = [L(j 0 ), . . . , L(j l )]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "is completely ordered by the lexicographical order.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 1", |
| "sec_num": null |
| }, |
| { |
| "text": "It cannot be the case that for some u > 0, L(j u ) strictly precedes L(j 0 ), for otherwise t i would be sw. It also cannot be the case that for some", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proof", |
| "sec_num": null |
| }, |
| { |
| "text": "u > v > 0, L(j u ) strictly precedes L(j v ), because when j v\u22121 is shifted onto the stack, L(j v ) precedes L(j u )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proof", |
| "sec_num": null |
| }, |
| { |
| "text": "and no subsequent transition changes L(j v ) or L(j u ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proof", |
| "sec_num": null |
| }, |
| { |
| "text": "For i = 0, . . . , m, there is no arc (j, k) \u2208 \u0100 c i such that j, k \u2208 \u03c3 c i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemma 2", |
| "sec_num": null |
| }, |
| { |
| "text": "When j \u2208 \u03c3 c i is shifted onto the stack by the w-th transition t w , there must be no arc (j, k) or (k, j) in \u0100 c w such that k \u2208 \u03c3 c w . Otherwise, by induction every node in \u03c3 c w\u22121 can only link to nodes in \u03b2 c w\u22121 , which implies that L(k) has one of the smallest lexicographical orders; by Lemma 1, the top of \u03c3 c w\u22121 must then be linked to j, so la or ra, not sh, should have been applied. Theorem 1 t 1 , . . . , t m is an oracle sequence of transitions for G.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proof", |
| "sec_num": null |
| }, |
| { |
| "text": "From Lemma 2, we can infer that \u0100 c m = \u2205, so it suffices to show that the sequence of transitions is always finite. We define a swap sequence to be a maximal subsequence t i , . . . , t j consisting only of sw (i.e., t i\u22121 and t j+1 are not sw), and a shift sequence similarly. It can be seen that a swap sequence is always followed by a shift sequence whose length is no less than that of the swap sequence, and that if the two sequences have the same length, the next transition cannot be sw. Let #(t) be the number of transitions of type t in the sequence; then #(la), #(ra), #(pop), and #(sh) \u2212 #(sw) are all finite. Therefore the number of swap sequences is finite, which implies that the transition sequence is finite.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proof", |
| "sec_num": null |
| }, |
| { |
| "text": "A majority of transition systems organize partial parsing results with a stack. Classical parsers, including arc-standard and arc-eager ones, add dependency arcs only between nodes that are adjacent on the stack or the buffer. A natural idea for producing crossing arcs is to temporarily move nodes that block non-adjacent nodes to an extra memory module, as in the two-stack-based system for two-planar graphs (G\u00f3mez-Rodr\u00edguez and Nivre 2010) and the list-based system (Nivre 2008) . In this article, we design a new transition system that handles crossing arcs by using two stacks. This system is also complete with respect to the class of directed graphs without self-loops.", |
| "cite_spans": [ |
| { |
| "start": 466, |
| "end": 478, |
| "text": "(Nivre 2008)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System 2: Two-Stack-Based System", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "We define the two-stack-based transition system", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The System.", |
| "sec_num": "2.4.1" |
| }, |
| { |
| "text": "S 2S = (C, T, c s , C t ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The System.", |
| "sec_num": "2.4.1" |
| }, |
| { |
| "text": "where a configuration c = (\u03c3, \u03c3\u2032, \u03b2, A) \u2208 C contains a primary stack \u03c3 and a secondary stack \u03c3\u2032. We set c s (x) = ([], [], [1, . . . , n], {}) for the sentence x = w 1 , . . . , w n , and we take the set C t to be the set of all configurations with empty buffers. The transition set T contains six types of transitions, as shown in Figure 4 . We only explain MEM and RECALL:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 305, |
| "end": 313, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The System.", |
| "sec_num": "2.4.1" |
| }, |
| { |
| "text": "MEM (mem) pops the top element from the primary stack and pushes it onto the secondary stack.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The System.", |
| "sec_num": "2.4.1" |
| }, |
| { |
| "text": "RECALL (rc) moves the top element of the secondary stack back to the primary stack.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The System.", |
| "sec_num": "2.4.1" |
| }, |
| { |
| "text": "(\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3|i, \u03c3\u2032, j|\u03b2) RIGHT-ARC (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3|i, \u03c3\u2032, j|\u03b2) SHIFT (\u03c3, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3|j, \u03c3\u2032, \u03b2) POP (\u03c3|i, \u03c3\u2032, \u03b2) \u21d2 (\u03c3, \u03c3\u2032, \u03b2) MEM (\u03c3|i, \u03c3\u2032, \u03b2) \u21d2 (\u03c3, \u03c3\u2032|i, \u03b2) RECALL (\u03c3, \u03c3\u2032|i, \u03b2) \u21d2 (\u03c3|i, \u03c3\u2032, \u03b2) Figure 4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LEFT-ARC", |
| "sec_num": null |
| }, |
| { |
| "text": "Transitions of the two-stack-based system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "LEFT-ARC", |
| "sec_num": null |
| }, |
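The six transition types of the two-stack system can be sketched as operations on a small configuration record. This is a minimal illustration under assumed names (`Config`, `sigma2`, the `(head, label, dependent)` arc encoding), not the paper's implementation; only MEM and RECALL touch the secondary stack.

```python
# Minimal sketch of the two-stack configuration and the six transitions
# shown in Figure 4; class and function names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Config:
    sigma: list                    # primary stack
    sigma2: list                   # secondary stack
    beta: list                     # buffer
    arcs: set = field(default_factory=set)

def shift(c):   c.sigma.append(c.beta.pop(0))
def pop(c):     c.sigma.pop()
def mem(c):     c.sigma2.append(c.sigma.pop())    # primary -> secondary
def recall(c):  c.sigma.append(c.sigma2.pop())    # secondary -> primary
def left_arc(c, l=None):  c.arcs.add((c.beta[0], l, c.sigma[-1]))
def right_arc(c, l=None): c.arcs.add((c.sigma[-1], l, c.beta[0]))
```

Starting from `Config([], [], [1, 2, 3])`, the sequence shift, shift, mem leaves node 2 on the secondary stack, uncovering node 1 for an arc with the buffer front.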
| { |
| "text": "The soundness of this system is trivial, and the completeness is also straightforward after we give the construction of an oracle transition sequence for an arbitrary graph. The oracle is computed as follows on the i-th step: We do la, ra, and pop transitions just like in Section 2.3.2. After that, let b be the front of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theoretical Analysis.", |
| "sec_num": "2.4.2" |
| }, |
| { |
| "text": "\u03b2 c i\u22121 , we see if there is j \u2208 \u03c3 c i\u22121 or j \u2208 \u03c3\u2032 c i\u22121 linked to b by an arc in \u0100 c i\u22121 . If j \u2208 \u03c3 c i\u22121", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theoretical Analysis.", |
| "sec_num": "2.4.2" |
| }, |
| { |
| "text": ", then we do a sequence of mem to make j the top of \u03c3 c i\u22121 ; if j \u2208 \u03c3\u2032 c i\u22121 , then we do a sequence of rc to make j the top of \u03c3 c i\u22121 . When no node in \u03c3 c i\u22121 or \u03c3\u2032 c i\u22121 is linked to b, we do sh.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theoretical Analysis.", |
| "sec_num": "2.4.2" |
| }, |
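One step of the two-stack oracle just described can be sketched as follows. The helper names and the `(head, dependent)` arc encoding are assumptions; the sketch returns a single decision (the "sequence of mem/rc" arises by calling it repeatedly).

```python
# Sketch of one decision of the Section 2.4.2 oracle for the two-stack
# system; `arcs` holds the remaining gold arcs as (head, dependent) pairs.
def linked(a, b, arcs):
    return (a, b) in arcs or (b, a) in arcs

def degree(n, arcs):
    return sum(1 for (h, d) in arcs if h == n or d == n)

def oracle_step_2s(sigma, sigma2, beta, arcs):
    """One oracle decision given the remaining gold arcs."""
    b = beta[0]
    if sigma:
        p = sigma[-1]
        if degree(p, arcs) == 0:
            return "pop"
        if linked(p, b, arcs):
            return "la" if (b, p) in arcs else "ra"
        if any(linked(j, b, arcs) for j in sigma[:-1]):
            return "mem"   # uncover the linked node deeper in sigma
    if any(linked(j, b, arcs) for j in sigma2):
        return "rc"        # bring the linked node back from sigma2
    return "sh"
```

With gold arcs {(1, 3), (2, 4)}, stack [1, 2], and buffer [3, 4], node 2 blocks node 1, so the sketch memorizes 2 on the secondary stack before attaching 1 to 3.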
| { |
| "text": "Theorem 2 S 2S is complete with respect to directed graphs without self-loops.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Theoretical Analysis.", |
| "sec_num": "2.4.2" |
| }, |
| { |
| "text": "The completeness follows immediately from the fact that the computed oracle sequence is finite and that, every time a node is shifted onto \u03c3 c i , no arc in \u0100 c i links nodes in \u03c3 c i to the shifted node.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proof", |
| "sec_num": null |
| }, |
| { |
| "text": "G\u00f3mez-Rodr\u00edguez and Nivre (2010, 2013) introduced a two-stack-based transition system for tree parsing. Their study is motivated by the observation that the majority of dependency trees in various treebanks are actually planar or two-planar graphs. Accordingly, their algorithm is specially designed to handle projective trees and two-planar trees, but not all graphs. Because many more crossing arcs exist in deep dependency structures and more sentences are assigned graphs that are neither planar nor two-planar, their strategy of utilizing two stacks is not suitable for the deep dependency parsing problem. Different from their system, our new system maximizes the utility of the two memory modules and is able to handle any directed graph. The list-based systems, such as the basic one introduced by Nivre (2008) and the extended one introduced by Choi and Palmer (2011), also use two memory modules. The secondary memory module functions very differently in their systems and in ours: in our design, only nodes involved in a subgraph that contains crossing arcs may be put into the second stack. In the existing list-based systems, both lists are heavily used, and nodes may be transferred between them many times; the two lists together simulate one memory module that allows accessing any unit in it.", |
| "cite_spans": [ |
| { |
| "start": 797, |
| "end": 809, |
| "text": "Nivre (2008)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Systems.", |
| "sec_num": "2.4.3" |
| }, |
| { |
| "text": "It is easy to extend our system to generate arbitrary directed graphs by adding a new transition: SELF-ARC adds an arc from the top element of the primary memory module (\u03c3) to itself, but does not modify any stack or the buffer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extension 2.5.1 Graphs with Loops.", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "Theorem 3 S S and S 2S augmented with SELF-ARC are complete with respect to directed graphs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extension 2.5.1 Graphs with Loops.", |
| "sec_num": "2.5" |
| }, |
| { |
| "text": "It is also straightforward to adapt the two transition systems to labeled dependency graph generation. To do so, we can parameterize the LEFT-ARC and RIGHT-ARC transitions with dependency relations. For example, a parameterized transition LEFT-ARC r tells the system not only that there is an arc between the top node of the stack and the front node of the buffer but also that this arc holds a relation r. Some linguistic representations assign labels to nodes as well. When a deep grammar is considered to license the representation, node labels are usually called \"supertags.\" To assign supertags to words, namely, nodes in a dependency graph, we can parameterize the SHIFT transition with tag labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Labeled Parsing and Supertagging.", |
| "sec_num": "2.5.2" |
| }, |
| { |
| "text": "A transition-based parser must decide which transition is appropriate given its parsing environment (i.e., configuration). As with many other data-driven dependency parsers, we use a global linear model for disambiguation. In other words, a discriminative classifier is utilized to approximate the oracle function of a transition system S, which maps a configuration c to a transition t that is defined on c. More formally, a transition-based statistical parser tries to find the configuration sequence c 0,m that maximizes the following score", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Classification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "SCORE(c 0,m ) = m\u22121 i=0 SCORE(c i , t i+1 )", |
| "eq_num": "( 1 )" |
| } |
| ], |
| "section": "Transition Classification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Following the state-of-the-art discriminative disambiguation technique for data-driven parsing, we define the score function as a linear combination of features defined over a configuration and a transition, as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Classification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "SCORE(c i , t i+1 ) = \u03b8 \u03c6(c i , t i+1 )", |
| "eq_num": "( 2 )" |
| } |
| ], |
| "section": "Transition Classification", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where \u03c6 defines a feature vector for each configuration-transition pair and \u03b8 is the weight vector for the linear combination. Exact calculation of the maximization is extremely hard without any assumption about \u03c6, and even with a proper \u03c6 for real-world parsing, exact decoding is still impractical for most practical feature designs. In this article, we follow the recent success of using beam search for approximate decoding. During parsing, the parser keeps track of a fixed number of partial outputs to avoid making decisions too early. Training a parser in the discriminative setting corresponds to estimating \u03b8 associated with rich features. Previous research on dependency parsing shows that the structured perceptron (Collins 2002; Collins and Roark 2004) is one of the strongest learning algorithms. In all experiments, we use the averaged perceptron algorithm with early update to estimate parameters. The whole parser is very similar to the transition-based system introduced in Clark (2008, 2011b) .", |
| "cite_spans": [ |
| { |
| "start": 715, |
| "end": 729, |
| "text": "(Collins 2002;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 730, |
| "end": 753, |
| "text": "Collins and Roark 2004)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 980, |
| "end": 999, |
| "text": "Clark (2008, 2011b)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Classification", |
| "sec_num": "3.1" |
| }, |
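The scoring of Equations (1) and (2) with beam-search decoding can be sketched as follows. The interfaces (`legal`, `apply_t`, `phi`) and the sparse binary-feature dot product are assumptions for illustration, not the paper's feature templates or training code.

```python
# Sketch of the global linear score SCORE(c_0,m) = sum theta . phi(c_i, t_{i+1})
# with beam search; interfaces are assumptions.
import heapq

def score(theta, features):
    """theta . phi as a sparse dot product over binary features."""
    return sum(theta.get(f, 0.0) for f in features)

def beam_search(init, legal, apply_t, phi, theta, beam_size=8):
    """Keep the beam_size highest-scoring partial transition sequences."""
    beam = [(0.0, init, [])]
    while any(not c.is_terminal() for _, c, _ in beam):
        cand = []
        for s, c, hist in beam:
            if c.is_terminal():
                cand.append((s, c, hist))
                continue
            for t in legal(c):
                cand.append((s + score(theta, phi(c, t)),
                             apply_t(c, t), hist + [t]))
        beam = heapq.nlargest(beam_size, cand, key=lambda x: x[0])
    return max(beam, key=lambda x: x[0])
```

Because the beam holds several partial outputs at once, an early local mistake can still be recovered later, which is the point of delaying decisions during parsing.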
| { |
| "text": "In THMM, S S , and S 2S , a LEFT/RIGHT-ARC transition modifies neither the stack nor the buffer; it only adds new edges to the target graph. When automatic classifiers are utilized to approximate an oracle, a majority of the features for predicting an ARC transition overlap with the features for the successive transition. Empirically, this property significantly decreases parsing accuracy. A key observation about linguistically motivated bilexical graphs is that there is usually at most one edge between any two words; therefore, an ARC transition is not followed by another ARC. As a result, any ARC together with its successive transition changes a configuration substantially. To practically", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "LEFT-ARC (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3|i, \u03c3\u2032, j|\u03b2) RIGHT-ARC (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3|i, \u03c3\u2032, j|\u03b2) SHIFT (\u03c3, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3|j, \u03c3\u2032, \u03b2) POP (\u03c3|i, \u03c3\u2032, \u03b2) \u21d2 (\u03c3, \u03c3\u2032, \u03b2) MEM (\u03c3|i, \u03c3\u2032, \u03b2) \u21d2 (\u03c3, \u03c3\u2032|i, \u03b2) RECALL (\u03c3, \u03c3\u2032|i, \u03b2) \u21d2 (\u03c3|i, \u03c3\u2032, \u03b2) LEFT-ARC-SHIFT (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3|i|j, \u03c3\u2032, \u03b2) LEFT-ARC-POP (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3, \u03c3\u2032, j|\u03b2) LEFT-ARC-MEM (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3, \u03c3\u2032|i, j|\u03b2) LEFT-ARC-RECALL (\u03c3|i\u2032, \u03c3\u2032|i, j|\u03b2) \u21d2 (\u03c3|i\u2032|i, \u03c3\u2032, j|\u03b2) RIGHT-ARC-SHIFT (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3|i|j, \u03c3\u2032, \u03b2) RIGHT-ARC-POP (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3, \u03c3\u2032, j|\u03b2) RIGHT-ARC-MEM (\u03c3|i, \u03c3\u2032, j|\u03b2) \u21d2 (\u03c3, \u03c3\u2032|i, j|\u03b2) RIGHT-ARC-RECALL (\u03c3|i\u2032, \u03c3\u2032|i, j|\u03b2) \u21d2 (\u03c3|i\u2032|i, \u03c3\u2032, j|\u03b2) Figure 5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Original and combined transitions of the combined two-stack-based system. Two-cycles are not considered here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "improve the performance of a statistical parser, we combine every pair of successive transitions starting with an ARC, transforming the two proposed transition systems into two modified ones. For example, after combining, our two-stack-based system has the transitions presented in Figure 5 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 293, |
| "end": 301, |
| "text": "Figure 5", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
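The combination step itself is a simple rewrite of an oracle transition sequence: each ARC is fused with the non-ARC transition that follows it. The sketch below illustrates this under assumed transition names (`la`, `ra`, hyphenated combined names); it is not the authors' code.

```python
# Sketch of transition combination: fuse every ARC transition with the
# transition that immediately follows it.
def combine(seq):
    out, i = [], 0
    while i < len(seq):
        t = seq[i]
        if t in ("la", "ra") and i + 1 < len(seq):
            out.append(t + "-" + seq[i + 1])   # e.g. "la-pop"
            i += 2
        else:
            out.append(t)
            i += 1
    return out
```

After combination every action visibly changes the stack or buffer, so consecutive classifier decisions see more distinct feature configurations.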
| { |
| "text": "The number of edges between any two words can be at most two in real data. If there are two edges between words w a and w b , they must be w a \u2192 w b and w b \u2192 w a . We call these two edges a two-cycle, and call this problem the two-cycle problem. In our combined transitions, a LEFT/RIGHT-ARC transition always appears before a non-ARC transition. In order to generate two edges between two words, we have two strategies: A) Add a new type of transition to each system, consisting of a LEFT-ARC transition, a RIGHT-ARC transition, and any other non-ARC transition (e.g., LEFT-ARC-RIGHT-ARC-RECALL for S 2S ). B) Use a non-directional ARC transition instead of LEFT/RIGHT-ARC; here, an ARC transition may add one or two edges, depending on its label. In detail, we propose two algorithms, namely, ENCODELABEL and DECODELABEL (see Algorithms 1 and 2), to deal with labels for ARC transitions. In our experiments, strategy B performs better. First, let us consider accuracy. Generally speaking, transition classification is harder when more target transitions are defined. Using strategy A, we must add additional transitions to handle the two-cycle condition, and based on our experiments, performance decreases when more transitions are used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
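Strategy B's label coding can be sketched as a pair of inverse functions. This paraphrases the ENCODELABEL/DECODELABEL idea named above (the "both" + lLabel + "|" + rLabel string convention appears in the original algorithm); the exact prefixes and function names here are assumptions.

```python
# Sketch of strategy B: a single non-directional ARC whose label encodes
# one direction or a two-cycle; string conventions are assumptions.
def encode_label(left_label=None, right_label=None):
    if left_label is not None and right_label is not None:
        return "both" + left_label + "|" + right_label   # two-cycle
    if left_label is not None:
        return "left" + left_label
    return "right" + right_label

def decode_label(code):
    """Return (left_label, right_label); None marks an absent direction."""
    if code.startswith("both"):
        l, r = code[len("both"):].split("|")
        return (l, r)
    if code.startswith("left"):
        return (code[len("left"):], None)
    return (None, code[len("right"):])
```

Only codes observed in the training data need to be enumerated, which is what keeps the label set at k combinations rather than all K 2 possibilities.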
| { |
| "text": "Considering efficiency, under strategy B we can save time by using only labels that appear in the training data. If there are a total of K possible labels in the training data, they generate K 2 possible two-cycle types, but only k combinations of two-cycle actually appear in the training data (k \u226a K 2 ). In strategy A, we must add K 2 transitions to cover all possible two-cycle types, although most of them never occur. Using fewer two-cycle types eliminates invalid computation and saves time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Using strategy B, we re-label the original edges and use an ARC(label)-non-ARC transition instead of LEFT/RIGHT-ARC(label)-non-ARC. An ARC(label)-non-ARC transition first executes the ARC(label) transition and then executes the non-ARC transition; ARC(label) generates one or two edges, depending on its label. We encode not only two-cycle labels but also LEFT/RIGHT-ARC labels. In practice, we only use labels that appear in the training data; because labels that do not appear contribute no useful weight during training, we can eliminate them without any performance loss. For each transition system and each dependency graph, we generate an oracle transition sequence and train our model according to this oracle. The constructive proofs presented in Sections 2.3 and 2.4 define two kinds of oracles. However, they are not directly applicable when the transition combination strategy is utilized. The main challenge is the existence of cycles. In this article, we propose three algorithms to derive oracles for THMM, S S , and S 2S , respectively. Algorithms 3 to 5 illustrate the key step of the procedure, which finds the next transition t given a configuration c and a gold graph G gold = (V x , A gold ), for the three systems. When this key procedure, namely, the EXTRACTONEORACLE method, is well defined, the entire oracle sequence can be derived as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EXTRACTORACLE(c 0 , A gold ): 1 oracle = \u2205 2 while t \u2190 EXTRACTONEORACLE(c 0 , A gold , nil) do 3 oracle.push back(t) 4 c 0 \u2190 t(c 0 ) 5 end while", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
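The EXTRACTORACLE driver above is a plain loop: ask for the next transition, record it, apply it, repeat until no transition is returned. A runnable sketch under assumed interfaces (the one-step oracle and the transition-application function are passed in as callables):

```python
# Sketch of the EXTRACTORACLE driver; `extract_one` plays the role of
# EXTRACTONEORACLE(c, A_gold, label) and `apply_t` applies a transition.
def extract_oracle(c0, a_gold, extract_one, apply_t):
    oracle = []
    t = extract_one(c0, a_gold, None)       # LABEL initialized as nil
    while t is not None:
        oracle.append(t)
        c0 = apply_t(c0, t)
        t = extract_one(c0, a_gold, None)
    return oracle
```

The label argument starts as nil on every call; as noted below, the real EXTRACTONEORACLE fills it in recursively when an arc transition is predicted.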
| { |
| "text": "[Fragments of Algorithm 3 (EXTRACTONEORACLE for THMM), garbled by extraction: if the top of the stack i has no remaining gold arc to any buffer node, the oracle chooses a pop-type transition; if a gold arc (i, l, j) to the buffer front exists, it is removed from A gold and EXTRACTONEORACLE is called recursively to obtain the combined ARC(label)-non-ARC transition.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We want to emphasize that, although the EXTRACTORACLE method initializes the parameter LABEL in EXTRACTONEORACLE as nil, if an arc transition is predicted in the EXTRACTONEORACLE method, it will call EXTRACTONEORACLE recursively to return an ARC(label)-non-ARC transition and assign a value to that LABEL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "[Garbled fragment of Algorithm 3: a condition comparing the positions k 0 and k 1 of the nearest remaining gold arcs of the top two stack nodes i 0 and i 1 , used to decide between sh and sw.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition Combination", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Developing features has been shown to be crucial to advancing the state-of-the-art in dependency parsing (Koo and Collins 2010; Zhang and Nivre 2011) . To build accurate deep dependency parsers, we utilize a large set of features for transition classification.", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 127, |
| "text": "(Koo and Collins 2010;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 128, |
| "end": 149, |
| "text": "Zhang and Nivre 2011)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Design", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To conveniently define all features, we use the following notation. In a configuration with stack \u03c3 and buffer \u03b2, we denote the top two nodes in \u03c3 by \u03c3 0 and \u03c3 1 , and the front of \u03b2 by \u03b2 0 . In a configuration of the two-stack-based system with the second stack \u03c3 , the top element of \u03c3 is denoted by \u03c3 0 and the front of \u03b2 by \u03b2 0 . The left-most dependent of node n is denoted by n.lc, the right-most one by n.rc. The left-most parent of node n is denoted by n.lp, the right-most one by n.rp. Then we denote the word and POS-tag ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Design", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "of node n by w n and p n , respectively. Our parser derives so-called path features from dependency trees. The path features collect the POS tags, or the first letters of the POS tags, along the tree path between two nodes. Given two nodes n 1 and n 2 , we denote the path feature as path(n 1 , n 2 ) and the coarse-grained path feature as cpath(n 1 , n 2 ). The syntactic head of a node n is denoted as n.h.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Design", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We use the same feature templates for the online re-ordering and the two-stack-based systems, and they are slightly different from those for THMM. Figure 6 defines the basic feature template functions. All feature templates are listed below.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 136, |
| "end": 144, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Feature Design", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "r THMM system: f uni (\u03c3 0 ), f uni (\u03c3 1 ), g uni (\u03b2 0 ), f context (\u03c3 0 ), f context (\u03b2 0 ), f pair\u2212l (\u03c3 0 , \u03b2 0 ), f pair\u2212l (\u03c3 1 , \u03b2 0 ), f pair (\u03c3 0 , \u03c3 1 ), f tri (\u03c3 0 , \u03b2 0 , \u03c3 1 ), f tri\u2212l (\u03c3 0 , \u03b2 0 , \u03c3 0 .lp), f tri\u2212l (\u03c3 0 , \u03b2 0 , \u03c3 0 .rp), f tri\u2212l (\u03c3 0 , \u03b2 0 , \u03c3 0 .lc), f tri\u2212l (\u03c3 0 , \u03b2 0 , \u03c3 0 .lc), f tri\u2212l (\u03c3 0 , \u03b2 0 , \u03b2 0 .lp), f tri\u2212l (\u03c3 0 , \u03b2 0 , \u03b2 0 .lc), f tri\u2212l (\u03c3 1 , \u03b2 0 , \u03c3 1 .lp), f tri\u2212l (\u03c3 1 , \u03b2 0 , \u03c3 1 .rp), f tri\u2212l (\u03c3 1 , \u03b2 0 , \u03c3 1 .lc), f tri\u2212l (\u03c3 1 , \u03b2 0 , \u03c3 1 .lc), f tri\u2212l (\u03c3 1 , \u03b2 0 , \u03b2 0 .lp), f tri\u2212l (\u03c3 1 , \u03b2 0 , \u03b2 0 .lc), f quar\u2212l (\u03c3 0 , \u03b2 0 , \u03c3 0 .rp, \u03c3 0 .rc), f quar\u2212l (\u03c3 0 , \u03b2 0 , \u03c3 0 .lc, \u03c3 0 .lc2), f quar\u2212l (\u03c3 0 , \u03b2 0 , \u03c3 0 .rc, \u03c3 0 .rc2), f quar\u2212l (\u03c3 0 , \u03b2 0 , \u03b2 0 .lp, \u03b2 0 .lc), f quar\u2212l (\u03c3 0 , \u03b2 0 , \u03b2 0 .lc, \u03b2 0 .lc2), f quar\u2212l (\u03c3 1 , \u03b2 0 , \u03c3 1 .rp, \u03c3 1 .rc), f quar\u2212l (\u03c3 1 , \u03b2 0 , \u03c3 1 .lc, \u03c3 1 .lc2), f quar\u2212l (\u03c3 1 , \u03b2 0 , \u03c3 1 .rc, \u03c3 1 .rc2),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Design", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "[Figure 6 residue: feature template listing for the online re-ordering/two-stack system. Unigram (f_uni, g_uni), context (f_context), pair (f_pair, f_pair-l), triple (f_tri, f_tri-l), quadruple (f_quar-l), path (f_path), and character (f_char) templates are instantiated over the top stack items \u03c3_0 and \u03c3_1 and the buffer front \u03b2_0, conjoining word forms (w), POS tags (p), dependency labels (l), left/right parents (lp, rp), left/right children (lc, rc), child label sets (set), token distance (d), and tree path attributes (path, cpath, tp).]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Design", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Feature template functions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 6", |
| "sec_num": null |
| }, |
| { |
| "text": "[Continuation of the Figure 6 template listing: f_quar-l instantiations over (\u03c3_0, \u03b2_0) and (\u03c3_1, \u03b2_0) with their parents and children, plus f_path(\u03c3_0, \u03b2_0), f_path(\u03c3_1, \u03b2_0), f_char(\u03c3_0), and f_char(\u03b2_0).]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 6", |
| "sec_num": null |
| }, |
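Templates such as f_pair(X, Y) expand into conjunctions of word and POS attributes of two configuration items. A minimal sketch of how one such template instantiates into feature strings for a linear model; the Token record, the template name, and the string layout are illustrative assumptions, not the paper's implementation:

```python
# Illustrative instantiation of the f_pair(X, Y) feature template.
from dataclasses import dataclass

@dataclass
class Token:
    w: str  # word form
    p: str  # POS tag

def f_pair(x: Token, y: Token) -> list[str]:
    """All word/POS conjunctions of two items, as in the f_pair(X, Y) template."""
    return [
        f"pair:wp.wp={x.w}/{x.p}|{y.w}/{y.p}",
        f"pair:wp.w={x.w}/{x.p}|{y.w}",
        f"pair:wp.p={x.w}/{x.p}|{y.p}",
        f"pair:w.wp={x.w}|{y.w}/{y.p}",
        f"pair:p.wp={x.p}|{y.w}/{y.p}",
        f"pair:w.w={x.w}|{y.w}",
        f"pair:w.p={x.w}|{y.p}",
        f"pair:p.w={x.p}|{y.w}",
        f"pair:p.p={x.p}|{y.p}",
    ]

# Example: features for a stack item and a buffer item.
feats = f_pair(Token("apply", "VB"), Token("crops", "NNS"))
```

Each instantiated string would be hashed or looked up as one dimension of the sparse feature vector scored by the averaged perceptron.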
| { |
| "text": "Tree structures exhibit many computationally desirable properties, and parsing techniques for tree-structured representations are relatively mature. When we consider semantics-oriented graphs, such as the representations for semantic role labeling (SRL; Surdeanu et al. 2008; Haji\u010d et al. 2009), CCG-grounded functor-argument analysis (Clark, Hockenmaier, and Steedman 2002), HPSG-grounded predicate-argument analysis (Miyao, Ninomiya, and Tsujii 2004), and the reduction of MRS (Ivanova et al. 2012), syntactic trees can provide very useful features for semantic disambiguation (Punyakanok, Roth, and Yih 2008). Our parser also utilizes a path feature template (as defined in Section 3.3) to incorporate syntactic information for disambiguation.", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 280, |
| "text": "Surdeanu et al. 2008;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 281, |
| "end": 299, |
| "text": "Haji\u010d et al. 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 332, |
| "end": 371, |
| "text": "(Clark, Hockenmaier, and Steedman 2002)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 424, |
| "end": 463, |
| "text": "(Miyao, Ninomiya, and ichi Tsujii 2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 487, |
| "end": 508, |
| "text": "(Ivanova et al. 2012)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 588, |
| "end": 620, |
| "text": "(Punyakanok, Roth, and Yih 2008)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Approximation", |
| "sec_num": "4." |
| }, |
| { |
| "text": "When syntactic tree information is not available, we introduce a tree approximation technique to induce tree backbones from deep dependency graphs. Such tree backbones can be utilized to train a tree parser that provides pseudo path features. In particular, we introduce an algorithm, which we call weighted conversion, to associate every graph with a projective dependency tree. The tree reflects partial information about the corresponding graph. The key idea underlying this algorithm is to assign heuristic weights to all ordered pairs of words, and then find the tree with maximum total weight. In this way, a tree backbone of a given graph is automatically derived as an alternative to syntactic analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Approximation", |
| "sec_num": "4." |
| }, |
| { |
| "text": "We assign weights to all the possible edges (i.e., all pairs of words) and then determine which edges are to be kept by finding the maximum spanning tree. More formally, given a set of nodes V, each possible edge (i, j), where i, j \u2208 V, is assigned a heuristic weight \u03c9(i, j). Among all trees (denoted as T ) over V, the maximum spanning tree T max contains the maximum sum of values of edges:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Approximation", |
| "sec_num": "4." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "T_{\\max} = \\arg\\max_{(V, A_T) \\in \\mathcal{T}} \\sum_{(i,j) \\in A_T} \\omega(i, j)", |
| "eq_num": "( 3 )" |
| } |
| ], |
| "section": "Tree Approximation", |
| "sec_num": "4." |
| }, |
| { |
| "text": "We separate \u03c9(i, j) into three parts, \u03c9(i, j) = A(i, j) + B(i, j) + C(i, j), defined as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Approximation", |
| "sec_num": "4." |
| }, |
| { |
| "text": "\u2022 A(i, j) = a \u00b7 max{y(i, j), y(j, i)}: a is the weight for an edge that exists in the graph, ignoring direction; y(i, j) indicates whether the graph contains the arc from i to j.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Approximation", |
| "sec_num": "4." |
| }, |
| { |
| "text": "\u2022 B(i, j) = b \u00b7 y(i, j): b is the weight for an edge whose forward direction exists in the graph. \u2022 C(i, j) = n \u2212 |i \u2212 j|: this term estimates the importance of an edge, where n is the length of the given sentence. For dependency parsing, we consider short-distance edges to be more important, because such edges can be predicted more accurately in later parsing steps. \u2022 a \u226b b \u226b n (e.g., a > bn > n\u00b2): the converted tree should contain as many arcs of the original graph as possible, and the directions of those arcs should be preserved whenever possible. The relationship among a, b, and n guarantees this.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Approximation", |
| "sec_num": "4." |
| }, |
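The three-part edge score can be sketched directly. This is a minimal sketch, assuming y is a dict mapping arc pairs (i, j) to 0/1 and using illustrative coefficients a = n**3 and b = n**2 that satisfy a \u226b b \u226b n; the article does not fix these exact values:

```python
# Sketch of the weighted-conversion edge score omega(i, j) = A + B + C.
# y: dict with y[(i, j)] == 1 iff the deep dependency graph has arc i -> j.
def edge_weight(i, j, y, n, a=None, b=None):
    a = n ** 3 if a is None else a  # illustrative choice, a >> b
    b = n ** 2 if b is None else b  # illustrative choice, b >> n
    A = a * max(y.get((i, j), 0), y.get((j, i), 0))  # edge exists, any direction
    B = b * y.get((i, j), 0)                         # edge in forward direction
    C = n - abs(i - j)                               # shorter edges score higher
    return A + B + C

# Arcs present in the graph outweigh direction, which outweighs distance:
y = {(1, 2): 1}
n = 3
assert edge_weight(1, 2, y, n) > edge_weight(2, 1, y, n) > edge_weight(1, 3, y, n)
```

Feeding these weights to a projective maximum spanning tree algorithm, as the article does with Eisner's algorithm, then yields the tree backbone.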
| { |
| "text": "After all edges are weighted, we can use a maximum spanning tree algorithm to obtain the converted tree. To guarantee that the tree is projective, we choose Eisner's algorithm. For any graph, we can call this algorithm and get a corresponding tree. However, the tree is informative only when the given graph is dense enough. Fortunately, this condition holds for semantic dependency parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Approximation", |
| "sec_num": "4." |
| }, |
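Eisner's first-order dynamic program over complete and incomplete spans can be sketched as follows; for brevity this computes only the score of the best projective tree (backpointers for recovering the arcs are omitted), and it is a sketch rather than the article's implementation:

```python
# First-order Eisner DP: s[h][m] is the score of arc h -> m, node 0 is the root.
def eisner_best_score(s):
    """Return the score of the best projective dependency tree over s."""
    n = len(s)
    NEG = float("-inf")
    # [i][j][d]: d = 0 for left-headed (j -> i), d = 1 for right-headed (i -> j)
    I = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]  # incomplete spans
    C = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]  # complete spans
    for i in range(n):
        C[i][i][0] = C[i][i][1] = 0.0
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            # Attach: join two complete spans and add one arc between i and j.
            base = max(C[i][k][1] + C[k + 1][j][0] for k in range(i, j))
            I[i][j][0] = base + s[j][i]
            I[i][j][1] = base + s[i][j]
            # Complete: extend an incomplete span with a complete one.
            C[i][j][0] = max(C[i][k][0] + I[k][j][0] for k in range(i, j))
            C[i][j][1] = max(I[i][k][1] + C[k][j][1] for k in range(i + 1, j + 1))
    return C[0][n - 1][1]

# Toy example: forbid arcs into the root with -inf scores.
s = [[0.0, 10.0, 1.0],
     [float("-inf"), 0.0, 5.0],
     [float("-inf"), 3.0, 0.0]]
best = eisner_best_score(s)  # 15.0: arcs 0 -> 1 and 1 -> 2
```

The DP runs in O(n^3) time, which is why the weighted conversion itself is cheap relative to training the downstream parsers.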
| { |
| "text": "We present an empirical evaluation of different incremental graph spanning algorithms for CCG-style functor-argument analysis, LFG-style grammatical relation analysis, and HPSG-style semantic dependency analysis for English and Chinese. Linguistically speaking, these types of syntacto-semantic dependencies directly encode information such as coordination, extraction, raising, and control, as well as many other long-range dependencies. Experiments on a variety of formalisms and languages profile different aspects of transition-based deep dependency parsing models. Figure 7 visualizes the cross-format annotations assigned to the English sentence: A similar technique is almost impossible to apply to other crops, such as cotton, soybeans, and rice. This running example illustrates a range of linguistic phenomena such as coordination, verbal chains, argument and modifier prepositional phrases, complex noun phrases, and the so-called tough construction. The first format is from the popular PropBank corpus, which is widely used by various SRL systems. We can clearly see that, compared with SRL, SDP uses dense graphs to represent much more syntacto-semantic information. This difference suggests that we should explore different algorithms for producing SRL and SDP graphs. Another thing worth noting is that, for the same phenomenon, annotation schemes may not agree with each other; the coordination construction is one example. For more details about the differences among the data sets, please refer to Ivanova et al. (2012).", |
| "cite_spans": [ |
| { |
| "start": 1518, |
| "end": 1539, |
| "text": "Ivanova et al. (2012)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 563, |
| "end": 571, |
| "text": "Figure 7", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Set-up", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For CCG analysis, we conduct experiments on the English and Chinese CCGBanks (Hockenmaier and Steedman 2007; Tse and Curran 2010). Following the previous experimental set-up for English CCG parsing, we use Sections 02-21 as training data, Section 00 as development data, and Section 23 for testing. To conduct Chinese parsing experiments, we use data setting C of Tse and Curran (2012). For grammatical relation analysis, we conduct experiments on the Chinese GRBank data (Sun et al. 2014). The split into training, development, and test data also follows Sun et al.'s (2014) experiments.", |
| "cite_spans": [ |
| { |
| "start": 358, |
| "end": 379, |
| "text": "Tse and Curran (2012)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 463, |
| "end": 480, |
| "text": "(Sun et al. 2014)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 559, |
| "end": 578, |
| "text": "Sun et al.'s (2014)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Set-up", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We also evaluate all parsing models using more HPSG-grounded semantics-oriented data, namely, DeepBank 2 (Flickinger, Zhang, and Kordoni 2012) and EnjuBank (Miyao, Ninomiya, and Tsujii 2004). Different from the Penn Treebank-converted corpora, DeepBank's annotations are essentially based on the parsing results of a large-scale, linguistically precise HPSG grammar, namely, the LinGO English Resource Grammar (ERG; Flickinger 2000), and are manually disambiguated. As part of the full HPSG sign, the ERG also makes available a logical-form representation of propositional semantics in the framework of minimal recursion semantics (MRS; Copestake et al. 2005). Such semantic information is reduced into variable-free bilexical dependency graphs (Oepen and L\u00f8nning 2006; Ivanova et al. 2012). In summary, DeepBank gives the reduction of logical-form meaning representations with respect to MRS. EnjuBank (Miyao, Ninomiya, and Tsujii 2004) provides another corpus for semantic dependency parsing. This type of annotation is somewhat shallower than DeepBank's, given that only basic predicate-argument structures are concerned. Different from DeepBank but similar to CCGBank and GRBank, EnjuBank is semi-automatically converted from Penn Treebank-style annotations with linguistic heuristics. To conduct HPSG experiments, we use Sections 00 to 19 as training data and Section 20 as development data to tune parameters. [Figure 7(a), Format 1: Propositional semantics, from PropBank, for the running example sentence.]", |
| "cite_spans": [ |
| { |
| "start": 156, |
| "end": 195, |
| "text": "(Miyao, Ninomiya, and ichi Tsujii 2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 634, |
| "end": 656, |
| "text": "Copestake et al. 2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 743, |
| "end": 767, |
| "text": "(Oepen and L\u00f8nning 2006;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 768, |
| "end": 788, |
| "text": "Ivanova et al. 2012)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 901, |
| "end": 940, |
| "text": "(Miyao, Ninomiya, and ichi Tsujii 2004)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Set-up", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For final evaluation, we use Sections 00 to 20 as training data and Section 21 as test data. The DeepBank and EnjuBank data sets are from SemEval 2014 Task 8 (Oepen et al. 2014), and the data splitting policy follows the shared task. Table 1 gives a summary of the data sets for experiments. Experiments for English CCG-grounded analysis were performed using automatically assigned POS tags generated by a symbol-refined generative HMM tagger 3 (SR-HMM; Huang, Harper, and Petrov 2010). Experiments for English HPSG-grounded analysis used the POS tags provided by the shared task. For the experiments on Chinese CCGBank and GRBank, we use gold-standard POS tags.", |
| "cite_spans": [ |
| { |
| "start": 249, |
| "end": 268, |
| "text": "(Oepen et al. 2014)", |
| "ref_id": null |
| }, |
| { |
| "start": 555, |
| "end": 586, |
| "text": "Huang, Harper, and Petrov 2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 326, |
| "end": 333, |
| "text": "Table 1", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Set-up", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We use the averaged perceptron algorithm with early update to estimate parameters, and beam search for decoding. We set the beam size to 16 and the number of iterations to 20 for all experiments. The measure for comparing two dependency graphs is precision and recall over tokens that are defined as \u27e8w_h, w_d, l\u27e9 tuples, where w_h is the head, w_d is the dependent, and l is the relation. Labeled precision/recall (LP/LR) is the ratio of tuples correctly identified by the parser, and unlabeled precision/recall (UP/UR) is the same ratio ignoring l. The F-score is the harmonic mean of precision and recall. These measures correspond to attachment scores (LAS/UAS) in dependency tree parsing and are also used by the SemEval 2014 Task 8. The de facto standard for evaluating CCG parsers also considers supertags. Because no supertagging is performed in our experiments, only the unlabeled precision/recall/F-score is comparable to the results reported in other papers. The labeled performance reported here only considers the labels assigned to dependency arcs, which indicate argument types. For example, an arc label arg1 denotes that the dependent is the first argument of the head.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Set-up", |
| "sec_num": "5.1" |
| }, |
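The evaluation measures above can be sketched directly over sets of tuples; dropping the label from each tuple gives the unlabeled scores. The helper name prf is illustrative, not from the paper's tooling:

```python
# Precision/recall/F-score over <head, dependent, label> dependency tuples.
def prf(gold: set, pred: set):
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {(2, 1, "arg1"), (2, 3, "arg2"), (3, 4, "arg1")}
pred = {(2, 1, "arg1"), (2, 3, "arg1"), (3, 4, "arg1")}
lp, lr, lf = prf(gold, pred)                                     # labeled: one label error
up, ur, uf = prf({t[:2] for t in gold}, {t[:2] for t in pred})   # unlabeled: all arcs match
```

In the example, the mislabeled arc (2, 3) hurts LP/LR but not UP/UR, which is exactly the LAS/UAS distinction the text draws.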
| { |
| "text": "We evaluate the real running time of our final trained parsers using realistic data. The test sentences are collected from English Wikipedia and Chinese Gigaword (LDC2005T14). First, we show the influence of beam size in Figure 8. In this experiment, the DeepBank-trained models are used for testing. We can see that the parsers run in nearly linear time regardless of the beam width in realistic situations. Second, we report the averaged real running time of models trained on different data sets in Figure 9. Again, we can see that the parser runs in close to linear time for a variety of linguistically motivated representations. The results also suggest that our proposed transition-based parsers can automatically learn the complexity of linguistically motivated dependency structures from an annotated corpus. Note that although, within the deep parsing framework, the study of formal grammars is partially relevant for data-driven dependency parsing,", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 220, |
| "end": 228, |
| "text": "Figure 8", |
| "ref_id": "FIGREF9" |
| }, |
| { |
| "start": 503, |
| "end": 511, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parsing Efficiency", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Real running time relative to models trained on different data sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "our parsers rely on inductive inference from treebank data and only implicitly use a grammar. Figure 10 and Table 2 summarize the labeled parsing results on all five data sets. In this experiment, we distinguish parsing models with and without transition combination. All models take only surface word form and POS tag information and do not derive features from any syntactic analysis. The importance of transition combination is highlighted by the comparative evaluation of parsers with and without this mechanism. Significant improvements are observed over a wide range of conditions: Parsers based on different transition systems, for different languages, and for different formalisms almost always benefit. This result suggests a necessary strategy for designing transition systems that produce deep dependency graphs: Configurations should be essentially modified by every transition. Because of the importance of transition combination, all the following experiments utilize the transition combination strategy. 5.4.1 Model Diversity. For model ensemble, besides the accuracy of each single model, it is also essential that the models to be integrated be very different. We argue that heterogeneous parsing models can be built by varying the underlying transition systems. By reversing the sentence from right to left, we can build further model variants with the same transition system. To evaluate the difference between two models A and B, we define the following metric:", |
| "cite_spans": [ |
| { |
| "start": 1028, |
| "end": 1031, |
| "text": "5.4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 101, |
| "end": 110, |
| "text": "Figure 10", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 115, |
| "end": 122, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 9", |
| "sec_num": null |
| }, |
| { |
| "text": "2 \u00b7 |D_A \u2229 D_B| / (|D_A| + |D_B|)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Diversity and Parser Ensemble", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "where D_X denotes the set of dependencies returned by model X on held-out sentences. Tables 3 and 4 show the model diversity evaluated on English and Chinese data, respectively. We can see that parsing models built upon different transition systems do vary. Even for one specific transition system, different processing directions yield quite different parsing results. [Figure 10: Labeled parsing F-scores of different transition systems with and without transition combination. \"Standard\" denotes the standard systems, which do not combine an ARC transition with its following transition.]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 93, |
| "end": 107, |
| "text": "Tables 3 and 4", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Diversity and Parser Ensemble", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Parser ensemble has been shown to be very effective in boosting the performance of data-driven tree parsers (Nivre and McDonald 2008; Surdeanu and Manning 2010; Sun and Wan 2013). Empirically, the two proposed systems together with the existing THMM system exhibit complementary prediction powers, and their combination yields superior accuracy. We present a simple yet effective voting strategy for parser ensemble. For each pair of words in each sentence, we count the number of models that give positive predictions. If the number is greater than a threshold (we set it to half the number of models in this work), we add this arc to the final graph and label it with the most common label given by the models. Table 5 presents the parsing accuracy of the combined model, where six base models are utilized for voting. We can see that system ensemble is quite helpful. Given that our graph parsers all run in expected linear time, the combined system also runs very efficiently.", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 124, |
| "text": "(Nivre and McDonald 2008;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 125, |
| "end": 151, |
| "text": "Surdeanu and Manning 2010;", |
| "ref_id": null |
| }, |
| { |
| "start": 152, |
| "end": 169, |
| "text": "Sun and Wan 2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 715, |
| "end": 722, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parser Ensemble.", |
| "sec_num": "5.4.2" |
| }, |
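The arc-voting strategy can be sketched as follows, assuming each model's output is a set of (head, dependent, label) triples; the function name vote and its tie-breaking via label counts are illustrative assumptions:

```python
# Majority-vote ensemble over dependency graphs: keep an arc iff more than
# `threshold` models predict it, labeled with the most common predicted label.
from collections import Counter, defaultdict

def vote(model_outputs, threshold=None):
    """model_outputs: list of sets of (head, dependent, label) per model."""
    if threshold is None:
        threshold = len(model_outputs) / 2  # half the models, as in the paper
    labels = defaultdict(Counter)
    for graph_out in model_outputs:
        for h, d, l in graph_out:
            labels[(h, d)][l] += 1
    combined = set()
    for (h, d), counts in labels.items():
        if sum(counts.values()) > threshold:
            combined.add((h, d, counts.most_common(1)[0][0]))
    return combined

models = [
    {(1, 2, "arg1"), (2, 3, "arg2")},
    {(1, 2, "arg1"), (2, 3, "arg1")},
    {(1, 2, "arg2")},
]
graph = vote(models)  # both arcs survive; (1, 2) gets majority label "arg1"
```

Because each base parser runs in expected linear time and voting is linear in the number of predicted arcs, the combined system keeps the efficiency the text claims.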
| { |
| "text": "5.5 Impact of Syntactic Parsing. 5.5.1 Effectiveness of Syntactic Features. Syntactic parsing, especially full parsing, has been shown to be very important for boosting the performance of SRL, a well-studied shallow semantic parsing task (Punyakanok, Roth, and Yih 2008). According to the comprehensive evaluations presented in Punyakanok, Roth, and Yih (2008) and Zhuang and Zong [Table 2: Performance of different transition systems with and without transition combination on the test set of the DeepBank/EnjuBank data, on the development set of the English and Chinese CCGBank data, and on the development set of the Chinese GRBank data. S_x^std denotes the standard system, which does not combine an ARC transition with its following transition.]", |
| "cite_spans": [ |
| { |
| "start": 231, |
| "end": 263, |
| "text": "(Punyakanok, Roth, and Yih 2008)", |
| "ref_id": null |
| }, |
| { |
| "start": 333, |
| "end": 353, |
| "text": "Roth, and Yih (2008)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 374, |
| "end": 381, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parser Ensemble.", |
| "sec_num": "5.4.2" |
| }, |
| { |
| "text": "(2010) (see Table 6), there is an essential gap between full parsing-based and shallow parsing-based SRL systems. If we consider a system that takes only word forms and POS tags as input, the performance gap is even larger. When we consider semantics-oriented deep dependency structures, including the representations for CCG-grounded functor-argument analysis (Clark, Hockenmaier, and Steedman 2002), HPSG-grounded predicate-argument analysis (Miyao, Ninomiya, and Tsujii 2004), and the reduction of MRS (Ivanova et al. 2012), syntactic parses can also provide very useful features for disambiguation. To evaluate the impact of syntactic tree parsing, we add more features, namely, path features, to our parsing models. The detailed description of the syntactic features is presented in Section 3.3. In this work, we apply syntactic dependency parsers rather than phrase-structure parsers. Figure 11 summarizes the impact of features derived from syntactic trees. We can clearly see that syntactic features are effective in enhancing semantic dependency parsing. These informative features lead to average absolute improvements of 1.14% and 1.03% for English and Chinese CCG parsing, respectively. Compared with SRL, the improvement brought by syntactic parsing is smaller. We think one main reason for this difference is the information density of the different types of graphs. SRL graphs usually annotate only verbal predicates and their nominalizations, whereas the semantic graphs grounded in CCG and HPSG target all words. In other words, SRL provides partial analysis while semantic dependency parsing provides full analysis. Accordingly, SRL needs the structural information generated by a syntactic parser much more than semantic dependency parsing does.", |
| "cite_spans": [ |
| { |
| "start": 441, |
| "end": 480, |
| "text": "(Clark, Hockenmaier, and Steedman 2002)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 533, |
| "end": 572, |
| "text": "(Miyao, Ninomiya, and ichi Tsujii 2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 596, |
| "end": 617, |
| "text": "(Ivanova et al. 2012)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 115, |
| "end": 122, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 984, |
| "end": 993, |
| "text": "Figure 11", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parser Ensemble.", |
| "sec_num": "5.4.2" |
| }, |
| { |
| "text": "Model diversity between different models on the test set of the DeepBank/EnjuBank data and on the development set of the English CCGBank data. S_x^rev", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Table 3", |
| "sec_num": null |
| }, |
| { |
| "text": "means processing a sentence with system S_x but in the right-to-left word order. [Table 5: Performance of base and combined models on the test set of the DeepBank/EnjuBank data, on the development set of the English and Chinese CCGBank data, and on the development set of the Chinese GRBank data. Note that the labeled results for CCG parsing do not consider supertags.] [Table 6: Performance of English and Chinese SRL achieved by representative full and shallow parsing-based systems. The results are copied from Punyakanok, Roth, and Yih (2008) and Zhuang and Zong (2010).] Data-driven dependency tree parsers are mainly transition-based (Zhang and Nivre 2011) or graph-based (McDonald 2006; Torres Martins, Smith, and Xing 2009). In terms of overall per-token prediction, transition-based and graph-based tree parsers achieve comparable performance (Suzuki et al. 2009; Weiss et al. 2015). To evaluate the impact of the two tree parsing approaches on semantic dependency parsing, we use two tree parsers to serve our graph parser. The first is our in-house implementation of the algorithm presented in Zhang and Nivre (2011), and the second is a second-order graph-based parser 4 (Bohnet 2010). The tree parsers are trained with the unlabeled tree annotations provided by the English and Chinese CCGBank data. For both English and Chinese experiments, 5-fold cross-validation is performed to parse the training data to avoid overfitting. The accuracy of the tree parsers is shown in Table 7. The results presented in Figure 12 indicate that the two parsers are also equally effective for producing semantic analysis. This result is somewhat non-obvious, given that the combination of a graph-based and a transition-based parser usually gives significantly better parsing performance (Nivre and McDonald 2008; Torres Martins et al. 2008).", |
| "cite_spans": [ |
| { |
| "start": 511, |
| "end": 543, |
| "text": "Punyakanok, Roth, and Yih (2008)", |
| "ref_id": null |
| }, |
| { |
| "start": 548, |
| "end": 569, |
| "text": "Zhuang and Zong (2010", |
| "ref_id": null |
| }, |
| { |
| "start": 570, |
| "end": 592, |
| "text": "-based (McDonald 2006;", |
| "ref_id": null |
| }, |
| { |
| "start": 593, |
| "end": 630, |
| "text": "Torres Martins, Smith, and Xing 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 756, |
| "end": 776, |
| "text": "(Suzuki et al. 2009;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 777, |
| "end": 795, |
| "text": "Weiss et al. 2015)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 1014, |
| "end": 1036, |
| "text": "Zhang and Nivre (2011)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 1097, |
| "end": 1110, |
| "text": "(Bohnet 2010)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1695, |
| "end": 1720, |
| "text": "(Nivre and McDonald 2008;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1721, |
| "end": 1748, |
| "text": "Torres Martins et al. 2008)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 83, |
| "end": 90, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 369, |
| "end": 376, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 1397, |
| "end": 1404, |
| "text": "Table 7", |
| "ref_id": "TABREF12" |
| }, |
| { |
| "start": 1428, |
| "end": 1437, |
| "text": "Figure 12", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Table 3", |
| "sec_num": null |
| }, |
| { |
| "text": "When syntactic information is not available, we propose a tree approximation technique to induce tree backbones from deep dependency graphs. In particular, our technique guarantees that the automatically derived trees are projective, which is a necessary condition for a number of effective tree parsing algorithms. We can utilize these pseudo trees as an alternative to syntactic analysis. To evaluate the effectiveness of tree approximation, we compare the contributions of syntactic trees and pseudo trees to semantic dependency parsing. In this experiment, we use a transition-based tree parser to generate the automatic analysis. Figure 13 presents the results. Generally speaking, pseudo trees contribute to semantic dependency parsing as well as syntactic trees do. Sometimes, they perform even better. There is a considerable drop when the DeepBank data are used. We think the main reason is the density of DeepBank graphs: Because there are fewer edges in the original graphs, it is harder to extract informative pseudo trees, and the final graph parsing benefits less.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 633, |
| "end": 642, |
| "text": "Figure 13", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Effectiveness of Tree Approximation", |
| "sec_num": "5.6" |
| }, |
| { |
| "text": "Parsing accuracy with and without syntactic features. The syntactic trees for experiments on DeepBank and EnjuBank data sets are provided by the SemEval 2014 shared task, and they are automatically generated by the Stanford Parser. The syntactic trees for experiments on English and Chinese CCG data sets are generated by our in-house implementation of the model introduced in Zhang and Nivre (2011) .", |
| "cite_spans": [ |
| { |
| "start": 377, |
| "end": 399, |
| "text": "Zhang and Nivre (2011)", |
| "ref_id": "BIBREF47" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "It is also possible to build a parser ensemble on pseudo-tree-enhanced models. However, system combination here is not as effective as integrating non-tree models. Table 8 summarizes the detailed parsing accuracy. We can see that system ensemble is still helpful, though the improvement is limited. For comparison with grammar-driven approaches, we consider two types of parsers. \u2022 The first type of parser implements a shift-reduce parsing architecture and also uses beam search for practical decoding. In particular, we compare our parser with the state-of-the-art CCG parser introduced in Xu, Clark, and", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 179, |
| "end": 186, |
| "text": "Table 8", |
| "ref_id": "TABREF13" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 11", |
| "sec_num": null |
| }, |
| { |
| "text": "Parsing accuracy based on syntactic and pseudo tree features. All trees are generated by our in-house implementation of the model introduced in Zhang and Nivre (2011). Zhang (2014) . 5 This parser extends a shift-reduce CFG parser (Zhang and Clark 2011a ) with a dependency model.", |
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 180, |
| "text": "Zhang (2014)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 183, |
| "end": 184, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 231, |
| "end": 253, |
| "text": "(Zhang and Clark 2011a", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 13", |
| "sec_num": null |
| }, |
| { |
| "text": "r The second type of parser implements the chart parsing architecture with some refinements. For CCG analysis, we focus on the parser proposed by Auli and Lopez (2011b). The basic system architecture follows the well-engineered C&C Parser, 6 and additionally applies a number of advanced machine learning and optimization techniques, including belief propagation, dual decomposition Auli and Lopez (2011a) , and parameter estimation with softmax-margin loss (Auli and Lopez 2011b) , to enhance the results. For HPSG analysis, we compare with the well-studied Enju Table 9 Parsing results on test sets obtained by representative parsers. State-of-the-art results on these data sets, as reported in Oepen et al. (2014) , Martins and Almeida (2014) , Xu, Clark, and Zhang (2014) , Auli and Lopez (2011b) , Du, Sun, and Wan (2015) , Sun et al. (2014) Parser, 7 which develops a number of advanced techniques for discriminative deep parsing-for example, maximum entropy estimation with feature forest and efficient decoding with supertagging and CFG-filtering (Matsuzaki, Miyao, and Tsujii 2007) . Table 9 shows the final results on the test data for each data set. The representative shift-reduce parser for comparison utilizes a very similar learning and decoding architectures to our system. Similar to our parser, Xu, Clark, and Zhang's (2014) parser incrementally processes a sentence and uses a beam decoder that performs an inexact search. Xu, Clark, and Zhang's parser sets beam width to 128, while ours is 16. It also uses the structured prediction algorithm for parameter estimation. The major difference is that the shift-reduce CCG parser explicitly utilizes a core grammar to guide decoding, whereas our parser excludes all such information. Actually, our models reported here also exclude all syntactic information because no syntactic parse is used for feature extraction. 
We can see that our individual system based on the two stack transition system achieves equivalent performance to the CCG-driven parser. Moreover, when this individual system is augmented with tree approximation, the accuracy is significantly improved. Note that the individual system with both settings does not rely on any explicit syntactic information. This result on one hand indicates the effectiveness of adapting syntactic parsing techniques for full semantic parsing, and on the other hand suggests the possibility of using semantically structural (not syntactically structural) information only to achieve high-accuracy semantic parsing.", |
| "cite_spans": [ |
| { |
| "start": 383, |
| "end": 405, |
| "text": "Auli and Lopez (2011a)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 458, |
| "end": 480, |
| "text": "(Auli and Lopez 2011b)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 697, |
| "end": 716, |
| "text": "Oepen et al. (2014)", |
| "ref_id": null |
| }, |
| { |
| "start": 719, |
| "end": 745, |
| "text": "Martins and Almeida (2014)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 748, |
| "end": 775, |
| "text": "Xu, Clark, and Zhang (2014)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 778, |
| "end": 800, |
| "text": "Auli and Lopez (2011b)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 803, |
| "end": 826, |
| "text": "Du, Sun, and Wan (2015)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 829, |
| "end": 846, |
| "text": "Sun et al. (2014)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 1055, |
| "end": 1090, |
| "text": "(Matsuzaki, Miyao, and Tsujii 2007)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1313, |
| "end": 1342, |
| "text": "Xu, Clark, and Zhang's (2014)", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 564, |
| "end": 571, |
| "text": "Table 9", |
| "ref_id": null |
| }, |
| { |
| "start": 1093, |
| "end": 1100, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 13", |
| "sec_num": null |
| }, |
| { |
| "text": "Statistical parsers based on chart parsing are able to perform a more principled search and therefore usually achieve better parsing accuracy than a normal shift-reduce parser. We also compare our parsing models with two state-of-the-art chart parsers, namely, the Enju Parser and Auli and Lopez's (2011b) parser. Different from Xu, Clark, and Zhang's (2014) shift-reduce parser and our models, Auli and Lopez's (2011b) parser does not guarantee to produce analysis for arbitrary sentences. Usually, the numerical performance evaluated on all sentences is lower than the results obtained on sentences that can be parsed. Note that Auli and Lopez (2011b) only reported results on sentences that are covered, whereas Oepen et al. (2014) reported results on all sentences, which is achieved by Enju Parser. From Table 9 , we can clearly see that our graph-spanning models are very competitive. The best individual and combined models outperform the Enju Parser and perform equally well to Auli and Lopez's (2011b) parser. It is worth noting that strictly less information is used by our parsers.", |
| "cite_spans": [ |
| { |
| "start": 281, |
| "end": 305, |
| "text": "Auli and Lopez's (2011b)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 329, |
| "end": 358, |
| "text": "Xu, Clark, and Zhang's (2014)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 631, |
| "end": 653, |
| "text": "Auli and Lopez (2011b)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 715, |
| "end": 734, |
| "text": "Oepen et al. (2014)", |
| "ref_id": null |
| }, |
| { |
| "start": 986, |
| "end": 1010, |
| "text": "Auli and Lopez's (2011b)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 809, |
| "end": 816, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Figure 13", |
| "sec_num": null |
| }, |
| { |
| "text": "Other Data-Driven Parsers. We also compare our parser with recently developed data-driven, factorization models (Martins and Almeida 2014; Du, Sun, and Wan 2015) . Different from projective but similar to non-projective tree parsing, decoding for factorization models where very basic second-order sibling factors are incorporated is NP-hard. See the proof presented in our early work (Du, Sun, and Wan 2015) for details. To perform principled decoding, dual decomposition is used and achieves good empirical results (Martins and Almeida 2014; Du, Sun, and Wan 2015) .", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 138, |
| "text": "(Martins and Almeida 2014;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 139, |
| "end": 161, |
| "text": "Du, Sun, and Wan 2015)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 385, |
| "end": 408, |
| "text": "(Du, Sun, and Wan 2015)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 517, |
| "end": 543, |
| "text": "(Martins and Almeida 2014;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 544, |
| "end": 566, |
| "text": "Du, Sun, and Wan 2015)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison with", |
| "sec_num": "5.7.2" |
| }, |
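By way of contrast, first-order (arc-factored) decoding for dependency graphs is easy precisely because no tree constraint applies: each candidate arc can be accepted independently whenever its score is positive. A minimal illustration of this baseline (our own sketch, not the cited second-order parsers):

```python
def arc_factored_graph(n, score):
    """First-order graph decoding: with no tree constraint, every arc
    is decided independently, so we simply keep all positive-scoring
    arcs. score(h, d) -> float; words are 1..n and 0 is the root."""
    return {(h, d)
            for d in range(1, n + 1)
            for h in range(0, n + 1)
            if h != d and score(h, d) > 0.0}
```

Adding even pairwise sibling factors couples these per-arc decisions, which is what makes exact second-order decoding NP-hard and motivates the dual decomposition used in the cited work.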
| { |
| "text": "From Table 9 , we can see that the transition-based approach augmented with tree approximation is comparable to the factorization approach in general. Compared with the Turbo Parser, our individual and hybrid models perform significantly worse on DeepBank but significantly better on EnjuBank. We think one main reason is because of the annotation styles. Though both corpora are based on HPSG, the annotations in question are quite different. DeepBank graphs are more sparse than EnjuBank, which makes tree approximation less effective. It seems that the transition-based parser suffers more when fewer output edges are targeted. The two approaches achieve equivalent performance for CCG parsing.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 12, |
| "text": "Table 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with", |
| "sec_num": "5.7.2" |
| }, |
| { |
| "text": "Deep linguistic processing is concerned with NLP approaches that aim at modeling the complexity of natural languages in rich linguistic representations. Such approaches are typically related to a particular computational linguistic theory (e.g., CCG, LFG, and HPSG). Parsing in these formalisms provides an elegant way to generate deep syntactosemantic dependency structures with high quality (Clark and Curran 2007; Miyao, Sagae, and Tsujii 2007; . The incremental shift-reduce parsing architecture has been implemented for CCG parsing (Zhang and Clark 2011a; Ambati et al. 2015) . Besides using phrase-structure rules only, a shift-reduce parser can be enhanced by incorporating a dependency model (Xu, Clark, and Zhang 2014) . Our parser and the two above parsers have some essential resemblances, including learning and decoding algorithms. The main difference is the usage of syntactic and grammatical information. The comparison in Section 5.7 gives a rough idea of the impact of explicitly using grammatical constraints. A deep-grammar-guided parsing model usually cannot produce full coverage and the time complexity of the corresponding parsing algorithms is very high. Some NLP applications may favor lightweight solutions to build deep dependency structures.", |
| "cite_spans": [ |
| { |
| "start": 393, |
| "end": 416, |
| "text": "(Clark and Curran 2007;", |
| "ref_id": null |
| }, |
| { |
| "start": 417, |
| "end": 447, |
| "text": "Miyao, Sagae, and Tsujii 2007;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 537, |
| "end": 560, |
| "text": "(Zhang and Clark 2011a;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 561, |
| "end": 580, |
| "text": "Ambati et al. 2015)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 700, |
| "end": 727, |
| "text": "(Xu, Clark, and Zhang 2014)", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6." |
| }, |
| { |
| "text": "Different from grammar-guided approaches, data-driven approaches make essential use of machine learning from linguistic annotations in order to parse new sentences. Such approaches, for example, transition-based (Yamada and Matsumoto 2003; Nivre 2008 ) and graph-based (McDonald 2006; Torres Martins, Smith, and Xing 2009) models, have attracted the most attention of dependency parsing in recent years. Several successful parsers (e.g., MST, Mate, and Malt parsers) have been built and applied to many NLP applications. Recently, two advanced techniques have been studied to enhance a transition-based parser. First, developing features has been shown crucial to advancing parsing accuracy and a very rich feature set is carefully evaluated by Zhang and Nivre (2011) . Second, beyond deterministic greedy search, beam search and principled dynamic programming strategies have been used to explore more possible hypotheses (Zhang and Clark 2008; Huang and Sagae 2010) . When we implement our graph parser, we also leverage rich features and beam search to obtain good parsing accuracy.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 239, |
| "text": "(Yamada and Matsumoto 2003;", |
| "ref_id": null |
| }, |
| { |
| "start": 240, |
| "end": 250, |
| "text": "Nivre 2008", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 269, |
| "end": 284, |
| "text": "(McDonald 2006;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 285, |
| "end": 322, |
| "text": "Torres Martins, Smith, and Xing 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 745, |
| "end": 767, |
| "text": "Zhang and Nivre (2011)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 923, |
| "end": 945, |
| "text": "(Zhang and Clark 2008;", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 946, |
| "end": 967, |
| "text": "Huang and Sagae 2010)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6." |
| }, |
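The beam-search strategy mentioned above is generic; a schematic decoder over transition sequences (a sketch under the assumption of an additive model score, not the paper's exact implementation) looks like:

```python
def beam_decode(initial, expand, is_final, beam_width=16):
    """Beam search over transition sequences (sketch).

    initial: start configuration; expand(config) yields
    (action, next_config, score_delta) triples; is_final tests
    completion. Only the beam_width best partial hypotheses are
    kept at each step, so the search is inexact, as noted for
    the parsers discussed in this article.
    """
    beam = [(0.0, [], initial)]  # (model score, action history, config)
    while not all(is_final(c) for _, _, c in beam):
        candidates = []
        for s, hist, c in beam:
            if is_final(c):
                candidates.append((s, hist, c))
                continue
            for action, nxt, delta in expand(c):
                candidates.append((s + delta, hist + [action], nxt))
        candidates.sort(key=lambda x: x[0], reverse=True)
        beam = candidates[:beam_width]
    return beam[0]  # highest-scoring complete hypothesis
```

In a real parser the score delta would come from a feature-based linear model (e.g., a structured perceptron over rich features), and configurations would be stack/buffer states rather than the toy strings used in the example below.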
| { |
| "text": "Most research concentrated on surface dependency structures, and the majority of existing approaches are limited to producing only tree-shaped graphs. We notice three distinguished exceptions in early work. Sagae and Tsujii (2008) proposed a DAG parser that is able to handle projective directed dependency graphs, and that uses the pseudo-projective parsing technique (Nivre and Nilsson 2005) to build crossing arcs. Titov et al. (2009) and Henderson et al. (2013) introduced non-planar parsing to parse PropBank (Palmer, Gildea, and Kingsbury 2005) structures. However, neither technique handles crossing arcs fully well. There have been a number of papers trying to build non-projective trees, which inspired the design of our transition systems. Especially, we borrow key ideas from Nivre (2009) , G\u00f3mez-Rodr\u00edguez and Nivre (2010), and G\u00f3mez-Rodr\u00edguez and Nivre (2013) . In addition to the investigation on the transition-based approach, McDonald and Pereira (2006) presented a factorization parser that can generate dependency graphs in which a word may depend on multiple heads, and evaluated it on the Danish Treebank. Very recently, the dual decomposition technique has been adopted to achieve principled decoding for factorization models. High-accuracy models have been introduced in Martins and Almeida (2014) and Du, Sun, and Wan (2015) .", |
| "cite_spans": [ |
| { |
| "start": 207, |
| "end": 230, |
| "text": "Sagae and Tsujii (2008)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 369, |
| "end": 393, |
| "text": "(Nivre and Nilsson 2005)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 418, |
| "end": 437, |
| "text": "Titov et al. (2009)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 442, |
| "end": 465, |
| "text": "Henderson et al. (2013)", |
| "ref_id": null |
| }, |
| { |
| "start": 514, |
| "end": 550, |
| "text": "(Palmer, Gildea, and Kingsbury 2005)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 787, |
| "end": 799, |
| "text": "Nivre (2009)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 860, |
| "end": 872, |
| "text": "Nivre (2013)", |
| "ref_id": null |
| }, |
| { |
| "start": 942, |
| "end": 969, |
| "text": "McDonald and Pereira (2006)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1324, |
| "end": 1347, |
| "text": "Du, Sun, and Wan (2015)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6." |
| }, |
| { |
| "text": "We study transition-based approaches that produce general dependency graphs directly from input sequences of words, in a way nearly as simple as tree parsers. We introduce two new graph-spanning algorithms to generate arbitrary directed graphs, which suit deep dependency parsing well. We also introduce transition combination and tree approximation for statistical disambiguation. Statistical parsers built upon these new techniques have been evaluated with dependency structures that are extracted from linguistically deep CCG, LFG, and HPSG derivations. Our models achieve state-of-the-art performance on five representative data sets for English and Chinese parsing. Experiments demonstrate the effectiveness of grammar-free, transition-based approaches to dealing with complex linguistic phenomena beyond surface syntax.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7." |
| }, |
| { |
| "text": "In addition to deep dependency parsing, many other NLP tasks (e.g., quantifier scope disambiguation [Manshadi, Gildea, and Allen 2013] and event extraction [Li, Ji, and Huang 2013] ), can be formulated as graph spanning problems. We think such tasks can benefit from algorithms that span general graphs rather than trees, and our new transition-based parsers can provide practical solutions to these tasks.", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 134, |
| "text": "[Manshadi, Gildea, and Allen 2013]", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 156, |
| "end": 180, |
| "text": "[Li, Ji, and Huang 2013]", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7." |
| }, |
| { |
| "text": "We assume that at most one edge exists between two words. This is a reasonable assumption for a linguistic representation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://moin.delph-in.net/DeepBank.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.code.google.com/p/umd-featured-parser/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.code.google.com/p/mate-tools/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The unlabeled parsing results are not reported in the original paper. The figures presented inTable 9are provided by Wenduan Xu.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://svn.ask.it.usyd.edu.au/trac/candc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://kmcs.nii.ac.jp/enju/?lang=en.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by the National Natural Science Foundation of China under grants 61300064 and 61331011, and the National High-Tech R&D Program under grant 2015AA015403. We are very grateful to the anonymous reviewers for their insightful and constructive comments and suggestions. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "An incremental algorithm for transition-based CCG parsing", |
| "authors": [ |
| { |
| "first": "Bharat", |
| "middle": [], |
| "last": "Ambati", |
| "suffix": "" |
| }, |
| { |
| "first": "Tejaswini", |
| "middle": [], |
| "last": "Ram", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Deoskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "53--63", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ambati, Bharat Ram, Tejaswini Deoskar, Mark Johnson, and Mark Steedman. 2015. An incremental algorithm for transition-based CCG parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 53-63, Denver, CO.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lopez", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "470--480", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Auli, Michael and Adam Lopez. 2011a. A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 470-480, Portland, OR. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Training a log-linear parser with loss functions via softmax-margin", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lopez", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "333--343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Auli, Michael and Adam Lopez. 2011b. Training a log-linear parser with loss functions via softmax-margin. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 333-343, Edinburgh.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Top accuracy and fast dependency parsing is not a contradiction", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "89--97", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bohnet, Bernd. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 89-97, Beijing.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Introduction: Grammars as mental representations of language", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Bresnan", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "M" |
| ], |
| "last": "Kaplan", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "xvii--lii", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bresnan, J. and R. M. Kaplan. 1982. Introduction: Grammars as mental representations of language. In J. Bresnan, editor, The Mental Representation of Grammatical Relations. MIT Press, Cambridge, MA, pages xvii-lii.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Wide-coverage efficient statistical parsing with CCG and log-linear models", |
| "authors": [ |
| { |
| "first": "Jinho", |
| "middle": [ |
| "D" |
| ], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "33", |
| "issue": "", |
| "pages": "493--552", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Choi, Jinho D. and Martha Palmer. 2011. Getting the most out of transition-based dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 687-692, Portland, OR. Clark, Stephen and James R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Building deep dependency structures using a wide-coverage CCG parser", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "327--334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Clark, Stephen, Julia Hockenmaier, and Mark Steedman. 2002. Building deep dependency structures using a wide-coverage CCG parser. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 327-334. Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collins, Michael. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 1-8. Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Incremental parsing with the perceptron algorithm", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Roark", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume", |
| "volume": "", |
| "issue": "", |
| "pages": "111--118", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collins, Michael and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 111-118, Barcelona.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Minimal recursion semantics: An introduction", |
| "authors": [ |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Copestake", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Carl", |
| "middle": [], |
| "last": "Pollard", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [ |
| "A" |
| ], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Research on Language and Computation", |
| "volume": "3", |
| "issue": "", |
| "pages": "281--332", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Copestake, Ann, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation, 3:281-332.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A fundamental algorithm for dependency parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [ |
| "A" |
| ], |
| "last": "Covington", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of the 39th Annual ACM Southeast Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "95--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Covington, Michael A. 2001. A fundamental algorithm for dependency parsing. In Proceedings of the 39th Annual ACM Southeast Conference, pages 95-102.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank", |
| "authors": [ |
| { |
| "first": "Yantao", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the 2007", |
| "volume": "33", |
| "issue": "", |
| "pages": "355--396", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Du, Yantao, Weiwei Sun, and Xiaojun Wan. 2015. A data-driven, factorization parser for CCG dependency structures. In Proceedings of the 53rd Annual Meeting of the 2007. CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank. Computational Linguistics, 33(3):355-396.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Dynamic programming for linear-time incremental parsing", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1077--1086", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huang, Liang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1077-1086, Uppsala.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Self-training with products of latent variable grammars", |
| "authors": [ |
| { |
| "first": "Zhongqiang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Harper", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "12--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Huang, Zhongqiang, Mary Harper, and Slav Petrov. 2010. Self-training with products of latent variable grammars. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 12-22, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Who did what to whom? A contrastive study of syntacto-semantic dependencies", |
| "authors": [ |
| { |
| "first": "Angelina", |
| "middle": [], |
| "last": "Ivanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Lilja", |
| "middle": [], |
| "last": "\u00d8vrelid", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Sixth Linguistic Annotation Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "2--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ivanova, Angelina, Stephan Oepen, Lilja \u00d8vrelid, and Dan Flickinger. 2012. Who did what to whom? A contrastive study of syntacto-semantic dependencies. In Proceedings of the Sixth Linguistic Annotation Workshop, pages 2-11, Jeju Island.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Efficient third-order dependency parsers", |
| "authors": [ |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koo, Terry and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1-11, Uppsala.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Joint event extraction via structured prediction with global features", |
| "authors": [ |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "73--82", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, Qi, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73-82, Sofia.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Plurality, negation, and quantification: Towards comprehensive quantifier scope disambiguation", |
| "authors": [ |
| { |
| "first": "Mehdi", |
| "middle": [], |
| "last": "Manshadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Allen", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "64--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manshadi, Mehdi, Daniel Gildea, and James Allen. 2013. Plurality, negation, and quantification: Towards comprehensive quantifier scope disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 64-72, Sofia.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Priberam: A turbo semantic parser with second order features", |
| "authors": [ |
| { |
| "first": "Andr\u00e9", |
| "middle": [ |
| "F T" |
| ], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "C" |
| ], |
| "last": "Mariana", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Almeida", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "471--476", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martins, Andr\u00e9 F. T. and Mariana S. C. Almeida. 2014. Priberam: A turbo semantic parser with second order features. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 471-476, Dublin.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Efficient HPSG parsing with supertagging and CFG-filtering", |
| "authors": [ |
| { |
| "first": "Takuya", |
| "middle": [], |
| "last": "Matsuzaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 20th International Joint Conference on Artificial intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1671--1676", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matsuzaki, Takuya, Yusuke Miyao, and Jun'ichi Tsujii. 2007. Efficient HPSG parsing with supertagging and CFG-filtering. In Proceedings of the 20th International Joint Conference on Artificial intelligence, pages 1671-1676, San Francisco, CA.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Discriminative Learning and Spanning Tree Algorithms for Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McDonald, Ryan. 2006. Discriminative Learning and Spanning Tree Algorithms for Dependency Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Online learning of approximate dependency parsing algorithms", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "6", |
| "issue": "", |
| "pages": "81--88", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McDonald, Ryan and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006)), volume 6, pages 81-88, Trento.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Analyzing and integrating dependency parsers", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Computational Linguistics", |
| "volume": "37", |
| "issue": "1", |
| "pages": "197--230", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McDonald, Ryan T. and Joakim Nivre. 2011. Analyzing and integrating dependency parsers. Computational Linguistics, 37(1):197-230.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank", |
| "authors": [ |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Takashi", |
| "middle": [], |
| "last": "Ninomiya", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "684--693", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miyao, Yusuke, Takashi Ninomiya, and Jun'ichi Tsujii. 2004. Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank. In IJCNLP, pages 684-693, Hainan Island.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Task-oriented evaluation of syntactic parsers and their representations", |
| "authors": [ |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Rune", |
| "middle": [], |
| "last": "Saetre", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Takuya", |
| "middle": [], |
| "last": "Matsuzaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL-08: HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "46--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miyao, Yusuke, Rune Saetre, Kenji Sagae, Takuya Matsuzaki, and Jun'ichi Tsujii. 2008. Task-oriented evaluation of syntactic parsers and their representations. In Proceedings of ACL-08: HLT, pages 46-54, Columbus, OH.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Towards frameworkindependent evaluation of deep linguistic parsers", |
| "authors": [ |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the GEAF 2007 Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "238--258", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miyao, Yusuke, Kenji Sagae, and Jun'ichi Tsujii. 2007. Towards framework- independent evaluation of deep linguistic parsers. In Proceedings of the GEAF 2007 Workshop, pages 238-258, Stanford, CA.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Feature forest models for probabilistic HPSG parsing", |
| "authors": [ |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "1", |
| "pages": "35--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miyao, Yusuke and Jun'ichi Tsujii. 2008. Feature forest models for probabilistic HPSG parsing. Computational Linguistics, 34(1):35-80.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Algorithms for deterministic incremental dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "", |
| "pages": "513--553", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, Joakim. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34:513-553.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Integrating graph-based and transition-based dependency parsers", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Suntec", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "950--958", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, Joakim. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 351-359, Suntec. Nivre, Joakim and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL-08: HLT, pages 950-958, Columbus, OH.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Semeval 2014 task 8: Broad-coverage semantic dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "63--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, Joakim and Jens Nilsson. 2005. Pseudo-projective dependency parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 99-106, Ann Arbor, MI. Oepen, Stephan, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, Angelina Ivanova, and Yi Zhang. 2014. Semeval 2014 task 8: Broad-coverage semantic dependency parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 63-72, Dublin.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Discriminant-based MRS banking", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC-2006)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oepen, Stephan and Jan Tore L\u00f8nning. 2006. Discriminant-based MRS banking. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC-2006), Genoa.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "The proposition bank: An annotated corpus of semantic roles", |
| "authors": [ |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Kingsbury", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computational Linguistics", |
| "volume": "31", |
| "issue": "", |
| "pages": "71--106", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Palmer, Martha, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31:71-106.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "The importance of syntactic parsing and inference in semantic role labeling", |
| "authors": [ |
| { |
| "first": "Carl", |
| "middle": [], |
| "last": "Pollard", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [ |
| "A" |
| ], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "2", |
| "pages": "257--287", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pollard, Carl and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. The University of Chicago Press, Chicago. Punyakanok, Vasin, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Large-scale semantic parsing without question-answer pairs", |
| "authors": [ |
| { |
| "first": "Siva", |
| "middle": [], |
| "last": "Reddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association for Computational Linguistics (TACL)", |
| "volume": "2", |
| "issue": "", |
| "pages": "377--392", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reddy, Siva, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association for Computational Linguistics (TACL), 2:377-392.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Parser combination by reparsing", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "129--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sagae, Kenji and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 129-132, Stroudsburg, PA.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Shift-reduce dependency DAG parsing", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "753--760", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sagae, Kenji and Jun'ichi Tsujii. 2008. Shift-reduce dependency DAG parsing. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 753-760, Manchester. Steedman, Mark. 2000. The Syntactic Process. MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Weiwei and Xiaojun Wan. 2013. Data-driven, PCFG-based and pseudo-PCFG-based models for Chinese dependency parsing", |
| "authors": [ |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yantao", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Xin", |
| "middle": [], |
| "last": "Kou", |
| "suffix": "" |
| }, |
| { |
| "first": "Shuoyang", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "301--314", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, Weiwei, Yantao Du, Xin Kou, Shuoyang Ding, and Xiaojun Wan. 2014. Grammatical relations in Chinese: GB-ground extraction and data-driven parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 446-456, Baltimore. Sun, Weiwei and Xiaojun Wan. 2013. Data-driven, PCFG-based and pseudo-PCFG-based models for Chinese dependency parsing. Transactions of the Association for Computational Linguistics (TACL), 1:301-314.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "The CONLL 2008 shared task on joint parsing of syntactic and semantic dependencies", |
| "authors": [ |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Johansson", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Meyers", |
| "suffix": "" |
| }, |
| { |
| "first": "Llu\u00eds", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "649--652", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Surdeanu, Mihai, Richard Johansson, Adam Meyers, Llu\u00eds M\u00e0rquez, and Joakim Nivre. 2008. The CONLL 2008 shared task on joint parsing of syntactic and semantic dependencies. In CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 159-177, Manchester. Surdeanu, Mihai and Christopher D. Manning. 2010. Ensemble models for dependency parsing: Cheap and good? In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 649-652, Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "An empirical study of semi-supervised structured conditional models for dependency parsing", |
| "authors": [ |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "551--560", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Suzuki, Jun, Hideki Isozaki, Xavier Carreras, and Michael Collins. 2009. An empirical study of semi-supervised structured conditional models for dependency parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 551-560, Singapore.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Online graph planarisation for synchronous parsing of semantic and syntactic dependencies", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Paola", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriele", |
| "middle": [], |
| "last": "Musillo", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "342--350", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Titov, Ivan, James Henderson, Paola Merlo, and Gabriele Musillo. 2009. Online graph planarisation for synchronous parsing of semantic and syntactic dependencies. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 1562-1567, San Francisco, CA. Torres Martins, Andre, Noah Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 342-350, Suntec.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Stacking dependency parsers", |
| "authors": [ |
| { |
| "first": "Torres", |
| "middle": [], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "Andr\u00e9", |
| "middle": [], |
| "last": "Filipe", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [ |
| "P" |
| ], |
| "last": "Xing", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "157--166", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Torres Martins, Andr\u00e9 Filipe, Dipanjan Das, Noah A. Smith, and Eric P. Xing. 2008. Stacking dependency parsers. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 157-166, Honolulu, HI.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Chinese CCGbank: Extracting CCG derivations from the Penn Chinese treebank", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Tse", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1083--1091", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tse, Daniel and James R. Curran. 2010. Chinese CCGbank: Extracting CCG derivations from the Penn Chinese treebank. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1083-1091, Beijing.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "The challenges of parsing Chinese with combinatory categorial grammar", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Tse", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "295--304", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tse, Daniel and James R. Curran. 2012. The challenges of parsing Chinese with combinatory categorial grammar. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 295-304, Montr\u00e9al.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Structured training for neural network transition-based parsing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Alberti", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "323--333", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Weiss, David, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323-333, Beijing.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Statistical dependency analysis with support vector machines", |
| "authors": [ |
| { |
| "first": "Wenduan", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "195--206", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xu, Wenduan, Stephen Clark, and Yue Zhang. 2014. Shift-reduce CCG parsing with a dependency model. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 218-227, Baltimore, MD. Yamada, Hiroyasu and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In 8th International Workshop of Parsing Technologies (IWPT2003), pages 195-206, Nancy.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing", |
| "authors": [ |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Chew Lim Tan", |
| "suffix": "" |
| }, |
| { |
| "first": ";", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark ; Honolulu", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "I" |
| ], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": ";", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "37", |
| "issue": "", |
| "pages": "105--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, Hui, Min Zhang, Chew Lim Tan, and Haizhou Li. 2009. K-best combination of syntactic parsers. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1552-1560, Singapore. Zhang, Yue and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 562-571, Honolulu, HI. Zhang, Yue and Stephen Clark. 2011a. Shift-reduce CCG parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 683-692, Portland, OR. Zhang, Yue and Stephen Clark. 2011b. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105-151.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "and Chengqing Zong. 2010. A minimum error weighting combination strategy for Chinese semantic role labeling", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "188--193", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, Yue and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188-193, Portland, OR. Zhuang, Tao and Chengqing Zong. 2010. A minimum error weighting combination strategy for Chinese semantic role labeling. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1362-1370, Beijing.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "An example from CCGBank. The upper curves represent a deep dependency graph and the bottom curves represent a traditional dependency tree.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "\u03c3, \u03b2, and \u0100 of two configurations c 1 and c 2 . In the left graphic, L(\u03c3 c 1 ) = [[6], [5], [5, 6], [7]]. Because [5, 6] and [5] precede [6], we apply two SWAPs and then two SHIFTs, obtaining the right graphic.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "combined label, return a pair of left label and right label 1: procedure DECODELABEL(label) 2: if label.startswith?(\"left\") then 3: return {label[4 :], nil} 4: else if label.startswith?(\"right\") then 5: return {nil, label[5 :]} 6: else 7: return {label[4 : label.index( | )], label[(label.index( | ) + 1) :]}", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "num": null, |
| "text": "Oracle generation for the online re-ordering system 1: procedure EXTRACTONEORACLE(c, A gold , label) 2:", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "text": "Oracle generation for the two-stack-based system 1: procedure EXTRACTONEORACLE(c, A gold , label) 2:", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "num": null, |
| "text": "c = (\u03c3|i, \u03c3 s |i s , j|\u03b2, A) then", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF6": { |
| "num": null, |
| "text": "Format 2: MRS-derived dependencies, from DeepBank HPSG annotations. Format 3: Predicate-argument structures, from Enju HPSG annotations. Format 4: Functor-argument structures, from CCGBank. (Example sentence in all formats: A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.)", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF7": { |
| "num": null, |
| "text": "Dependency representations in (a) PropBank, (b) DeepBank, (c) Enju HPSGBank, and (d) CCGBank formats.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF9": { |
| "num": null, |
| "text": "Real running time relative to beam size. Tested using DeepBank-trained models.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF11": { |
| "num": null, |
| "text": "Figure 10 Labeled parsing F-scores of different transition systems with and without transition combination. \"Standard\" denotes the standard systems, which do not combine an ARC transition with its following transition.", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF13": { |
| "num": null, |
| "text": "F-scores with respect to different tree parsing techniques. Results shown here are from experiments for English and Chinese CCG parsing. (Legend: CCG(en), CCG(en/rev), CCG(cn), CCG(cn/rev).)", |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table/>", |
| "text": "Oracle generation for the THMM system 1: procedure EXTRACTONEORACLE(c, A gold , label)", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "html": null, |
| "content": "<table><tr><td colspan=\"3\">Language Formalism Data</td><td>Training</td><td>Test</td></tr><tr><td>English</td><td>CCG</td><td>CCGBank</td><td>39,604</td><td>2,407</td></tr><tr><td/><td>HPSG</td><td>DeepBank</td><td>34,003</td><td>1,348</td></tr><tr><td/><td>HPSG</td><td>EnjuBank</td><td>34,003</td><td>1,348</td></tr><tr><td>Chinese</td><td>CCG</td><td>CCGBank</td><td>22,339</td><td>2,813</td></tr><tr><td/><td>LFG</td><td>GRBank</td><td>22,277</td><td>2,557</td></tr></table>", |
| "text": "Data sets for experiments. Columns \"Training\" and \"Test\" present the number of sentences in training and test sets, respectively.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF12": { |
| "html": null, |
| "content": "<table><tr><td/><td colspan=\"2\">UAS(Tr) UAS(Gr)</td></tr><tr><td>English</td><td>93.48%</td><td>93.47%</td></tr><tr><td colspan=\"2\">Chinese 80.97%</td><td>80.81%</td></tr><tr><td colspan=\"3\">5.7 Comparison with Other Parsers</td></tr><tr><td colspan=\"3\">5.7.1 Comparison with Grammar-Based Parsers. We compare our parser with several</td></tr><tr><td colspan=\"3\">representative Treebank-guided, grammar-based parsers that achieve state-of-the-art</td></tr><tr><td colspan=\"3\">performance for CCG and HPSG analysis. The grammar-based parsers selected represent</td></tr><tr><td colspan=\"3\">two different architectures.</td></tr></table>", |
| "text": "Accuracy of preprocessing on the development data for CCG analysis. Tr and Gr, respectively, denote transition-based and graph-based tree parsers.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF13": { |
| "html": null, |
| "content": "<table><tr><td/><td/><td/><td>English</td><td/><td/><td/></tr><tr><td>DeepBank</td><td>UP</td><td>UR</td><td>UF</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>S T S rev T S S S rev S S 2S S rev 2S</td><td>87.99% 88.94% 87.76% 88.72% 87.92% 89.04%</td><td>87.64% 88.98% 87.45% 88.65% 87.60% 88.85%</td><td>87.81 88.96 87.60 88.69 87.76 88.95</td><td>85.95% 87.07% 85.83% 86.88% 86.03% 87.15%</td><td>85.61% 87.11% 85.52% 86.82% 85.72% 86.96%</td><td>85.78 87.09 85.67 86.85 85.87 87.05</td></tr><tr><td>Combined</td><td>88.54%</td><td>90.25%</td><td>89.39</td><td>86.65%</td><td>88.32%</td><td>87.48</td></tr><tr><td>EnjuBank</td><td>UP</td><td>UR</td><td>UF</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>S T S rev T S S S rev S S 2S S rev 2S</td><td>91.88% 92.82% 91.76% 92.66% 91.85% 92.92%</td><td>91.45% 92.83% 91.39% 92.65% 91.54% 92.83%</td><td>91.66 92.83 91.58 92.65 91.70 92.87</td><td>90.60% 91.61% 90.50% 91.45% 90.63% 91.77%</td><td>90.17% 91.61% 90.14% 91.44% 90.33% 91.68%</td><td>90.38 91.61 90.32 91.45 90.48 91.73</td></tr><tr><td>Combined</td><td>92.47%</td><td>93.52%</td><td>92.99</td><td>91.34%</td><td>92.38%</td><td>91.86</td></tr><tr><td>CCGBank</td><td>UP</td><td>UR</td><td>UF</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>S T S rev T S S S rev S S 2S S rev 2S</td><td>92.15% 92.46% 91.91% 92.34% 91.86% 92.53%</td><td>91.05% 92.27% 91.18% 92.43% 91.13% 92.41%</td><td>91.60 92.37 91.54 92.39 91.49 92.47</td><td>88.20% 88.78% 87.97% 88.67% 87.92% 88.85%</td><td>87.15% 88.61% 87.28% 88.76% 87.22% 88.73%</td><td>87.67 88.70 87.62 88.72 87.57 88.79</td></tr><tr><td>Combined</td><td>92.38%</td><td>93.20%</td><td>92.79</td><td>88.92%</td><td>89.71%</td><td>89.31</td></tr><tr><td/><td/><td/><td>Chinese</td><td/><td/><td/></tr><tr><td>CCGBank</td><td>UP</td><td>UR</td><td>UF</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>S T S rev T S S S rev S S 2S S rev 2S</td><td>87.44% 87.11% 86.76% 86.46% 87.09% 86.57%</td><td>86.41% 87.04% 86.52% 87.54% 86.91% 87.69%</td><td>86.93 87.07 86.64 87.00 87.00 87.13</td><td>83.44% 83.24% 82.85% 82.69% 83.21% 82.75%</td><td>82.45% 83.17% 82.63% 83.72% 83.03% 83.82%</td><td>82.94 83.21 82.74 83.20 83.12 83.28</td></tr><tr><td>Combined</td><td>87.27%</td><td>89.00%</td><td>88.12</td><td>83.57%</td><td>85.23%</td><td>84.39</td></tr></table>", |
| "text": "Performance of base and combined models on the test set of the DeepBank/EnjuBank data and on the development set of the English and Chinese CCGBank data. Features extracted from pseudo trees are utilized for disambiguation.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF14": { |
| "html": null, |
| "content": "<table><tr><td colspan=\"2\">DeepBank</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>Our system</td><td>S rev 2S</td><td>86.28%</td><td>86.26%</td><td>86.27</td></tr><tr><td/><td>Combined</td><td>86.46%</td><td>88.40%</td><td>87.42</td></tr><tr><td/><td>S rev 2S +Pseudo Tree</td><td>87.15%</td><td>86.96%</td><td>87.05</td></tr><tr><td/><td>Combined+Pseudo Tree</td><td>86.65%</td><td>88.32%</td><td>87.48</td></tr><tr><td>Factorization (Turbo)</td><td>(Martins and Almeida 2014)</td><td>88.82%</td><td>87.35%</td><td>88.08</td></tr><tr><td colspan=\"2\">EnjuBank</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>Our system</td><td>S rev 2S</td><td>90.88%</td><td>90.80%</td><td>90.84</td></tr><tr><td/><td>Combined</td><td>90.15%</td><td>92.43%</td><td>91.28</td></tr><tr><td/><td>S rev 2S +Pseudo Tree</td><td>91.77%</td><td>91.68%</td><td>91.73</td></tr><tr><td/><td>Combined+Pseudo Tree</td><td>91.34%</td><td>92.38%</td><td>91.86</td></tr><tr><td>Chart parsing (Enju)</td><td>(Oepen et al. 2014)</td><td>92.09%</td><td>92.02%</td><td>92.06</td></tr><tr><td>Factorization (Turbo)</td><td>(Martins and Almeida 2014)</td><td>91.95%</td><td>89.92%</td><td>90.93</td></tr><tr><td colspan=\"2\">English CCGBank</td><td>UP</td><td>UR</td><td>UF</td></tr><tr><td>Our system</td><td>S rev 2S</td><td>91.84%</td><td>91.75%</td><td>91.80</td></tr><tr><td/><td>Combined</td><td>92.06%</td><td>93.14%</td><td>92.60</td></tr><tr><td/><td>S rev 2S +Pseudo Tree</td><td>92.49%</td><td>92.30%</td><td>92.40</td></tr><tr><td/><td>Combined+Pseudo Tree</td><td>92.52%</td><td>93.13%</td><td>92.82</td></tr><tr><td>Shift-reduce</td><td>(Xu, Clark, and Zhang 2014)</td><td>93.15%</td><td>91.06%</td><td>92.09</td></tr><tr><td>Chart parsing</td><td>(Auli and Lopez 2011b)</td><td>93.08%</td><td>92.44%</td><td>92.76</td></tr><tr><td>Factorization</td><td>(Du, Sun, and Wan 2015)</td><td>93.03%</td><td>92.03%</td><td>92.53</td></tr><tr><td colspan=\"2\">Chinese GRBank</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>Our system</td><td>S rev 2S</td><td>82.28%</td><td>83.11%</td><td>82.69</td></tr><tr><td/><td>Combined</td><td>84.92%</td><td>85.28%</td><td>85.10</td></tr><tr><td>Transition-based</td><td>(Sun et al. 2014)</td><td>83.93%</td><td>79.82%</td><td>81.82</td></tr><tr><td colspan=\"2\">Chinese CCGBank</td><td>UP</td><td>UR</td><td>UF</td></tr><tr><td>Our system</td><td>S rev 2S</td><td>85.07%</td><td>86.02%</td><td>85.54</td></tr><tr><td/><td>Combined</td><td>86.35%</td><td>88.85%</td><td>87.58</td></tr><tr><td/><td>S rev 2S +Pseudo Tree</td><td>86.65%</td><td>87.34%</td><td>86.99</td></tr><tr><td/><td>Combined+Pseudo Tree</td><td>87.14%</td><td>88.60%</td><td>87.86</td></tr></table>", |
| "text": ", are included.", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |