| { |
| "paper_id": "K17-1005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:08:06.833519Z" |
| }, |
| "title": "Parsing for Grammatical Relations via Graph Merging", |
| "authors": [ |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "The MOE Key Laboratory of Computational Linguistics", |
| "institution": "Peking University", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yantao", |
| "middle": [], |
| "last": "Du", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "The MOE Key Laboratory of Computational Linguistics", |
| "institution": "Peking University", |
| "location": {} |
| }, |
| "email": "duyantao@pku.edu.cn" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "The MOE Key Laboratory of Computational Linguistics", |
| "institution": "Peking University", |
| "location": {} |
| }, |
| "email": "wanxiaojun@pku.edu.cn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper is concerned with building deep grammatical relation (GR) analysis using data-driven approach. To deal with this problem, we propose graph merging, a new perspective, for building flexible dependency graphs: Constructing complex graphs via constructing simple subgraphs. We discuss two key problems in this perspective: (1) how to decompose a complex graph into simple subgraphs, and (2) how to combine subgraphs into a coherent complex graph. Experiments demonstrate the effectiveness of graph merging. Our parser reaches state-of-the-art performance and is significantly better than two transition-based parsers.", |
| "pdf_parse": { |
| "paper_id": "K17-1005", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper is concerned with building deep grammatical relation (GR) analysis using data-driven approach. To deal with this problem, we propose graph merging, a new perspective, for building flexible dependency graphs: Constructing complex graphs via constructing simple subgraphs. We discuss two key problems in this perspective: (1) how to decompose a complex graph into simple subgraphs, and (2) how to combine subgraphs into a coherent complex graph. Experiments demonstrate the effectiveness of graph merging. Our parser reaches state-of-the-art performance and is significantly better than two transition-based parsers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Grammatical relations (GRs) represent functional relationships between language units in a sentence. Marking not only local but also a wide variety of long distance dependencies, GRs encode in-depth information of natural language sentences. Traditionally, GRs are generated as a byproduct by grammar-guided parsers, e.g. RASP (Carroll and Briscoe, 2002) , C&C (Clark and Curran, 2007b) and Enju (Miyao et al., 2007) . Very recently, by representing GR analysis using general directed dependency graphs, Sun et al. (2014) and Zhang et al. (2016) showed that considerably good GR structures can be directly obtained using data-driven, transition-based parsing techniques. We follow their encouraging work and study the data-driven approach for producing GR analyses.", |
| "cite_spans": [ |
| { |
| "start": 327, |
| "end": 354, |
| "text": "(Carroll and Briscoe, 2002)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 361, |
| "end": 386, |
| "text": "(Clark and Curran, 2007b)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 396, |
| "end": 416, |
| "text": "(Miyao et al., 2007)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 504, |
| "end": 521, |
| "text": "Sun et al. (2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 526, |
| "end": 545, |
| "text": "Zhang et al. (2016)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The key challenge of building GR graphs is due to their flexibility. Different from surface syntax, the GR graphs are not constrained to trees, which is a fundamental consideration in design-ing parsing algorithms. To deal with this problem, we propose graph merging, a new perspective, for building flexible representations. The basic idea is to decompose a GR graph into several subgraphs, each of which captures most but not the complete information. On the one hand, each subgraph is simple enough to allow efficient construction. On the other hand, the combination of all subgraphs enables whole target GR structure to be produced.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There are two major problems in the graph merging perspective. First, how to decompose a complex graph into simple subgraphs in a principled way? To deal with this problem, we considered structure-specific properties of the syntactically-motivated GR graphs. One key property is their reachability: In a given GR graph, almost every node is reachable from a same and unique root. If a node is not reachable, it is disconnected from other nodes. This property ensures a GR graph to be successfully decomposed into limited number of forests, which in turn can be accurately and efficiently built via tree parsing. We model the graph decomposition problem as an optimization problem and employ Lagrangian Relaxation for solutions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Second, how to merge subgraphs into one coherent structure in a principled way? The problem of finding an optimal graph that consistently combines the subgraphs obtained through individual models is non-trivial. We treat this problem as a combinatory optimization problem and also employ Lagrangian Relaxation to solve the problem. In particular, the parsing phase consists of two steps. First, graph-based models are applied to assign scores to individual arcs and various tuples of arcs. Then, a Lagrangian Relaxation-based joint decoder is applied to efficiently produces globally optimal GR graphs according to all graph-based models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We conduct experiments on Chinese GRBank Figure 1: An example: Pudong recently enacted regulatory documents involving the economic field. (Sun et al., 2014) . Though our parser does not use any phrase-structure information, it produces high-quality GR analysis with respect to dependency matching. Our parsers obtain a labeled fscore of 84.57 on the test set, resulting in an error reduction of 15.13% over Sun et al. (2014) 's single system. and 10.86% over Zhang et al. (2016) 's system. The remarkable parsing result demonstrates the effectiveness of the graph merging framework. This framework can be adopted to other types of flexible representations, e.g. semantic dependency graphs (Oepen et al., 2014 (Oepen et al., , 2015 and abstract meaning representations (Banarescu et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 156, |
| "text": "(Sun et al., 2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 407, |
| "end": 424, |
| "text": "Sun et al. (2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 459, |
| "end": 478, |
| "text": "Zhang et al. (2016)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 689, |
| "end": 708, |
| "text": "(Oepen et al., 2014", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 709, |
| "end": 730, |
| "text": "(Oepen et al., , 2015", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 768, |
| "end": 792, |
| "text": "(Banarescu et al., 2013)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we focus on building GR analysis for Mandarin Chinese. Mandarin is an analytic language that lacks inflectional morphology (almost) entirely and utilizes highly configurational ways to convey syntactic and semantic information. This analytic nature allows to represent all GRs as bilexical dependencies. Sun et al. (2014) showed that analysis for a variety of complicated linguistic phenomena, e.g. coordination, raising/control constructions, extraction, topicalization, can be conveniently encoded with directed graphs. Moreover, such deep syntactic dependency graphs can be effectively derived from Chinese TreeBank (Xue et al., 2005) with very high quality. Figure 1 is an example. In this graph, \"subj*ldd\" between the word \"\u6d89\u53ca/involve\" and the word \"\u6587 \u4ef6/documents\" represents a longdistance subject-predicate relation. The arguments and adjuncts of the coordinated verbs, namely \"\u9881 \u5e03/issue\" and \"\u5b9e\u884c/practice,\" are separately yet distributively linked to the two heads. By encoding GRs as directed graphs over words, Sun et al. (2014) and Zhang et al. (2016) showed that the data-driven, transition-based ap-proach can be applied to build Chinese GR structures with very promising results. This architecture is complementary to the traditional approach to English GR analysis, which leverages grammarguided parsing under deep formalisms, such as LFG (Kaplan et al., 2004) , CCG (Clark and Curran, 2007a) and HPSG (Miyao et al., 2007) . We follow Sun et al.'s and Zhang et al.'s encouraging work and study the discriminative, factorization models for obtaining GR analysis.", |
| "cite_spans": [ |
| { |
| "start": 319, |
| "end": 336, |
| "text": "Sun et al. (2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 634, |
| "end": 652, |
| "text": "(Xue et al., 2005)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1037, |
| "end": 1054, |
| "text": "Sun et al. (2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1059, |
| "end": 1078, |
| "text": "Zhang et al. (2016)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1370, |
| "end": 1391, |
| "text": "(Kaplan et al., 2004)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1398, |
| "end": 1423, |
| "text": "(Clark and Curran, 2007a)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1433, |
| "end": 1453, |
| "text": "(Miyao et al., 2007)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 677, |
| "end": 685, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The key idea of this work is constructing a complex structure via constructing simple partial structures. Each partial structure is simple in the sense that it allows efficient construction. For instance, projective trees, 1-endpoint-corssing trees, non-crossing dependency graphs and 1-endpointcrossing, pagenumber-2 graphs can be taken as simple structures, given that low-degree polynomial time parsing algorithms exist (Eisner, 1996; Pitler et al., 2013; Kuhlmann and Jonsson, 2015; . To construct each partial structure, we can employ mature parsing techniques. To get the final target output, we also require the total of all partial structures enables whole target structure to be produced. In this paper, we exemplify the above idea by designing a new parser for obtaining GR graphs. Take the GR graph in Figure 1 for example. It can be decomposed into two tree-like subgraphs, shown in Figure 2 . If we can parse the sentence into subgraphs and combine them in a principled way, we get the original GR graph.", |
| "cite_spans": [ |
| { |
| "start": 423, |
| "end": 437, |
| "text": "(Eisner, 1996;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 438, |
| "end": 458, |
| "text": "Pitler et al., 2013;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 459, |
| "end": 486, |
| "text": "Kuhlmann and Jonsson, 2015;", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 813, |
| "end": 821, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 895, |
| "end": 903, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Idea", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Under this perspective, we need to develop a principled method to decompose a complex structure into simple sturctures, which allows us to generate data to train simple solvers. We also need to develop a principled method to integrate partial structures, which allows us to produce coherent Figure 2 : A graph decomposition for the GR graph in Figure 1 . The two subgraphs are shown on two sides of the sentence respectively. The subgraph on the upper side of the sentence is exactly a tree, while the one on the lower side is slightly different. The edge from the word \"\u6587\u4ef6/document\" to \"\u6d89 \u53ca/involve\" is tagged \"[inverse]\" to indicate that the direction of the edge in the subgraph is in fact opposite to that in the original graph.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 291, |
| "end": 299, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 344, |
| "end": 352, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Idea", |
| "sec_num": "3" |
| }, |
| { |
| "text": "structures as outputs. We are going to demonstrate the techniques we use to solve these two problems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Idea", |
| "sec_num": "3" |
| }, |
| { |
| "text": "4 Decomposing GR Graphs", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Idea", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Given a sentence s = w 1 w 2 \u2022 \u2022 \u2022 w n of length n, we use a vector y of length n 2 to denote a graph on it. We use indices i and j to index the elements in the vector, y(i, j) \u2208 {0, 1}, denoting whether there is an arc from w i to w j (1 \u2264 i, j \u2264 n). Given a graph y, we hope to find m subgraphs y 1 , ..., y m , each of which belongs to a specific class of graphs G k (k = 1, 2, \u2022 \u2022 \u2022 , m). Each class should allow efficient construction. For example, we may need a subgraph to be a tree or a noncrossing dependency graph. The combination of all y k gives enough information to construct y. Furthermore, the graph decomposition procedure is utilized to generate training data for building sub-models. Therefore, we hope each subgraph y k is informative enough to train a good disambiguation model. To do so, for each y k , we define a score function s k that indicates the \"goodness\" of y k . Integrating all ideas, we can formalize graph decomposition as an optimization problem, max.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Decomposition as Optimization", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "k s k (y k ) s.t. y i belongs to G i k y k (i, j) \u2265 y(i, j), \u2200i, j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Decomposition as Optimization", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The last condition in this optimization problem en-sures that all edges in y appear at least in one subgraph.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Decomposition as Optimization", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For a specific graph decomposition task, we should define good score functions s k and graph classes G k according to key properties of the target structure y.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Decomposition as Optimization", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "One key property of GR graphs is their reachability: Every node is either reachable from a unique root or by itself an independent connected component. This property allows a GR graph to be decomposed into limited number of tree-like subgraphs. By tree-like we mean if we treat a graph on a sentence as undirected, it is a tree, or it is a subgraph of some tree on the sentence. The advantage of tree-like subgraphs is that they can be effectively built by adapting data-driven tree parsing techniques. Take the sentence in Figure 1 for example. For every word, there is at least one path link the virtual root and this word. Furthermore, we can decompose the graph into two tree-like subgraphs, as shown in Figure 2 . In this decomposition, one subgraph is exactly a tree, and the other is very close to a tree.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 524, |
| "end": 532, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 708, |
| "end": 716, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Decomposing GR Graphs into Tree-like Subgraphs", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We restrict the number of subgraphs to 3. The intuition is that we use one tree to capture long distance information and the other two to capture coordination information. 1 In other words, we decompose each given graph y into three tree-like subgraphs g 1 , g 2 and g 3 . The goal is to let g 1 , g 2 and g 3 carry important information of the graph as well as cover all edges in y. The optimization problem can be written as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposing GR Graphs into Tree-like Subgraphs", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "max. s 1 (g 1 ) + s 2 (g 2 ) + s 3 (g 3 ) s.t. g 1 , g 2 , g 3 are tree-like g 1 (i, j) + g 2 (i, j) + g 3 (i, j) \u2265 y(i, j), \u2200i, j 4.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decomposing GR Graphs into Tree-like Subgraphs", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We score a subgraph in a first order arc-factored way, which first scores the edges separately and then adds up the scores. Formally, the score function is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring a Subgraph", |
| "sec_num": "1" |
| }, |
| { |
| "text": "s k (g) = \u03c9 k (i, j)g k (i, j) (k = 1, 2, 3) where \u03c9 k (i, j)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring a Subgraph", |
| "sec_num": "1" |
| }, |
| { |
| "text": "is the score of the edge from i to j. Under this score function, we can use the Maximum Spanning Tree (MST) algorithm (Chu and Liu, 1965; Edmonds, 1967; Eisner, 1996) to decode the tree-like subgraph with the highest score.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 137, |
| "text": "(Chu and Liu, 1965;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 138, |
| "end": 152, |
| "text": "Edmonds, 1967;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 153, |
| "end": 166, |
| "text": "Eisner, 1996)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring a Subgraph", |
| "sec_num": "1" |
| }, |
| { |
| "text": "After we define the score function, extracting a subgraph from a GR graph works like this: We first assign heuristic weights \u03c9 k (i, j) (1 \u2264 i, j \u2264 n) to the potential edges between all the pairs of words, then compute a best projective tree g k using the Eisner's Algorithm:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring a Subgraph", |
| "sec_num": "1" |
| }, |
| { |
| "text": "g k = arg max g s k (g) = arg max g \u03c9 k (i, j)g(i, j).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring a Subgraph", |
| "sec_num": "1" |
| }, |
| { |
| "text": "g k is not exactly a subgraph of y, because there may be some edges in the tree but not in the graph. To guarantee we get a subgraph of the original graph, we add labels to the edges in trees to encode necessary information. We label g k (i, j) with the original label, if y(i, j) = 1; with the original label appended by \"\u223cR\" if y(j, i) = 1; with \"None\" else. With this labeling, we can have a function t2g to transform the extracted trees into tree-like graphs. t2g(g k ) is not necessary the same as the original graph y, but must be a subgraph of it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scoring a Subgraph", |
| "sec_num": "1" |
| }, |
| { |
| "text": "With different weight assignments, we can extract different trees from a graph, obtaining different subgraphs. We devise three variations of weight assignment: \u03c9 1 , \u03c9 2 , and \u03c9 3 . Each \u03c9 k (k is 1,2 or 3) consists of two parts. One is shared by all, denoted by S, and the other is different from each other, denoted by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "V . Formally, \u03c9 k (i, j) = S(i, j) + V k (i, j) (k = 1, 2, 3 and 1 \u2264 i, j \u2264 n).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Given a graph y, S is defined as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "S(i, j) = S 1 (i, j) + S 2 (i, j) + S 3 (i, j) + S 4 (i, j), where S 1 (i, j) = c 1 if y(i, j) = 1 or y(j, i) = 1 0 else S 2 (i, j) = c 2 if y(i, j) = 1 0 else S 3 (i, j) = c 3 (n \u2212 |i \u2212 j|) S 4 (i, j) = c 4 (n \u2212 l p (i, j))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "In the definitions above, c 1 , c 2 , c 3 and c 4 are coefficients, satisfying c 1 c 2 c 3 , and l p is a function of i and j. l p (i, j) is the length of shortest path from i to j that either i is a child of an ancestor of j or j is a child of an ancestor of i. That is to say, the paths are in the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "i \u2190 n 1 \u2190 \u2022 \u2022 \u2022 \u2190 n k \u2192 j or i \u2190 n 1 \u2192 \u2022 \u2022 \u2022 \u2192 n k \u2192 j. If no such path exits, then l p (i, j) = n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "The intuition behind the design is illustrated below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "S 1 indicates whether there is an edge between i and j, and we want it to matter mostly;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "S 2 indicates whether the edge is from i to j, and we want the edge with correct direction to be selected more likely;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "S 3 indicates the distance between i and j, and we like the edge with short distance because it is easier to predict;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "S 4 indicates the length of certain type of path between i and j that reflects c-commanding relationships, and the coefficient remains to be tuned.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "We want the score V to capture different information of the GR graph. In GR graphs, we have an additional information (as denoted as \"*ldd\" in Figure 1 ) for long distance dependency edges. Moreover, we notice that conjunction is another important structure, and they can be derived from the GR graph. Assume that we tag the edges relating to conjunctions with \"*cjt.\" The three variation scores, i.e. V 1 , V 2 and V 3 , reflect long distance and the conjunction information in different ways. V 1 . First for edges y(i, j) whose label is tagged with *ldd, we assign V 1 (i, j) = d. d is a coefficient to be tuned on validation data.. Whenever we come across a parent p with a set of conjunction children cjt 1 , cjt 2 , \u2022 \u2022 \u2022 , cjt n , we find the rightmost child gc 1r of the leftmost child in conjunction cjt 1 , and add d to each V 1 (p, cjt 1 ) and V 1 (cjt 1 , gc 1r ). The edges in conjunction that are added additional d's to are shown in blue in Figure 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 143, |
| "end": 151, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 956, |
| "end": 965, |
| "text": "Figure 3", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "w p ... w c1 ... w gc2 ... w gc1 ... w c2 ... w l X*cjt X*cjt X*cjt X*ldd", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "V 2 . Different from V 1 , for edges y(i, j)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "whose label is tagged with *ldd, we assign an V 2 (j, i) = d. Then for each conjunction structure with a parent p and a set of conjunction children cjt 1 , cjt 2 , \u2022 \u2022 \u2022 , cjt n , we find the leftmost child gc nl of the rightmost child in conjunction cjt n , and add d to each", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "V 2 (p, cjt n ) and V 2 (cjt n , gc nl ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "The concerned edges in conjunction are shown in green in Figure 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 57, |
| "end": 65, |
| "text": "Figure 3", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "V 3 . We do not assign d's to the edges with tag *ldd. For each conjunction with parent p and conjunction children", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "cjt 1 , cjt 2 , \u2022 \u2022 \u2022 , cjt n , we add an d to V 3 (p, cjt 1 ), V 3 (p, cjt 2 ), \u2022 \u2022 \u2022 , and V 3 (p, cjt n ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Three Variations of Scoring", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "As soon as we get three trees g 1 , g 2 and g 3 , we get three subgraphs t2g(g 1 ), t2g(g 2 ) and t2g(g 3 ). As is stated above, we want every edge in a graph y to be covered by at least one subgraph, and we want to maximize the sum of the edge weights of all trees. Note that the inequality in the constrained optimization problem above can be replaced by a maximization, written as max.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "s 1 (g 1 ) + s 2 (g 2 ) + s 3 (g 3 ) s.t. g 1 , g 2 , g 3 are trees max{t2g(g 1 )(i, j), t2g(g 2 )(i, j), t2g(g 3 )(i, j)} = y(i, j), \u2200i, j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where s k (g k ) = \u03c9 k (i, j)g k (i, j) Let g m = max{t2g(g 1 ), t2g(g 2 ), t2g(g 3 )}, and by max{g 1 , g 2 , g 3 } we mean to take the maximum of three vectors pointwisely. The Lagrangian Algorithm 1: The Tree Extraction Algorithm Initialization: set u (0) to 0 for k = 0 to K do", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "g 1 \u2190 arg max g 1 s 1 (g 1 ) + u (k) g 1 g 2 \u2190 arg max g 2 s 2 (g 2 ) + u (k) g 2 g 3 \u2190 arg max g 3 s 3 (g 3 ) + u (k) g 3 if max{g 1 , g 2 , g 3 } = y then return g 1 , g 2 , g 3 u (k+1) \u2190 u (k) \u2212 \u03b1 (k) (max{g 1 , g 2 , g 3 } \u2212 y) return g 1 , g 2 , g 3 of the problem is L(g 1 , g 2 , g 3 ; u) = s 1 (g 1 ) + s 2 (g 2 ) + s 3 (g 3 ) +u (g m \u2212 y)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where u is the Lagrangian multiplier.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Then the dual is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "L(u) = max g 1 ,g 2 ,g 3 L(g 1 , g 2 , g 3 ; u) = max g 1 (s 1 (g 1 ) + 1 3 u g m ) + max g 2 (s 2 (g 2 ) + 1 3 u g m ) + max g 3 (s 3 (g 3 ) + 1 3 u g m ) \u2212 u y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "According to the duality principle, max g 1 ,g 2 ,g 3 ;u min u L(g 1 , g 2 , g 3 ) = min u L(u), so we can find the optimal solution for the problem if we can find min u L(u). However it is very hard to compute L(u), not to mention min u L(u). The challenge is that g m in the three maximizations must be consistent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The idea is to separate the overall maximization into three maximization problems by approximation. We observe that g 1 , g 2 , and g 3 are very close to g m , so we can approximate L(u) by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "L (u) = max g 1 ,g 2 ,g 3 L(g 1 , g 2 , g 3 ; u) = max g 1 (s 1 (g 1 ) + 1 3 u g 1 ) + max g 2 (s 2 (g 2 ) + 1 3 u g 2 ) + max g 3 (s 3 (g 3 ) + 1 3 u g 3 ) \u2212 u y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In this case, the three maximization problem can be decoded separately, and we can try to find the optimal u using the subgradient method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Algorithm 1 is our tree decomposition algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In the algorithm, we use subgradient method to find min u L (u) iteratively. In each iteration, we first compute g 1 , g 2 , and g 3 to find L (u), then update u until the graph is covered by the subgraphs. The coefficient 1 3 's can be merged into the steps \u03b1 (k) , so we omit them. The three separate problems g k \u2190 arg max g k s k (g k ) + u g k (k = 1, 2, 3) can be solved using Eisner's algorithm, similar to solving arg max g k s k (g k ). Intuitively, the Lagrangian multiplier u in our Algorithm can be regarded as additional weights for the score function. The update of u is to increase weights to the edges that are not covered by any tree-like subgraph, so that it will be more likely for them to be selected in the next iteration.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "4.4" |
| }, |
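The decomposition loop above can be sketched in a few lines. This is a toy illustration only: the real subproblems are solved with Eisner's algorithm over all projective trees, whereas here each class simply picks the best tree from a small hypothetical candidate list (`candidate_trees`, `scores`, and the step size `alpha` are illustrative assumptions, not part of the paper's implementation).

```python
# Toy sketch of Algorithm 1's outer loop. In the paper each subproblem
# g_k <- argmax s_k(g_k) + u.g_k is solved with Eisner's algorithm; here,
# purely for illustration, each class picks the best tree from a small
# candidate list. Edges are (head, dependent) pairs; trees are frozensets.
def decompose(edges, candidate_trees, scores, steps=50, alpha=1.0):
    u = {e: 0.0 for e in edges}          # one Lagrangian multiplier per edge
    picked = []
    for _ in range(steps):
        picked = [
            max(cands, key=lambda t: scores[(k, t)]
                + sum(u[e] for e in t if e in u))
            for k, cands in enumerate(candidate_trees)
        ]
        uncovered = edges - (picked[0] | picked[1] | picked[2])
        if not uncovered:                # the three trees cover the graph
            return picked
        for e in uncovered:              # make uncovered edges more attractive
            u[e] += alpha
    return picked
```

The multiplier update plays exactly the role described in the text: edges left uncovered gain weight, so some subgraph is pushed to include them in the next iteration.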
| { |
| "text": "The extraction algorithm gives three classes of trees for each graph. We apply the algorithm to the graph training set, and get three training tree sets. After that, we can train three parsing models with the three tree sets. In this work, the parser we use to train models and parse trees is Mate (Bohnet, 2010) , a second-order graph-based dependency parser.", |
| "cite_spans": [ |
| { |
| "start": 298, |
| "end": 312, |
| "text": "(Bohnet, 2010)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Merging", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Let the scores the three models use be f 1 , f 2 , f 3 respectively. Then the parsers can find trees with highest scores for a sentence. That is solving the following optimization problems: arg max g 1 f 1 (g 1 ), arg max g 2 f 2 (g 2 ) and arg max g 2 f 3 (g 3 ). We can parse a given sentence with the three models, obtain three trees, and then transform them into subgraphs, and combine them together to obtain the graph parse of the sentence by putting all the edges in the three subgraphs together. That is to say, we obtain the graph y = max{t2g(g 1 ), t2g(g 2 ), t2g(g 3 )}. We call this process simple merging.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Merging", |
| "sec_num": "5" |
| }, |
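Simple merging can be sketched directly. The `t2g` below is a hypothetical stand-in (edges carry an explicit reversed-flag); the paper's actual transform depends on how the trees were extracted from the graph.

```python
# Sketch of simple merging: t2g maps a tree back to graph edges (here a
# hypothetical stand-in that just flips edges marked as reversed), and the
# merged graph is the pointwise maximum of the three subgraphs, i.e. the
# union of their edge sets.
def t2g(tree_edges):
    return {(d, h) if rev else (h, d) for (h, d, rev) in tree_edges}

def simple_merge(t1, t2, t3):
    return t2g(t1) | t2g(t2) | t2g(t3)
```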
| { |
| "text": "However, the simple merging process omits some consistency that the three trees extracted from the same graph achieve, thus losing some important information. The information is that when we decompose a graph into three subgraphs, some edges tend to appear in certain classes of subgraphs at the same time. We want to retain the co-occurrence relationship of the edges when doing parsing and merging. To retain the hidden consistency, we must do joint decoding instead of decode the three models separately.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Merging", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In order to capture the hidden consistency, we add consistency tags to the labels of the extracted trees to represent the co-occurrence. The basic idea is to use additional tag to encode the relationship of the edges in the three trees. The tag set is T = {0, 1, 2, 3, 4, 5, 6} . Given a tag t \u2208 T , t&1, t&2, t&4 denote whether the edge is contained in g 1 , g 2 , g 3 respectively, where the operator \"&\" is the bitwise AND operator. Specially, since we do not need to consider first bit of the tags of edges in g 1 , the second bit in g 2 , and the third bit in g 3 , we always assign 0 to them. For example, if y(i, j) = 1, g 1 (i, j) = 1, g 2 (j, i) = 1, g 3 (i, j) = 0 and t 3 (j, i) = 0, we tag g 1 (i, j) as 2 and g 2 (j, i) as 1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 254, |
| "end": 278, |
| "text": "= {0, 1, 2, 3, 4, 5, 6}", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Capturing the Hidden Consistency", |
| "sec_num": "5.1" |
| }, |
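The bitwise tagging scheme above can be written down in one small function (a minimal sketch of the scheme, not the paper's code):

```python
# Consistency tags from Section 5.1: bits 1, 2, 4 of a tag record whether
# the edge also occurs in g_1, g_2, g_3; the bit of the tree's own class k
# is always forced to 0, yielding the tag set T = {0, ..., 6}.
def consistency_tag(k, in_g1, in_g2, in_g3):
    tag = (1 if in_g1 else 0) | (2 if in_g2 else 0) | (4 if in_g3 else 0)
    return tag & ~(1 << (k - 1))   # zero out class k's own bit
```

For the running example, the edge present in g_1 and (reversed) in g_2 but absent from g_3 gets tag 2 in g_1 and tag 1 in g_2.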
| { |
| "text": "When it comes to parsing, we also get labels with consistency information. Our goal is to guarantee the tags in edges of the parse trees for a same sentence are consistent while graph merging. Since the consistency tags emerge, for convenience we index the graph and tree vector representation using three indices. g(i, j, t) denotes whether there is an edge from word w i to word w j with tag t in graph g.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Capturing the Hidden Consistency", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The joint decoding problem can be written as a constrained optimization problem as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Capturing the Hidden Consistency", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "max. f 1 (g 1 ) + f 2 (g 2 ) + f 3 (g 3 ) s.t. g 1 (i, j, 2) + g 1 (i, j, 6) \u2264 t g 2 (i, j, t) g 1 (i, j, 4) + g 1 (i, j, 6) \u2264 t g 3 (i, j, t) g 2 (i, j, 1) + g 2 (i, j, 5) \u2264 t g 1 (i, j, t) g 2 (i, j, 4) + g 2 (i, j, 5) \u2264 t g 3 (i, j, t) g 3 (i, j, 1) + g 3 (i, j, 3) \u2264 t g 1 (i, j, t) g 3 (i, j, 2) + g 3 (i, j, 3) \u2264 t g 2 (i, j, t) \u2200i, j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Capturing the Hidden Consistency", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "where g k = t2g(g k )(k = 1, 2, 3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Capturing the Hidden Consistency", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The inequality constraints in the problem are the consistency constraints. Each of them gives the constraint between two classes of trees. For example, the first inequality says that an edge in g 1 with tag t&2 = 0 exists only when the same edge in g 2 exist. If all of these constraints are satisfied, the subgraphs achieve the consistency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Capturing the Hidden Consistency", |
| "sec_num": "5.1" |
| }, |
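A direct way to see what the six constraints demand is to check them on decoded trees; the sketch below does this over tag dictionaries (for brevity it ignores the direction flipping performed by t2g, an assumption of this illustration):

```python
# Checking the six consistency constraints: each g_k maps an edge (i, j)
# to its tag, and an edge whose tag claims membership in another tree must
# actually occur in that tree.
def consistent(g1, g2, g3):
    def claims(g, bit):
        return {e for e, t in g.items() if t & bit}
    return (claims(g1, 2) <= g2.keys() and claims(g1, 4) <= g3.keys() and
            claims(g2, 1) <= g1.keys() and claims(g2, 4) <= g3.keys() and
            claims(g3, 1) <= g1.keys() and claims(g3, 2) <= g2.keys())
```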
| { |
| "text": "To solve the constrained optimization problem above, we do some transformations and then apply the Lagrangian Relaxation to it with approximation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Let a 12 (i, j) = g 1 (i, j, 2) + g 1 (i, j, 6), then the first constraint can be written as an equity constraint g 1 (:, :, 2) + g 1 (:, :, 6) = a 12 . * ( t g 2 (:, :, t))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "where \":\" is to take out all the elements in the corresponding dimension, and \". * \" is to do multiplication pointwisely. So can the other inequality constraints. If we take a 12 , a 13 , \u2022 \u2022 \u2022 , a 32 as constants, then all the constraints are linear. The constraints thus can be written as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
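The equality rewriting can be checked numerically on toy tensors (the 2-word sentence, tag placements, and indexing convention `[i, j, tag]` are illustrative assumptions):

```python
# Numeric check that the inequality g1(:,:,2) + g1(:,:,6) <= sum_t g2(:,:,t)
# becomes an equality once a12 is fixed.
import numpy as np

n, T = 2, 7
g1 = np.zeros((n, n, T))
g2 = np.zeros((n, n, T))
g1[0, 1, 2] = 1                  # edge (0, 1) in g1, tagged "also in g2"
g2[0, 1, 1] = 1                  # the same edge present in g2
lhs = g1[:, :, 2] + g1[:, :, 6]  # left-hand side of the first constraint
a12 = lhs.copy()                 # a12(i, j) = g1(i, j, 2) + g1(i, j, 6)
rhs = a12 * g2.sum(axis=2)       # ".*" pointwise product, sum over tags
assert (lhs == rhs).all()        # the equality form holds
```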
| { |
| "text": "A 1 g 1 + A 2 g 2 + A 3 g 3 = 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "where A_1, A_2, and A_3 are matrices that can be constructed from a_12, a_13, \u2022 \u2022 \u2022 , a_32.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The Lagrangian of the optimization problem is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "L(g 1 , g 2 , g 3 ; u) = f 1 (g 1 ) + f 2 (g 2 ) + f 3 (g 3 ) + u (A 1 g 1 + A 2 g 2 + A 3 g 3 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "where u is the Lagrangian multiplier. Then the dual is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "L(u) = max g 1 ,g 2 ,g 3 L(g 1 , g 2 , g 3 ; u) = max g 1 (f 1 (g 1 ) + u A 1 g 1 ) + max g 2 (f 2 (g 2 ) + u A 2 g 2 ) + max g 3 (f 3 (g 3 ) + u A 3 g 3 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Again, we use the subgradient method to minimize L(u). During the deduction, we take a 12 , a 13 , \u2022 \u2022 \u2022 , a 32 as constants, but unfortunately they are not. We propose an approximation for the a's in each iteration: Using the a's we got in the previous iteration instead. It is a reasonable approximation given that the u's in two consecutive iterations are similar and so are the a's.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lagrangian Relaxation with Approximation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The pseudo code of our algorithm is shown in Algorithm 2. We know that the score functions f 1 , f 2 , and f 3 each consist of first-order scores and higher order scores. So they can be written as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "f k (g) = s 1st k (g) + s h k (g)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "where s 1st k (g) = \u03c9 k (i, j)g(i, j) (k = 1, 2, 3). With this property, each individual problem g k \u2190 arg max g k f k (g k ) + u A k g k can be decoded easily, with modifications to the first order weights Algorithm 2: The Joint Decoding Algorithm Initialization: set u (0) , A 1 , A 2 , A 3 to 0, for k = 0 to K do", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "g 1 \u2190 arg max g 1 f 1 (g 1 ) + u (k) A 1 g 1 g 2 \u2190 arg max g 2 f 2 (g 2 ) + u (k) A 2 g 2 g 3 \u2190 arg max g 3 f 3 (g 3 ) + u (k) A 3 g 3 update A 1 , A 2 , A 3 if A 1 g 1 + A 2 g 2 + A 3 g 3 = 0 then return g 1 , g 2 , g 3 u (k+1) \u2190 u (k) \u2212 \u03b1 (k) (A 1 g 1 + A 2 g 2 + A 3 g 3 ) return g 1 , g 2 , g 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "5.3" |
| }, |
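Stripped of the Eisner decoding, the control flow of Algorithm 2 is an ordinary subgradient loop. The sketch below uses three black-box decoders and a residual function as hypothetical stand-ins: in the paper the decoders are the modified Mate models and the residual is A_1 g_1 + A_2 g_2 + A_3 g_3.

```python
# Skeleton of Algorithm 2: call each decoder with multiplier-adjusted
# scores, test the consistency residual, and take a subgradient step on u.
# The decoders and residual function here are hypothetical stand-ins.
import numpy as np

def joint_decode(decoders, residual, dim, steps=100, alpha=0.5):
    u = np.zeros(dim)
    g1 = g2 = g3 = None
    for _ in range(steps):
        g1, g2, g3 = (dec(u) for dec in decoders)
        r = residual(g1, g2, g3)     # A1 g1 + A2 g2 + A3 g3
        if not r.any():              # all consistency constraints hold
            break
        u = u - alpha * r            # subgradient update
    return g1, g2, g3
```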
| { |
| "text": "of the edges in the three models. Specifically, let w k = u A k , then we can modify the \u03c9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "k in s k to \u03c9 k , such that \u03c9 k (i, j, t) = \u03c9 k (i, j, t)+w k (i, j, t)+ w k (j, i, t).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The update of w 1 , w 2 , w 3 can be understood in an intuitive way. When one of the constraints is not satisfied, without loss of generality, say, the first one for edge y(i, j). We know g 1 (i, j) is tagged to represent that g 2 (i, j) = 1, but it is not the case. So we increase the weight of that edge with all kinds of tags in g 2 , and decrease the weight of the edge with tag representing g 2 (i, j) = 1 in g 1 . After the update of the weights, the consistency is more likely to be achieved.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Algorithm", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "For sake of formal concision, we illustrate our algorithms omitting the labels. It is straightforward to extend the algorithms to labeled parsing. In the joint decoding algorithm, we just need to extend the weights w 1 , w 2 , w 3 for every label that appears in the three tree sets, and the algorithm can be deduced similarly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Labeled Parsing", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We conduct experiments on Chinese GRBank (Sun et al., 2014) , an LFG-style GR corpus for Mandarin Chinese. Linguistically speaking, this deep dependency annotation directly encodes information such as coordination, extraction, raising, control as well as many other long-range dependencies. The selection for training, development, test data is also according to Sun et al. (2014) The measure for comparing two dependency graphs is precision/recall of GR tokens which are defined as w h , w d , l tuples, where w h is the head, w d is the dependent and l is the relation. Labeled precision/recall (LP/LR) is the ratio of tuples correctly identified by the automatic generator, while unlabeled precision/recall (UP/UR) is the ratio regardless of l. F-score is a harmonic mean of precision and recall. These measures correspond to attachment scores (LAS/UAS) in dependency tree parsing. To evaluate our GR parsing models that will be introduced later, we also report these metrics. Table 3 shows the results of graph decomposition on the training set. If we use simple decomposition, say, directly extracting three trees from a graph, we get three subgraphs. On the training set, each kind of the subgraphs cover around 90% edges and 30% sentences. When we merge them together, they cover nearly 97% edges and over 70% sentences. This indicates that the ability of a single tree is limited and three trees can cover most of the edges.", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 59, |
| "text": "(Sun et al., 2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 363, |
| "end": 380, |
| "text": "Sun et al. (2014)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 980, |
| "end": 987, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "6.1" |
| }, |
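The evaluation metric described above is straightforward to compute; the following is a minimal sketch over sets of GR tokens:

```python
# LP/LR/F over GR tokens, each a (head, dependent, label) tuple; follows
# the standard precision/recall/harmonic-mean definitions from the text.
def prf(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

For the unlabeled scores (UP/UR), call `prf` on the tuples with the label field dropped.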
| { |
| "text": "When we apply Lagrangian Relaxation to the decomposition process, both the edge coverage and the sentence coverage gain great error reduc- Table 1 shows the results of graph merging on the development set, and Table 2 on test set. The three training sets of trees are from the decomposition with Lagrangian Relaxation and the models are trained from them. In both tables, simple merging (SM) refers to first decode the three trees for a sentence then combine them by putting all the edges together. As is shown, the merged graph achieves higher f-score than other single models. With Lagrangian Relaxation, the performance of not only the merged graph but also the three subgraphs are improved, due to capturing the consistency information. When we do simple merging, though the recall of each kind of subgraphs is much lower than the precision of them, it is opposite of the merged graph. This is because the consistency between three models is not required and the models tend to give diverse subgraph predictions. When we require the consistency between the three models, the precision and recall become comparable, and higher f-scores are achieved.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 139, |
| "end": 146, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 210, |
| "end": 217, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results of Graph Decomposition", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The best scores reported by previous work, i.e. (Sun et al., 2014) and (Zhang et al., 2016) are also listed in Table 2 . We can see that our subgraphs already achieve competitive scores, and the merged graph with Lagrangian Relaxation improves both unlabeled and labeled f-scores substantially, with an error reduction of 15.13% and 10.86%. We also include Zhang et al.'s parsing result obtained by an ensemble model that integrate six different transition-based models. We can see that parser ensemble is very helpful for deep dependency parsing and the accuracy of our graph merging parser is sightly lower than this ensemble model. Given that the architecture of graph merging is quite different from transition-based parsing, we think system combination of our parser and the transition-based parser is promising.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 66, |
| "text": "(Sun et al., 2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 71, |
| "end": 91, |
| "text": "(Zhang et al., 2016)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 111, |
| "end": 118, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results of Graph Merging", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "To construct complex linguistic graphs beyond trees, we propose a new perspective, namely graph merging. We take GR parsing as a case study and exemplify the idea. There are two key problems in this perspective, namely graph decomposition and merging. To solve these two problems in a principled way, we treat both problems as optimization problems and employ combinatorial optimization techniques. Experiments demonstrate the effectiveness of the graph merging framework. This framework can be adopted to other types of flexible representations, e.g. semantic dependency graphs (Oepen et al., 2014 (Oepen et al., , 2015 and abstract meaning representations (Banarescu et al., 2013) .", |
| "cite_spans": [ |
| { |
| "start": 579, |
| "end": 598, |
| "text": "(Oepen et al., 2014", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 599, |
| "end": 620, |
| "text": "(Oepen et al., , 2015", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 658, |
| "end": 682, |
| "text": "(Banarescu et al., 2013)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In this paper, we employ projective parsers. The minimal number of sub-graphs is related to the pagenumber of GR graphs. The pagenumber of 90.96% GR graphs is smaller than or equal to 2, while the pagenumber of 98.18% GR graphs is at most 3. That means 3 projective trees are perhaps good enough to handle Chinese sentences, but 2 projective trees are not. Due to the empirical results inTable 3, using three projective trees can handle 99.55% GR arcs. Therefore, we think three is suitable for our problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by 863 Program of China (2015AA015403), NSFC (61331011), and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank anonymous reviewers for their valuable comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Abstract meaning representation for sembanking", |
| "authors": [ |
| { |
| "first": "Laura", |
| "middle": [], |
| "last": "Banarescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Bonial", |
| "suffix": "" |
| }, |
| { |
| "first": "Shu", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Madalina", |
| "middle": [], |
| "last": "Georgescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Griffitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "178--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representa- tion for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoper- ability with Discourse. Association for Computa- tional Linguistics, Sofia, Bulgaria, pages 178-186. http://www.aclweb.org/anthology/W13-2322.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Top accuracy and fast dependency parsing is not a contradiction", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "89--97", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernd Bohnet. 2010. Top accuracy and fast depen- dency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (Coling 2010). Coling 2010 Or- ganizing Committee, Beijing, China, pages 89-97. http://www.aclweb.org/anthology/C10-1011.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Parsing to 1-endpoint-crossing, pagenumber-2 graphs", |
| "authors": [ |
| { |
| "first": "Junjie", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Sheng", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junjie Cao, Sheng Huang, Weiwei Sun, and Xiao- jun Wan. 2017. Parsing to 1-endpoint-crossing, pagenumber-2 graphs. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "High precision extraction of grammatical relations", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Carroll", |
| "suffix": "" |
| }, |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Briscoe", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 19th International Conference on Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/1072228.1072241" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Carroll and Ted Briscoe. 2002. High pre- cision extraction of grammatical relations. In Proceedings of the 19th International Conference on Computational Linguistics -Volume 1. As- sociation for Computational Linguistics, Strouds- burg, PA, USA, COLING '02, pages 1-7. https://doi.org/10.3115/1072228.1072241.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "On the shortest arborescence of a directed graph", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "J" |
| ], |
| "last": "Chu", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "H" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "Science Sinica", |
| "volume": "14", |
| "issue": "", |
| "pages": "1396--1400", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y.J. Chu and T.H. Liu. 1965. On the shortest arbores- cence of a directed graph. Science Sinica pages 14:1396-1400.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Formalismindependent parser evaluation with CCG and Dep-Bank", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "7--1032", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Clark and James Curran. 2007a. Formalism- independent parser evaluation with CCG and Dep- Bank. In Proceedings of the 45th Annual Meet- ing of the Association of Computational Lin- guistics. Association for Computational Linguis- tics, Prague, Czech Republic, pages 248-255. http://www.aclweb.org/anthology/P07-1032.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Wide-coverage efficient statistical parsing with CCG and log-linear models", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "4", |
| "pages": "493--552", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Clark and James R. Curran. 2007b. Wide-coverage efficient statistical pars- ing with CCG and log-linear models. Computational Linguistics 33(4):493-552.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Optimum branchings", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Edmonds", |
| "suffix": "" |
| } |
| ], |
| "year": 1967, |
| "venue": "Journal of Research of the NationalBureau of Standards", |
| "volume": "71", |
| "issue": "", |
| "pages": "233--240", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Edmonds. 1967. Optimum branchings. Journal of Research of the NationalBureau of Standards pages 71B:233-240.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Three new probabilistic models for dependency parsing: an exploration", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [ |
| "M" |
| ], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th conference on Computational", |
| "volume": "1", |
| "issue": "", |
| "pages": "340--345", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: an exploration. In Proceed- ings of the 16th conference on Computational lin- guistics -Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 340-345.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Speed and accuracy in shallow and deep stochastic parsing", |
| "authors": [ |
| { |
| "first": "Ron", |
| "middle": [], |
| "last": "Kaplan", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Tracy", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "T" |
| ], |
| "last": "King", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "Maxwell", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Vasserman", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Crouch", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "HLT-NAACL 2004: Main Proceedings", |
| "volume": "", |
| "issue": "", |
| "pages": "97--104", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ron Kaplan, Stefan Riezler, Tracy H King, John T Maxwell III, Alex Vasserman, and Richard Crouch. 2004. Speed and accuracy in shallow and deep stochastic parsing. In Daniel Marcu Susan Du- mais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings. Association for Computational Linguistics, Boston, Massachusetts, USA, pages 97-104.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Parsing to noncrossing dependency graphs", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Jonsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "559--570", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Kuhlmann and Peter Jonsson. 2015. Parsing to noncrossing dependency graphs. Transactions of the Association for Computational Linguistics 3:559- 570.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Towards framework-independent evaluation of deep linguistic parsers", |
| "authors": [ |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the GEAF", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yusuke Miyao, Kenji Sagae, and Jun'ichi Tsu- jii. 2007. Towards framework-independent evaluation of deep linguistic parsers. In Ann Copestake, editor, Proceedings of the GEAF 2007", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Workshop. CSLI Publications, CSLI Studies in Computational Linguistics Online", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "238--258", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Workshop. CSLI Publications, CSLI Studies in Computational Linguistics Online, pages 238-258. http://www.cs.cmu.edu/ sagae/docs/geaf07miyaoetal.pdf.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Semeval 2015 task 18: Broad-coverage semantic dependency parsing", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvie", |
| "middle": [], |
| "last": "Cinkov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov\u00e1, Dan Flickinger, Jan Hajic, and Zdenka Uresov\u00e1. 2015. Semeval 2015 task 18: Broad-coverage semantic dependency pars- ing. In Proceedings of the 9th International Work- shop on Semantic Evaluation (SemEval 2015).", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Semeval 2014 task 8: Broad-coverage semantic dependency parsing", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Oepen", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Flickinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Hajic", |
| "suffix": "" |
| }, |
| { |
| "first": "Angelina", |
| "middle": [], |
| "last": "Ivanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "63--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, An- gelina Ivanova, and Yi Zhang. 2014. Semeval 2014 task 8: Broad-coverage semantic dependency pars- ing. In Proceedings of the 8th International Work- shop on Semantic Evaluation (SemEval 2014). As- sociation for Computational Linguistics and Dublin City University, Dublin, Ireland, pages 63-72. http://www.aclweb.org/anthology/S14-2008.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Finding optimal 1-endpoint-crossing trees", |
| "authors": [ |
| { |
| "first": "Emily", |
| "middle": [], |
| "last": "Pitler", |
| "suffix": "" |
| }, |
| { |
| "first": "Sampath", |
| "middle": [], |
| "last": "Kannan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "TACL", |
| "volume": "1", |
| "issue": "", |
| "pages": "13--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emily Pitler, Sampath Kannan, and Mitchell Mar- cus. 2013. Finding optimal 1-endpoint-crossing trees. TACL 1:13-24. http://www.transacl.org/wp- content/uploads/2013/03/paper13.pdf.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Semantic dependency parsing via book embedding", |
| "authors": [ |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Junjie", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Weiwei Sun, Junjie Cao, and Xiaojun Wan. 2017. Se- mantic dependency parsing via book embedding. In Proceedings of the 55th Annual Meeting of the Asso- ciation for Computational Linguistics. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Grammatical relations in Chinese: GB-ground extraction and data-driven parsing", |
| "authors": [ |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yantao", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Xin", |
| "middle": [], |
| "last": "Kou", |
| "suffix": "" |
| }, |
| { |
| "first": "Shuoyang", |
| "middle": [], |
| "last": "Ding", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "446--456", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Weiwei Sun, Yantao Du, Xin Kou, Shuoyang Ding, and Xiaojun Wan. 2014. Grammatical relations in Chi- nese: GB-ground extraction and data-driven pars- ing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers). Association for Computa- tional Linguistics, Baltimore, Maryland, pages 446- 456. http://www.aclweb.org/anthology/P14-1042.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "The penn Chinese treebank: Phrase structure annotation of a large corpus", |
| "authors": [ |
| { |
| "first": "Naiwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Fu-Dong", |
| "middle": [], |
| "last": "Chiou", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Natural Language Engineering", |
| "volume": "11", |
| "issue": "", |
| "pages": "207--238", |
| "other_ids": { |
| "DOI": [ |
| "10.1017/S135132490400364X" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Naiwen Xue, Fei Xia, Fu-dong Chiou, and Marta Palmer. 2005. The penn Chinese treebank: Phrase structure annotation of a large corpus. Natural Language Engineering 11:207-238. https://doi.org/10.1017/S135132490400364X.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Transition-based parsing for deep dependency structures", |
| "authors": [ |
| { |
| "first": "Xun", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yantao", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Computational Linguistics", |
| "volume": "42", |
| "issue": "3", |
| "pages": "353--389", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-based parsing for deep de- pendency structures. Computational Linguistics 42(3):353-389. http://aclweb.org/anthology/J16- 3001.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Examples to illustrate the additional weights.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>'s experiments. Gold standard POS-tags are</td></tr><tr><td>used for deriving features for disambiguation.</td></tr></table>", |
| "html": null, |
| "text": "Results on development set. SM is for Simple Merging, and LR for Lagrangian Relaxation." |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "text": "Lagrangian Relaxation Results on test set." |
| }, |
| "TABREF5": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "text": "Results of graph decomposition. SD is for Simple Decomposition and LR for Lagrangian Relaxation tion, indicating that Lagrangian Relaxation is very effective on the task of decomposition." |
| } |
| } |
| } |
| } |