{
"paper_id": "D17-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:14:40.185039Z"
},
"title": "Quasi-Second-Order Parsing for 1-Endpoint-Crossing, Pagenumber-2 Graphs",
"authors": [
{
"first": "Junjie",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {
"laboratory": "The MOE Key Laboratory of Computational Linguistics",
"institution": "Peking University",
"location": {}
},
"email": "junjie.cao@pku.edu.cn"
},
{
"first": "Sheng",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "The MOE Key Laboratory of Computational Linguistics",
"institution": "Peking University",
"location": {}
},
"email": "huangsheng@pku.edu.cn"
},
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "The MOE Key Laboratory of Computational Linguistics",
"institution": "Peking University",
"location": {}
},
"email": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": "",
"affiliation": {
"laboratory": "The MOE Key Laboratory of Computational Linguistics",
"institution": "Peking University",
"location": {}
},
"email": "wanxiaojun@pku.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose a new Maximum Subgraph algorithm for first-order parsing to 1-endpoint-crossing, pagenumber-2 graphs. Our algorithm has two characteristics: (1) it separates the construction for noncrossing edges and crossing edges; (2) in a single construction step, whether to create a new arc is deterministic. These two characteristics make our algorithm relatively easy to extend to incorporate crossing-sensitive second-order features. We then introduce a new algorithm for quasi-second-order parsing. Experiments demonstrate that second-order features are helpful for Maximum Subgraph parsing.",
"pdf_parse": {
"paper_id": "D17-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose a new Maximum Subgraph algorithm for first-order parsing to 1-endpoint-crossing, pagenumber-2 graphs. Our algorithm has two characteristics: (1) it separates the construction for noncrossing edges and crossing edges; (2) in a single construction step, whether to create a new arc is deterministic. These two characteristics make our algorithm relatively easy to extend to incorporate crossing-sensitive second-order features. We then introduce a new algorithm for quasi-second-order parsing. Experiments demonstrate that second-order features are helpful for Maximum Subgraph parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Previous work showed that treating semantic dependency parsing as the search for Maximum Subgraphs is not only elegant in theory but also effective in practice (Kuhlmann and Jonsson, 2015; . In particular, our previous work showed that 1-endpoint-crossing, pagenumber-2 (1EC/P2) graphs are an appropriate graph class for modelling semantic dependency structures . On the one hand, it is expressive enough to cover the large majority of semantic analyses. On the other hand, the corresponding Maximum Subgraph problem with an arc-factored disambiguation model can be solved in low-degree polynomial time.",
"cite_spans": [
{
"start": 160,
"end": 188,
"text": "(Kuhlmann and Jonsson, 2015;",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Defining disambiguation models on wider contexts than individual bi-lexical dependencies improves various syntactic parsers in different architectures. This paper studies exact algorithms for second-order parsing for 1EC/P2 graphs. The existing algorithm, viz. our previous algorithm (GCHSW, hereafter), has two properties that make it hard to incorporate higher-order features in a principled way. First, GCHSW does not explicitly consider the construction of noncrossing arcs. We will show that incorporating higher-order factors containing crossing arcs without increasing time and space complexity is extremely hard. An effective strategy is to only include higher-order factors containing only noncrossing arcs (Pitler, 2014) . But this crossing-sensitive strategy is incompatible with GCHSW. Second, all existing higher-order parsing algorithms for projective trees, including (McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010) , require that the set of arcs created in a construction step be deterministic. This design is also incompatible with GCHSW. In summary, it is not convenient to extend GCHSW to incorporate higher-order features while keeping the same time complexity.",
"cite_spans": [
{
"start": 716,
"end": 730,
"text": "(Pitler, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 883,
"end": 911,
"text": "(McDonald and Pereira, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 912,
"end": 927,
"text": "Carreras, 2007;",
"ref_id": "BIBREF2"
},
{
"start": 928,
"end": 950,
"text": "Koo and Collins, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we introduce an alternative Maximum Subgraph algorithm for first-order parsing to 1EC/P2 graphs. While keeping the same time and space complexity as GCHSW, our new algorithm has two characteristics that make it relatively easy to extend to incorporate crossing-sensitive, second-order features: (1) it separates the construction for noncrossing edges and possible crossing edges; (2) whether an edge is created is deterministic in each construction rule. We then introduce a new algorithm to perform second-order parsing. When all second-order scores are greater than or equal to 0, it exactly solves the corresponding optimization problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We implement a practical parser with a statistical disambiguation model and evaluate it on four data sets: those used in SemEval 2014 Task 8 (Oepen et al., 2014) , and the dependency graphs extracted from CCGbank (Hockenmaier and Steedman, 2007) . On all data sets, we find that our second-order parsing models are more accurate than the first-order baseline. If we do not use features derived from syntactic trees, we get an absolute unlabeled F-score improvement of 1.3 on average. When syntactic analysis is used, we get an improvement of 0.4 on average.",
"cite_spans": [
{
"start": 141,
"end": 161,
"text": "(Oepen et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 213,
"end": 245,
"text": "(Hockenmaier and Steedman, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Semantic dependency parsing can be formulated as the search for Maximum Subgraph for graph class G: Given a graph G = (V, A), find a subset A' \u2286 A with maximum total score such that the induced subgraph G' = (V, A') belongs to G. Formally, we have the following optimization problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Subgraph Parsing",
"sec_num": "2.1"
},
{
"text": "arg max_{G* \u2208 G(s, G)} \u2211_{p \u2208 G*} s_part(s, p)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Subgraph Parsing",
"sec_num": "2.1"
},
{
"text": "G(s, G) denotes the set of all graphs that belong to G and are compatible with s and G. G is usually a complete digraph. s_part(s, p) evaluates the event that part p (from a candidate graph G*) is good. We define the order of p according to the number of arcs it contains, in analogy with the terminology of tree parsing. Previous work only discussed the first-order case:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Subgraph Parsing",
"sec_num": "2.1"
},
{
"text": "arg max_{G* \u2208 G(G)} \u2211_{d \u2208 ARC(G*)} s_arc(d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Subgraph Parsing",
"sec_num": "2.1"
},
{
"text": "If G is the set of noncrossing or 1EC/P2 graphs, the above optimization problem can be solved in cubic-time (Kuhlmann and Jonsson, 2015) and quintic-time respectively. Furthermore, ignoring one linguistically-rare structure in 1EC/P2 graphs decreases the complexity to O(n^4). This paper is concerned with second-order parsing, with a special focus on the following factorizations:",
"cite_spans": [
{
"start": 108,
"end": 136,
"text": "(Kuhlmann and Jonsson, 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Subgraph Parsing",
"sec_num": "2.1"
},
{
"text": "The objective function then becomes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Subgraph Parsing",
"sec_num": "2.1"
},
{
"text": "\u2211_{d \u2208 ARC(G*)} s_arc(d) + \u2211_{s \u2208 SIB(G*)} s_sib(s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Subgraph Parsing",
"sec_num": "2.1"
},
{
"text": "Sun et al. (2017) introduced a dynamic programming algorithm for second-order planar parsing. Their empirical evaluation showed that second-order features are effective in improving parsing accuracy. It is still unknown how to incorporate such features for 1EC/P2 parsing. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Subgraph Parsing",
"sec_num": "2.1"
},
{
"text": "The formal description of the 1-endpoint-crossing property is adopted from (Pitler et al., 2013) .",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "(Pitler et al., 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "1-Endpoint-Crossing, Pagenumber-2 Graphs",
"sec_num": "2.2"
},
{
"text": "Definition 1. Edges e 1 and e 2 cross if e 1 and e 2 have distinct endpoints and exactly one of the endpoints of e 1 lies between the endpoints of e 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1-Endpoint-Crossing, Pagenumber-2 Graphs",
"sec_num": "2.2"
},
{
"text": "Definition 2. A dependency graph is 1-Endpoint-Crossing if for any edge e, all edges that cross e share an endpoint p named pencil point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1-Endpoint-Crossing, Pagenumber-2 Graphs",
"sec_num": "2.2"
},
{
"text": "Given a sentence s = w 0 w 1 \u2022 \u2022 \u2022 w n\u22121 of length n, the vertices, i.e. words, are indexed with integers, an arc from w i to w j as a (i,j) , and the common endpoint, namely pencil point, of all edges crossed with a (i,j) or a (j,i) as pt(i, j). We denote an edge as e (i,j) , if we do not consider its direction. Figure 1 is an example.",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 323,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "1-Endpoint-Crossing, Pagenumber-2 Graphs",
"sec_num": "2.2"
},
{
"text": "Definition 3. A pagenumber-k graph consists of at most k half-planes, and the arcs on each half-plane are noncrossing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1-Endpoint-Crossing, Pagenumber-2 Graphs",
"sec_num": "2.2"
},
{
"text": "These half-planes may be thought of as the pages of a book, with the vertex line corresponding to the book's spine; the embedding of a graph into such a structure is known as a book embedding. Figure 2 is an example. (Pitler et al., 2013) proved that 1-endpoint-crossing trees are a subclass of graphs whose pagenumber is at most 2. In , we studied graphs that are constrained to be both 1-endpoint-crossing and pagenumber-2. In this paper, we ignore a complex and linguistically-rare structure, named C structures in our previous paper, and study a subset of 1EC/P2 graphs; Figure 3 shows the prototype of C structures. Figure 4 shows a prototype backbone of 1EC/P2 graphs. To decompose this structure, GCHSW focuses on e (i,j) and e (l,j) , because these two edges can be optionally created without violating the 1EC and P2 restrictions; our algorithm focuses on the existence of e (i,k) and makes it the only edge constructed by applying a corresponding rule. We present new algorithms for finding optimal 1EC/P2, C-free graphs.",
"cite_spans": [
{
"start": 217,
"end": 238,
"text": "(Pitler et al., 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 193,
"end": 201,
"text": "Figure 2",
"ref_id": null
},
{
"start": 575,
"end": 583,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 621,
"end": 629,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "1-Endpoint-Crossing, Pagenumber-2 Graphs",
"sec_num": "2.2"
},
{
"text": "Cao et al. (2017) designed a polynomial time Maximum Subgraph algorithm, viz. GCHSW, for 1EC/P2 graphs by exploiting the following property: Every subgraph of a 1EC/P2 graph is also a 1EC/P2 graph. GCHSW defines a number of prototype backbones for decomposing a 1EC/P2 graph in a principled way. In each decomposition step, GCHSW focuses on the edges that can be created without violating either the 1EC or P2 restriction. Sometimes, multiple edges can be created simultaneously in one single step. Figure 4 is an example.",
"cite_spans": [],
"ref_spans": [
{
"start": 499,
"end": 507,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The GCHSWAlgorithm",
"sec_num": "2.3"
},
{
"text": "There is an important difference between GCHSW and Eisner-style Maximum Spanning Tree algorithms (MST; Eisner, 1996; McDonald and Pereira, 2006; Koo and Collins, 2010) . In each construction step, GCHSW allows multiple arcs to be constructed, but whether or not such arcs are added to the target graph depends on their arc-weights. If all arcs are assigned scores that are greater than 0, the output of our algorithm includes the most complicated 1EC/P2 graphs. For the higher-order MST algorithms, in a single construction step, it is clear whether to add a new arc, and which one. There is no local search. This deterministic strategy is also followed by Kuhlmann and Jonsson's Maximum Subgraph algorithm for noncrossing graphs. Higher-order MST models associate higher-order score functions with the construction of individual dependencies. Therefore the deterministic strategy is a prerequisite to incorporate higher-order features. The design of GCHSW is incompatible with this strategy. Figure 5 : A typical structure of crossing arcs.",
"cite_spans": [
{
"start": 103,
"end": 116,
"text": "Eisner, 1996;",
"ref_id": "BIBREF4"
},
{
"start": 117,
"end": 144,
"text": "McDonald and Pereira, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 145,
"end": 167,
"text": "Koo and Collins, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 993,
"end": 1001,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "The GCHSWAlgorithm",
"sec_num": "2.3"
},
{
"text": "It is very difficult to enumerate all high-order features for crossing arcs. Figure 5 illustrates the idea. There is a pair of crossing arcs, viz. e (x,k) and e (i,j) . The key strategy to develop a dynamic programming algorithm to generate such a crossing structure is to treat parts of this structure as intervals/spans together with an external vertex (Pitler et al., 2013; . Without loss of generality, we assume [i, j] makes up such an interval and x is the corresponding external vertex. When we consider e (i,j) , its neighboring edges can be e (i,r i ) and e (l j ,j) , and therefore we need to consider searching for the best positions of both r i and l j . Because we have already taken into account three vertices, viz. x, i and j, the two new positions increase the time complexity to at least quintic.",
"cite_spans": [
{
"start": 355,
"end": 376,
"text": "(Pitler et al., 2013;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 77,
"end": 85,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Challenge of Second-Order Decoding",
"sec_num": "2.4"
},
{
"text": "Now consider e (x,k) . When we decompose the whole graph into interval [i, j] plus x and the remaining part, we will factor out e (x,k) in a successive decomposition for resolving [i, j] plus x. We cannot capture the second-order features associated with e (x,k) and e (x,rx) , because they are in different intervals, and when these intervals are combined, we have already hidden the position information of k. Explicitly encoding k increases the time complexity to at least quintic as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge of Second-Order Decoding",
"sec_num": "2.4"
},
{
"text": "Pitler (2014) showed that it is still possible to build accurate tree parsers by considering only higher-order features of noncrossing arcs. This is in part because only a tiny fraction of neighboring arcs involve crossing arcs. However, this strategy is not easy to apply to GCHSW, because GCHSW does not explicitly analyze subgraphs of noncrossing arcs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenge of Second-Order Decoding",
"sec_num": "2.4"
},
{
"text": "Based on the discussion of Sections 2.3 and 2.4, we can see that it is not easy to extend the existing algorithm, viz. GCHSW, to handle second-order features. In this paper, we propose an alternative first-order dynamic programming algorithm. Because ignoring one linguistically-rare structure associated with the C problem in GCHSW decreases the complexity, we exclude this structure in our algorithm. Formally, we introduce a new algorithm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Maximum Subgraph Algorithm",
"sec_num": "3"
},
{
"text": "Figure 6 : Graphical representations of sub-problems. Gray curves mean the corresponding edge is not included in this sub-problem, but should be included in the final generated graph.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A New Maximum Subgraph Algorithm",
"sec_num": "3"
},
{
"text": "Int_O(i, j) \u2190 max { Int_O(i+1, j); Int_C(i, j); Int_C(i, k) + Int_O(k, j); R_C(i, k, x) + Int_O(k, x) + L_O(x, j, k) + s_arc(i, k); LR(i, k, x) + Int_O(k, x) + Int_O(x, j, k) + s_arc(i, k); Int_O(i, x) + L_C(x, k, i) + N_O(k, j, x) + s_arc(i, k); R_O(i, x, k) + Int_O(x, k) + L_O(k, j, x) + s_arc(i, k) }. Int_C(i, j) \u2190 s_arc(i, j) + max { Int_O(i+1, j); Int_C(i, k) + Int_O(k, j); R_C(i, k, x) + Int_O(k, x) + L_O(x, j, k) + s_arc(i, k); LR(i, k, x) + Int_O(k, x) + Int_O(x, j, k) + s_arc(i, k); Int_O(i, x) + L_C(x, k, i) + N_O(k, j, x) + s_arc(i, k); R_O(i, x, k) + Int_O(x, k) + L_O(k, j, x) + s_arc(i, k) }. N_O(i, j, x) \u2190 max { Int_O(i, j); N_C(i, j, x) + s_arc(x, j); N_C(i, k, x) + Int_O(k, j) + s_arc(x, k) }. N_C(i, j, x) \u2190 max { Int_O(i, j); N_C(i, k, x) + Int_O(k, j) + s_arc(x, k) }. LR(i, j, x) \u2190 max { L_O(i, k, x) + R_O(k, j, x) }. L_O(i, j, x) \u2190 max { Int_O(i, j); L_C(i, j, x) + s_arc(x, j); L_C(i, k, x) + N_O(k, j) + s_arc(x, k); Int_O(i, k, x) + L_O(k, j) + s_arc(x, k) }. L_C(i, j, x) \u2190 max { Int_O(i, j); L_C(i, j, x) + s_arc(x, j); L_C(i, k, x) + N_O(k, j, i) + s_arc(x, k); Int_O(i, k) + L_O(k, j, i) + s_arc(x, k) }. R_O(i, j, x) \u2190 max { Int_O(i, j); R_C(i, j, x) + s_arc(x, j); N_C(i, k, j) + R_O(k, j, x) + s_arc(x, k); R_O(i, k, x) + Int_O(k, j) + s_arc(x, k) }. R_C(i, j, x) \u2190 max { N_C(i, k, j) + R_O(k, j, x) + s_arc(x, k); R_O(i, k, x) + Int_O(k, j) + s_arc(x, k) }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Maximum Subgraph Algorithm",
"sec_num": "3"
},
{
"text": "Figure 7: A dynamic program to find optimal 1EC/P2, C-free graphs with arc-factored weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Maximum Subgraph Algorithm",
"sec_num": "3"
},
{
"text": "to solve the following optimization problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Maximum Subgraph Algorithm",
"sec_num": "3"
},
{
"text": "arg max_{G* \u2208 G(G)} \u2211_{d \u2208 ARC(G*)} s_arc(d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Maximum Subgraph Algorithm",
"sec_num": "3"
},
{
"text": "where G denotes the class of 1EC/P2, C-free graphs. Our algorithm has the same time and space complexity as the degenerate version of GCHSW. We present our algorithm using undirected graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Maximum Subgraph Algorithm",
"sec_num": "3"
},
{
"text": "Following GCHSW, we consider five sub-problems when we construct a maximum dependency graph on a given interval [i, k] . Though the sub-problems introduced by GCHSW and by us handle similar structures, their definitions are quite different. The sub-problems are explained as follows. Int C [i, j] implies the existence of, but does not contain, e (i,j) ; when it is combined with others, e (i,j) is immediately created.",
"cite_spans": [
{
"start": 112,
"end": 118,
"text": "[i, k]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-problems",
"sec_num": "3.1"
},
{
"text": "Int[i, j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-problems",
"sec_num": "3.1"
},
{
"text": "\u2200p \u2208 [i, j], pt(x, p) \u2209 [i, j]. N [i, j, x] could contain e (i,j) but disallows e (x,i) . We distinguish two sub-types. N O [i, j, x] may or may not contain e (x,j) . N C [i, j, x] implies the existence of but does not contain e (x,j) . When N [i, j, x] is combined with others, e (x,j) is immediately created. L[i, j, x] represents an interval from i to j inclusively as well as an external vertex x. \u2200p \u2208 [i, j], pt(x, p) = i. L[i, j, x] could contain e (i,j) but disallows e (x,i) . We distinguish two sub-types for L. L O [i, j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-problems",
"sec_num": "3.1"
},
{
"text": "Figure 7 gives a sketch of our dynamic programming algorithm. We give a detailed illustration for Int, a rough idea for L and LR, and omit other sub-problems. More details about the whole algorithm can be found in the supplementary note. ",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 8,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decomposing Sub-problems",
"sec_num": "3.2"
},
{
"text": "[i, j]. Assume that k (k \u2208 (i, j)) is the farthest vertex that is adjacent to i, and x = pt(i, k). If there is no such k (i.e., there is no arc from i to some other node in this interval), then we denote k as \u2205, and similarly for x. We illustrate the different cases as follows and give a graphical representation in Figure 8 .",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 325,
"text": "Figure 8",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Sub-problems",
"sec_num": "3.1"
},
{
"text": "Case a: k = \u2205. We can directly consider interval [i + 1, j].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-problems",
"sec_num": "3.1"
},
{
"text": "R O [i, x, k] + Int O [x, k] + L O [k, j, x] + e (i,k) + e (i,j) . For Int O [i, j] , because there may be e (i,j) , we add one more rule: Int O [i, j] = Int C [i, j].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-problems",
"sec_num": "3.1"
},
{
"text": "And we do not need to create e (i,j) in all cases. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-problems",
"sec_num": "3.1"
},
{
"text": "Figure 9: Decomposition for L O [i, j, x].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing an L Sub-problem",
"sec_num": "3.2.2"
},
{
"text": "Figure 10 : b 3 = j, not both e (x,b 1 ) and e (x,a 2 ) exist.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 9,
"text": "Figure 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decomposing an L Sub-problem",
"sec_num": "3.2.2"
},
{
"text": "L O [i, k, x] + R O [k, j, x].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing an L Sub-problem",
"sec_num": "3.2.2"
},
{
"text": "Case b. If there is no such vertex k, there must be edges from [i, k') to (k', j] for every k' in (i, j) without considering e (i,j) . For i + 1, we assume e (i,a 1 ) is the farthest edge that goes from i. For a 1 , we assume e (b 1 ,b 2 ) is the farthest edge from b 1 where b 1 is in (i, a 1 ) and b 2 is in (a 1 , j). For b 2 , we assume e (a 1 ,a 3 ) is the farthest edge from a 1 where a 3 is in (b 2 , j) and a 1 is the pencil point. We then get the series {a 1 , a 2 , a 3 ...a n } and {b 1 , b 2 ...b m } which guarantees b i < a i , a i < b i+1 and max(a n , b m ) = j. If b m = j, we will get a graph like Figure 10 . If e (x,b 1 ) exists, this LR subproblem degenerates to an L subproblem. If e (x,a n ) exists, this subproblem degenerates to an R subproblem.",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 354,
"text": "(a 1 ,a 3 )",
"ref_id": "FIGREF2"
},
{
"start": 616,
"end": 625,
"text": "Figure 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decomposing an L Sub-problem",
"sec_num": "3.2.2"
},
{
"text": "If a n = j, we will get a graph like Figure 11 . If there exists only e (x,b 1 ) or e (x,bm) , we can solve it like b m = j. If both exist, this is a typical C-structure like Figure 3 and we cannot get it through other decomposition.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 46,
"text": "Figure 11",
"ref_id": "FIGREF7"
},
{
"start": 175,
"end": 183,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Decomposing an L Sub-problem",
"sec_num": "3.2.2"
},
{
"text": "The above discussion gives the rough idea of the correctness of the following conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing an L Sub-problem",
"sec_num": "3.2.2"
},
{
"text": "Theorem 1. Our new algorithm is sound and complete with respect to 1EC/P2, C-free graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing an L Sub-problem",
"sec_num": "3.2.2"
},
{
"text": "An LR, L, R or N sub-problem allows crossing arcs to be built, but does not necessarily create them. For example, L C [i, j, x] allows e (i,j) to cross with e (x,y) (y \u2208 (i, j)). Because every subgraph of a 1EC/P2 graph is also a 1EC/P2 graph, we allow an L C [i, j, x] to degenerate directly to Int O [i, j] . In this way, we can make sure that all subgraphs can be constructed by our algorithm. Figure 12 shows the rough idea. To generate the same graph, we have different derivations. The spurious ambiguity in our algorithm does not affect the correctness of first-order parsing, because scores are assigned to individual dependencies, rather than derivation processes. There is no need to distinguish one special derivation here.",
"cite_spans": [
{
"start": 302,
"end": 308,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 397,
"end": 406,
"text": "Figure 12",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Spurious Ambiguity",
"sec_num": "3.3"
},
{
"text": "We propose a second-order extension of our new algorithm. We focus on the factorizations introduced in Section 2.1. In particular, the two arcs in a factor should not cross other arcs. Formally, we introduce a new algorithm to solve the optimization problem with the following objective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quasi-Second-Order Extension",
"sec_num": "4"
},
{
"text": "\u2211_{d \u2208 ARC(G*)} s_arc(d) + \u2211_{s \u2208 SIB(G*)} max(s_sib(s), 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quasi-Second-Order Extension",
"sec_num": "4"
},
{
"text": "In the first-order algorithm, all noncrossing edges can be constructed as the frontier edge of an Int C . e (a,c) . Assuming that a pair of crossing arcs may exist yields another derivation: Int C [a, e] \u21d2 e (a,e) + LR [a, c, d] e (a,c) ;",
"cite_spans": [
{
"start": 219,
"end": 228,
"text": "[a, c, d]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 106,
"end": 113,
"text": "e (a,c)",
"ref_id": null
},
{
"start": 229,
"end": 236,
"text": "e (a,c)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quasi-Second-Order Extension",
"sec_num": "4"
},
{
"text": "+ Int O [k, d] + L O [d, e, c] +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quasi-Second-Order Extension",
"sec_num": "4"
},
{
"text": "Then LR[a, c, d] \u21d2 L O [a, b, d] + R O [b, c, d] \u21d2 Int O [a, b] + Int O [b, c].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quasi-Second-Order Extension",
"sec_num": "4"
},
{
"text": "So we can develop an exact decoding algorithm by modifying the composition for Int C while keeping the decompositions for LR, N, L, and R intact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quasi-Second-Order Extension",
"sec_num": "4"
},
{
"text": "In order to capture the second-order features from noncrossing neighbors, we need to find the rightmost node adjacent to i, denoted as r i , and the leftmost node adjacent to j, denoted as l j , while i < r i \u2264 l j < j. To do this, we split Int C [i, j] into at most three parts to capture the sibling factors. Denote the score of adjacent edges e (i,j 1 ) and e (i,j 2 ) as s 2 (i, j 1 , j 2 ). When j is the innermost node adjacent to i, we denote the score as s 2 (i, \u2205, j). We give a sketch of the decomposition in Figure 14 and a graphical representation in Figure 13. The following is a rough illustration.",
"cite_spans": [
{
"start": 247,
"end": 253,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 519,
"end": 528,
"text": "Figure 14",
"ref_id": "FIGREF10"
},
{
"start": 563,
"end": 569,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "Case a: r i = \u2205. We further distinguish three sub-cases. a.1 If l j = \u2205 too, both sides are the innermost second-order factor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "a.2 There is a crossing arc from j. This case is handled in the same way as the first-order algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "a.3 l j = \u2205. We introduce a new decomposition rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "Case b: There is a crossing arc from i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "b.1 l j = \u2205. Similar case to (a.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "b.2 There is a crossing arc from j. Similar case to (a.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "b.3 There is a noncrossing arc from j. We introduce a new rule to calculate SIB(j, l j , i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "Case c: There is a noncrossing arc from i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "c.1 l j = \u2205. Similar to (a.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "c.2 There is a crossing arc from j. Similar to (b.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "c.3 There is a noncrossing arc from j too. We introduce a new rule to calculate SIB(i, r i , j) and SIB(j, l j , i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "New Decomposition for Int C",
"sec_num": "4.1"
},
{
"text": "The complexity of both first-and second-order algorithms can be analyzed in the same way. The sub-problem Int is of size O(n 2 ), with a calculating time of order O(n 2 ) at most. For sub-problems L, R, LR, and N, each has O(n 3 ) elements, with a unit calculating time O(n). Therefore both algorithms run in time of O(n 4 ) with a space requirement of O(n 3 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity",
"sec_num": "4.2"
},
{
"text": "A traditional second-order model takes as the objective function s\u2208SIB(G * ) s sib (s). Our model instead tries to optimize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "s\u2208SIB(G * ) max(s sib (s), 0).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "This model is somehow inadequate given that the second-order score function cannot penalize a bad factor. When a negative score is assigned to a second-order factor, it will be taken as 0 by our algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "This inadequacy is due to the spurious ambiguity problem that is illustrated in Section 3.3. Take the two derivations in Figure 12 for example. The derivation that starts from Int C [a, e] \u21d2 Int C [a, c] + Int O [c, e] incorporates the second-order score s sib (a, c, e). This is different when we consider the derivation that starts from",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 130,
"text": "Figure 12",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "Int C [a, e] \u21d2 LR[a, c, d] + Int O [k, d] + L O [d, e, c].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "Because we assume temporarily that e (a,c) crosses others, we do not consider s sib (a, c, e). We can see from this example that second-order scores not only depend on the derived graphs but also sensitive to the derivation processes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "If a second-order score is greater than 0, our algorithm selects the derivation that takes it into account since it increases the total score. If a secondorder score is negative, our algorithm avoids including it by selecting other paths. In other words, our algorithm treats this score as 0. Figure 13 : Decomposition for Int C [i, j] in the second-order parsing algorithm. 5 Practical Parsing",
"cite_spans": [
{
"start": 329,
"end": 335,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 293,
"end": 302,
"text": "Figure 13",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "i j = i + 1 j \u2212 1 (a.2) i j = i + 1 j (a.3) i j = i + 1 l j + l j j (b.1) i j = i j \u2212 1 (b.3) i j = i r j + r j j (c.1) i j = i r i + r i j \u2212 1 (c.2) i j = i r i + r i j (c.3) i j = i r i + r i l j + l j j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "Int C (i, j) \u2190 s arc (i, j) + max \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 IntO(i + 1, j \u2212 1) + ssib(i, \u2205, j) + ssib(j, \u2205, i) IntO(i + 1, j) + ssib(i, \u2205, j) IntO(i + 1, lj) + IntC (lj, j) + ssib(i, \u2205, j)+ ssib(j, lj, i) IntO(i, j \u2212 1) + ssib(j, \u2205, i) IntO(i, lj) + IntC (lj, j) + ssib(j, lj, i) IntC (i, ri) + IntO[ri, j \u2212 1] + ssib(i, ri, j)+ ssib(j, \u2205, i) IntC (i, ri) + IntO[ri, j] + ssib(i, ri, j) IntC (i, ri) + IntO[ri, lj] + IntC (lj, j)+ ssib(i, ri, j) + ssib(j, lj, i) RC (i, k, x) + IntO(k, x) + LO(x, j, k) + e (i,k) LR(i, k, x) + IntO(k, x) + IntO(x, j, k) + e (i,k) IntO[i, x] + LC [x, k, i] + NO[k, j, x] + e (i,k) RO[i, x, k] + IntO[x, k] + LO[k, j, x] + e (i,k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "We extend our quartic-time parsing algorithm into a practical parser. In the context of data-driven parsing, this requires an extra disambiguation model. As with many other parsers, we employ a global linear model. Following Zhang et al. (2016) 's experience, we define rich features extracted from word, POS-tags and pseudo trees. To estimate parameters, we utilize the averaged perceptron algorithm (Collins, 2002) .",
"cite_spans": [
{
"start": 225,
"end": 244,
"text": "Zhang et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 401,
"end": 416,
"text": "(Collins, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation-Sensitive Training",
"sec_num": "5.1"
},
{
"text": "Our training proceudre is sensitive to derivation rather then derived graphs. For each sentence, we first apply our algorithm to find the optimal prediction derivation. The we collect all first-and second-order factors from this derivation to update parameters. To train a first-order model, because our algorithm includes all factors, viz. depencies, there is no difference between our derivationbased method and a traditional derived structurebased method. For the second-order model, our method increases the second-order scores somehow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation-Sensitive Training",
"sec_num": "5.1"
},
{
"text": "We evaluate first-and second-order models on four representative data sets: CCGBank (Hockenmaier and Steedman, 2007) , DeepBank (Flickinger et al., 2012) , Enju HPSGBank (Miyao et al., 2005) and Prague Dependency TreeBank (Hajic et al., 2012) . We use \"standard\" training, validation, and test splits to facilitate comparisons.",
"cite_spans": [
{
"start": 84,
"end": 116,
"text": "(Hockenmaier and Steedman, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 128,
"end": 153,
"text": "(Flickinger et al., 2012)",
"ref_id": "BIBREF5"
},
{
"start": 170,
"end": 190,
"text": "(Miyao et al., 2005)",
"ref_id": "BIBREF13"
},
{
"start": 222,
"end": 242,
"text": "(Hajic et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "5.2"
},
{
"text": "English CCG parsing, we use section 02-21 as training data, section 00 as the development data, and section 23 for testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Following previous experimental setup for",
"sec_num": null
},
{
"text": "\u2022 The DeepBank, Enju HPSGBank and Prague Dependency TreeBank are from SemEval 2014 Task 8 (Oepen et al., 2014) , and the data splitting policy follows the shared task.",
"cite_spans": [
{
"start": 90,
"end": 110,
"text": "(Oepen et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Following previous experimental setup for",
"sec_num": null
},
{
"text": "Experiments for CCG-grounded analysis were performed using automatically assigned POS-tags that are generated by a symbol-refined HMM tagger (Huang et al., 2010) . Experiments for the other three data sets used POS-tags provided by the shared task. We also use features extracted from pseudo trees. We utilize the Mate parser (Bohnet, 2010) to generate pseudo trees. All experimental results consider directed dependencies in a standard way. We report Unlabeled Precision (UP), Recall (UR) and F-score (UF), which are calculated using the official evaluation tool provided by SDP2014 shared task. Table 1 lists the accuracy of our system. The output of our parser was evaluated against each dependency in the corpus. We can see that the firstorder parser obtains a considerably good accuracy, with rich syntactic features. Furthermore, we can see that the introduction of higher-order features improves parsing substantially for all data sets, as expected. When syntactic trees are utilized, the Table 2 : Parsing accuracy evaluated on the test sets. \"SJW\" denotes the book embedding parser introduced in . improvement is smaller but still significant on the three SemEval data sets. Table 2 lists the parsing results on the test data together with the result obtained by Sun et al. (SJW; 2017)'s system. The building architectures of both systems are comparable.",
"cite_spans": [
{
"start": 141,
"end": 161,
"text": "(Huang et al., 2010)",
"ref_id": "BIBREF8"
},
{
"start": 326,
"end": 340,
"text": "(Bohnet, 2010)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 597,
"end": 604,
"text": "Table 1",
"ref_id": "TABREF5"
},
{
"start": 996,
"end": 1003,
"text": "Table 2",
"ref_id": null
},
{
"start": 1184,
"end": 1191,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u2022 Following previous experimental setup for",
"sec_num": null
},
{
"text": "1. Both systems have explicit control of the output structures. While Sun et al.'s system constrain the output graph to be P2 only, our system adds an additional 1EC restriction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "5.3"
},
{
"text": "2. Their system's second-order features also includes both-side neighboring features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "5.3"
},
{
"text": "3. Their system uses beam search and dual decomposition and therefore approximate, while ours perform exact decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "5.3"
},
{
"text": "We can see that while our purely Maximum Subgraph parser obtains better results on DeepBank and CCGBank; while the book embedding parser is better on the other two data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy",
"sec_num": "5.3"
},
{
"text": "Our algorithm is sensitive to the derivation process and may exclude a couple of negative secondorder scores by selecting misleading derivations. Neverthess, our algorithm works in an exact way to include all positive second-order scores. Table 3 shows the coverage of all second-order factors. On average, 99.67% second-order factors are calculated by our algorithm. This relatively satisfactory coverage suggests that our algorithm is very effective to include second-order features. Only a very small portion is dropped. ",
"cite_spans": [],
"ref_spans": [
{
"start": 239,
"end": 247,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "This paper proposed two exact, graph-based algorithms for 1EC/P2 parsing with first-order and quasi-second-order scores. The resulting parser has the same asymptotic run time as Cao et al. (2017)'s algorithm. An exploration of other factorizations that facilitate semantic dependency parsing may be an interesting avenue for future work. Recent work has investigated faster decoding for higher-order graph-based projective parsing e.g. vine pruning (Rush and Petrov, 2012) and cube pruning (Zhang and McDonald, 2012) . It would be interesting to extend these lines of work to decrease the complexity of our quartic algorithm.",
"cite_spans": [
{
"start": 449,
"end": 472,
"text": "(Rush and Petrov, 2012)",
"ref_id": "BIBREF17"
},
{
"start": 490,
"end": 516,
"text": "(Zhang and McDonald, 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Top accuracy and fast dependency parsing is not a contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet. 2010. Top accuracy and fast depen- dency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (Coling 2010). Coling 2010 Or- ganizing Committee, Beijing, China, pages 89-97. http://www.aclweb.org/anthology/C10-1011.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Parsing to 1-endpoint-crossing, pagenumber-2 graphs",
"authors": [
{
"first": "Junjie",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junjie Cao, Sheng Huang, Weiwei Sun, and Xiao- jun Wan. 2017. Parsing to 1-endpoint-crossing, pagenumber-2 graphs. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Experiments with a higherorder projective dependency parser",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras. 2007. Experiments with a higher- order projective dependency parser. In In Proc. EMNLP-CoNLL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {
"DOI": [
"10.3115/1118693.1118694"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and ex- periments with perceptron algorithms. In Pro- ceedings of the 2002 Conference on Empirical Methods in Natural Language Processing. Asso- ciation for Computational Linguistics, pages 1-8. https://doi.org/10.3115/1118693.1118694.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Three new probabilistic models for dependency parsing: an exploration",
"authors": [
{
"first": "Jason",
"middle": [
"M"
],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th conference on Computational",
"volume": "1",
"issue": "",
"pages": "340--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: an exploration. In Proceed- ings of the 16th conference on Computational lin- guistics -Volume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, pages 340-345.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deepbank: A dynamically annotated treebank of the wall street journal",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Valia",
"middle": [],
"last": "Kordoni",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories",
"volume": "",
"issue": "",
"pages": "85--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Flickinger, Yi Zhang, and Valia Kordoni. 2012. Deepbank: A dynamically annotated treebank of the wall street journal. In Proceedings of the Eleventh International Workshop on Treebanks and Linguistic Theories. pages 85-96.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Announcing prague czech-english dependency treebank 2.0",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Hajicov\u00e1",
"suffix": ""
},
{
"first": "Jarmila",
"middle": [],
"last": "Panevov\u00e1",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sgall",
"suffix": ""
},
{
"first": "Ondej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Silvie",
"middle": [],
"last": "Cinkov\u00e1",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Fuc\u00edkov\u00e1",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Mikulov\u00e1",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Pajas",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Popelka",
"suffix": ""
},
{
"first": "Jir\u00ed",
"middle": [],
"last": "Semeck\u00fd",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Sindlerov\u00e1",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Step\u00e1nek",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Toman",
"suffix": ""
},
{
"first": "Zdenka",
"middle": [],
"last": "Uresov\u00e1",
"suffix": ""
},
{
"first": "Zdenek",
"middle": [],
"last": "Zabokrtsk\u00fd",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Hajic, Eva Hajicov\u00e1, Jarmila Panevov\u00e1, Petr Sgall, Ondej Bojar, Silvie Cinkov\u00e1, Eva Fuc\u00edkov\u00e1, Marie Mikulov\u00e1, Petr Pajas, Jan Popelka, Jir\u00ed Se- meck\u00fd, Jana Sindlerov\u00e1, Jan Step\u00e1nek, Josef Toman, Zdenka Uresov\u00e1, and Zdenek Zabokrtsk\u00fd. 2012. Announcing prague czech-english dependency tree- bank 2.0. In Proceedings of the 8th International Conference on Language Resources and Evaluation. Istanbul, Turkey.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CCGbank: A corpus of CCG derivations and dependency structures extracted from the penn treebank",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "3",
"pages": "355--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: A corpus of CCG derivations and dependency structures extracted from the penn treebank. Com- putational Linguistics 33(3):355-396.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Self-training with products of latent variable grammars",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqiang Huang, Mary Harper, and Slav Petrov. 2010. Self-training with products of latent vari- able grammars. In Proceedings of the 2010",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "12--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Empirical Methods in Natural Language Processing. Association for Computa- tional Linguistics, Cambridge, MA, pages 12-22. http://www.aclweb.org/anthology/D10-1002.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient thirdorder dependency parsers",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo and Michael Collins. 2010. Efficient third- order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Com- putational Linguistics. Association for Computa- tional Linguistics, Uppsala, Sweden, pages 1-11. http://www.aclweb.org/anthology/P10-1001.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Parsing to noncrossing dependency graphs",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Jonsson",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "559--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Kuhlmann and Peter Jonsson. 2015. Parsing to noncrossing dependency graphs. Transactions of the Association for Computational Linguistics 3:559- 570.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algo- rithms. In Proceedings of 11th Conference of the European Chapter of the Association for Computa- tional Linguistics (EACL-2006)). volume 6, pages 81-88.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Ninomiya",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "684--693",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Miyao, Takashi Ninomiya, and Jun'ichi Tsujii. 2005. Corpus-oriented grammar development for acquiring a head-driven phrase structure grammar from the penn treebank. In IJCNLP. pages 684-693.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semeval 2014 task 8: Broad-coverage semantic dependency parsing",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "63--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Dan Flickinger, Jan Hajic, An- gelina Ivanova, and Yi Zhang. 2014. Semeval 2014 task 8: Broad-coverage semantic dependency pars- ing. In Proceedings of the 8th International Work- shop on Semantic Evaluation (SemEval 2014). As- sociation for Computational Linguistics and Dublin City University, Dublin, Ireland, pages 63-72. http://www.aclweb.org/anthology/S14-2008.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A crossing-sensitive thirdorder factorization for dependency parsing",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "2",
"issue": "",
"pages": "41--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler. 2014. A crossing-sensitive third- order factorization for dependency parsing. TACL 2:41-54. http://www.transacl.org/wp- content/uploads/2014/02/39.pdf.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Finding optimal 1-endpoint-crossing trees",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Sampath",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2013,
"venue": "TACL",
"volume": "1",
"issue": "",
"pages": "13--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Sampath Kannan, and Mitchell Mar- cus. 2013. Finding optimal 1-endpoint-crossing trees. TACL 1:13-24. http://www.transacl.org/wp- content/uploads/2013/03/paper13.pdf.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Vine pruning for efficient multi-pass dependency parsing",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "498--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Rush and Slav Petrov. 2012. Vine pruning for efficient multi-pass dependency pars- ing. In Proceedings of the 2012 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics, Montr\u00e9al, Canada, pages 498-507. http://www.aclweb.org/anthology/N12-1054.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semantic dependency parsing via book embedding",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Sun, Junjie Cao, and Xiaojun Wan. 2017. Se- mantic dependency parsing via book embedding. In Proceedings of the 55th Annual Meeting of the Asso- ciation for Computational Linguistics. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generalized higher-order dependency parsing with cube pruning",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "320--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Ryan McDonald. 2012. General- ized higher-order dependency parsing with cube pruning. In Proceedings of the 2012 Joint Con- ference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning. Association for Computational Linguistics, Jeju Island, Korea, pages 320-331. http://www.aclweb.org/anthology/D12-1030.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Transition-based parsing for deep dependency structures",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yantao",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2016,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "3",
"pages": "353--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xun Zhang, Yantao Du, Weiwei Sun, and Xiaojun Wan. 2016. Transition-based parsing for deep de- pendency structures. Computational Linguistics 42(3):353-389.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "e (a,c) 's crossing edges e (b,d) and e (b,e) share an endpoint b. A pagenumber-2 graph. The upper and the lower figures represent two half-planes respectively.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "C structure has two crossing chains.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Without loss of generality, we show the decompo-sition of L O [i, j, x] as follows. For L C [i, j, x],we ignore Case b but follow the others.Case a. If there is no more edge from x to (i, j], then it will degenerate to Int O [i, j]. Case b. If there exists e (x,j) , then it will degenerate to L C [i, j, x] + e (x,j) . Case c. Assume that there are edges from x to (i, j) and e (x,k) is the farthest one. It divides [i, j] into [i, k] and [k, j]. c.1 If there is an edge from x to (i, k), [i, k] and [k, j] are L C [i, k, x] and N O [k, j, i]. c.2 If there is no edge from x to (i, k), [i, k] and [k, j] are Int O [i, k] and L O [k, j, i].",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "is a graphical representation.3.2.3 Decomposing an LR Sub-problemLR[i, j, x] means i or j is the pencil point of edges from x to (i, j). We show the decomposition of LR[i, j, x] as follows:(a)",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Decomposition for Int C [i, j] in the first-order parsing algorithm. pt(i, k) = x.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF7": {
"text": "a 3 = j. Both e (x,b 1 ) and e (x,b 3 ) exist. Case a. If there is a vertex k within (i, j), which divides [i, j] into [i, k] and [k, j]. And it guarantees no edge from [i, k) to (k, j]. i is the pencil point of edges from x to (i, k] because no edge from j to (i, k) can cross these edges. Similarly j has to be the pencil point of edges from x to (k, j). Obviously, [i, k] is an L O and [k, j] is an R O with external x. Thus the problem is decomposed as",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF8": {
"text": "Illustration of spurious ambiguity. The two solid curves represent two arcs in the target graph, but not the dashed one. Excluding crossing edges leads to the first derivation: Int C [a, e] \u21d2 e (a,e) + Int C [a, c] + Int O [c, e] +",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF10": {
"text": "Decomposition for Int C [i, j, x].",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "represents an interval from i to j inclusively. And there is no edge e (i ,j ) such that i \u2208 [i, j] and j / \u2208 [i, j]. We distinguish two sub-types for Int. Int O [i, j] may or may not contain e (i,j) , while Int C [i, j] contains e (i,j) . LR LR[i, j, x] represents an interval from i to j inclusively and an external vertex x. \u2200p \u2208 [i, j], pt(x, p) = i or j. LR[i, j, x] implies the existence of e (i,j) , but does not contain e (i,j) . When LR[i, j, x] is combined with",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "R[i, j, x] represents an interval from i to j inclusively as well as an external vertex x. \u2200p \u2208 [i, j], pt(x, p) = j. R[i, j, x] disallows e (x,j) and e (x,i) . We distinguish two sub-types for R. R O [i, j, x] may or may not contain e (i,j) . R C[i, j, x] implies the existence of but does not contain e (i,j)",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Case d: x \u2208 (i, k).</td></tr><tr><td>d.1 Assume that there exist edges from i to</td></tr><tr><td>(x, k), so the pencil point of edges from x to</td></tr><tr><td>(k, j] is i. Therefore [k, j] is an N. Because x</td></tr><tr><td>is pencil point of edges from i to (x, k], [x, k]</td></tr><tr><td>is an L. Furthmore, it is an L C because we</td></tr><tr><td>generate e (i,k) in this step. It is obvious that</td></tr><tr><td>[i, x] is an Int.</td></tr></table>",
"text": "Assume that there exists an edge from k to some node r in (x, j], so x can only be k and pencil point of edges from k to (x, j] is x. We assume no edge from k to any node in [x, j], x thus can be i or k. As a result, [x, j] is an Int and [i, k, x] is an LR.",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>DeepBank</td><td/><td/><td>EnjuBank</td><td/><td/><td>CCGBank</td><td/><td/><td>PCEDT</td><td/></tr><tr><td>Tree</td><td>UP</td><td>UR</td><td>UF</td><td>UP</td><td>UR</td><td>UF</td><td>UP</td><td>UR</td><td>UF</td><td>UP</td><td>UR</td><td>UF</td></tr><tr><td>No</td><td>1or 89</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"text": ".43 83.03 86.11 90.10 87.10 88.58 91.63 88.07 89.82 88.13 81.53 84.70 2or 89.23 85.98 87.57 90.88 89.90 90.39 91.96 89.54 90.74 88.56 84.57 86.52 Syn 1or 91.24 87.14 89.14 92.72 90.96 91.83 94.28 91.79 93.02 91.53 86.95 89.18 2or 90.93 88.79 89.85 92.73 92.11 92.42 93.99 92.27 93.13 91.02 88.20 89.59",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>DeepBank</td><td/><td/><td>EnjuBank</td><td/><td/><td>CCGBank</td><td/><td/><td>PCEDT</td><td/></tr><tr><td>Tree</td><td>UP</td><td>UR</td><td>UF</td><td>UP</td><td>UR</td><td>UF</td><td>UP</td><td>UR</td><td>UF</td><td>UP</td><td>UR</td><td>UF</td></tr><tr><td>No</td><td>1or 88.87</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"text": "Parsing accuracy evaluated on the development sets. 82.50 85.57 90.12 86.76 88.41 91.95 88.29 90.08 86.87 80.45 83.54 2or 88.77 85.61 87.16 91.06 89.50 90.27 92.25 89.80 91.01 87.07 83.45 85.22 Syn 1or 90.68 86.57 88.58 92.82 90.62 91.71 94.32 91.88 93.09 90.11 85.83 87.97 2or 90.13 88.21 89.16 92.84 91.50 92.17 94.09 92.27 93.17 89.73 87.13 88.41 SJW (2or) 89.99 87.77 88.87 92.87 92.04 92.46 93.45 92.51 92.98 89.58 87.73 88.65",
"num": null
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Coverage of second-order factors on the developmenet data.",
"num": null
}
}
}
}