{
"paper_id": "P14-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:06:55.871716Z"
},
"title": "Shift-Reduce CCG Parsing with a Dependency Model",
"authors": [
{
"first": "Wenduan",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge Computer Laboratory",
"location": {}
},
"email": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {}
},
"email": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore University of Technology",
"location": {}
},
"email": "zhang@sutd.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the first dependency model for a shift-reduce CCG parser. Modelling dependencies is desirable for a number of reasons, including handling the \"spurious\" ambiguity of CCG; fitting well with the theory of CCG; and optimizing for structures which are evaluated at test time. We develop a novel training technique using a dependency oracle, in which all derivations are hidden. A challenge arises from the fact that the oracle needs to keep track of exponentially many gold-standard derivations, which is solved by integrating a packed parse forest with the beam-search decoder. Standard CCGBank tests show the model achieves up to 1.05 labeled F-score improvements over three existing, competitive CCG parsing models.",
"pdf_parse": {
"paper_id": "P14-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the first dependency model for a shift-reduce CCG parser. Modelling dependencies is desirable for a number of reasons, including handling the \"spurious\" ambiguity of CCG; fitting well with the theory of CCG; and optimizing for structures which are evaluated at test time. We develop a novel training technique using a dependency oracle, in which all derivations are hidden. A challenge arises from the fact that the oracle needs to keep track of exponentially many gold-standard derivations, which is solved by integrating a packed parse forest with the beam-search decoder. Standard CCGBank tests show the model achieves up to 1.05 labeled F-score improvements over three existing, competitive CCG parsing models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Combinatory Categorial Grammar (CCG; Steedman (2000)) is able to derive typed dependency structures (Hockenmaier, 2003; Clark and Curran, 2007) , providing a useful approximation to the underlying predicate-argument relations of \"who did what to whom\". To date, CCG remains the most competitive formalism for recovering \"deep\" dependencies arising from many linguistic phenomena such as raising, control, extraction and coordination (Rimell et al., 2009; Nivre et al., 2010) .",
"cite_spans": [
{
"start": 100,
"end": 119,
"text": "(Hockenmaier, 2003;",
"ref_id": "BIBREF15"
},
{
"start": 120,
"end": 143,
"text": "Clark and Curran, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 433,
"end": 454,
"text": "(Rimell et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 455,
"end": 474,
"text": "Nivre et al., 2010)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To achieve its expressiveness, CCG exhibits so-called \"spurious\" ambiguity, permitting many non-standard surface derivations which ease the recovery of certain dependencies, especially those arising from type-raising and composition. But this raises the question of what is the most suitable model for CCG: should we model the derivations, the dependencies, or both? The choice for some existing parsers (Hockenmaier, 2003; Clark and Curran, 2007) is to model derivations directly, restricting the gold-standard to be the normal-form derivations (Eisner, 1996) from CCGBank (Hockenmaier and Steedman, 2007) .",
"cite_spans": [
{
"start": 404,
"end": 423,
"text": "(Hockenmaier, 2003;",
"ref_id": "BIBREF15"
},
{
"start": 424,
"end": 447,
"text": "Clark and Curran, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 546,
"end": 560,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF9"
},
{
"start": 591,
"end": 606,
"text": "Steedman, 2007)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Modelling dependencies, as a proxy for the semantic interpretation, fits well with the theory of CCG, in which Steedman (2000) argues that the derivation is merely a \"trace\" of the underlying syntactic process, and that the structure which is built, and predicated over when applying constraints on grammaticality, is the semantic interpretation. The early dependency model of , in which model features were defined over only dependency structures, was partly motivated by these theoretical observations.",
"cite_spans": [
{
"start": 111,
"end": 126,
"text": "Steedman (2000)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More generally, dependency models are desirable for a number of reasons. First, modelling dependencies provides an elegant solution to the spurious ambiguity problem (Clark and Curran, 2007) . Second, obtaining training data for dependencies is likely to be easier than for syntactic derivations, especially for incomplete data (Schneider et al., 2013) . Clark and Curran (2006) show how the dependency model from Clark and Curran (2007) extends naturally to the partialtraining case, and also how to obtain dependency data cheaply from gold-standard lexical category sequences alone. And third, it has been argued that dependencies are an ideal representation for parser evaluation, especially for CCG (Briscoe and Carroll, 2006; , and so optimizing for dependency recovery makes sense from an evaluation perspective.",
"cite_spans": [
{
"start": 166,
"end": 190,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 328,
"end": 352,
"text": "(Schneider et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 355,
"end": 378,
"text": "Clark and Curran (2006)",
"ref_id": "BIBREF3"
},
{
"start": 414,
"end": 437,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
},
{
"start": 703,
"end": 730,
"text": "(Briscoe and Carroll, 2006;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we fill a gap in the literature by developing the first dependency model for a shiftreduce CCG parser. Shift-reduce parsing applies naturally to CCG (Zhang and Clark, 2011) , and the left-to-right, incremental nature of the decoding fits with CCG's cognitive claims. The discriminative model is global and trained with the structured perceptron. The decoder is based on beam-search (Zhang and Clark, 2008) with the advantage of linear-time decoding (Goldberg et al., 2013) .",
"cite_spans": [
{
"start": 164,
"end": 187,
"text": "(Zhang and Clark, 2011)",
"ref_id": "BIBREF29"
},
{
"start": 397,
"end": 420,
"text": "(Zhang and Clark, 2008)",
"ref_id": "BIBREF28"
},
{
"start": 464,
"end": 487,
"text": "(Goldberg et al., 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A main contribution of the paper is a novel technique for training the parser using a dependency oracle, in which all derivations are hidden. A challenge arises from the potentially exponential number of derivations leading to a gold-standard dependency structure, which the oracle needs to keep track of. Our solution is an integration of a packed parse forest, which efficiently stores all the derivations, with the beam-search decoder at training time. The derivations are not explicitly part of the data, since the forest is built from the gold-standard dependencies. We also show how perceptron learning with beam-search (Collins and Roark, 2004) can be extended to handle the additional ambiguity, by adapting the \"violation-fixing\" perceptron of Huang et al. (2012) .",
"cite_spans": [
{
"start": 626,
"end": 651,
"text": "(Collins and Roark, 2004)",
"ref_id": "BIBREF7"
},
{
"start": 752,
"end": 771,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
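The early-update scheme sketched in this paragraph can be made concrete with a toy example. This is not the authors' implementation: the feature map, the toy action alphabet, and `early_update_step` are invented for illustration. With a dependency oracle, `gold_is_valid` would accept many action prefixes rather than prefixes of a single gold sequence.

```python
# Toy sketch of structured-perceptron training with beam search and early
# update, in the spirit of Collins and Roark (2004). All names are invented.
from collections import defaultdict

def features(prefix):
    # Indicator feature on each (position, action) pair of the prefix.
    return {(i, a): 1 for i, a in enumerate(prefix)}

def score(weights, prefix):
    return sum(weights[f] * v for f, v in features(prefix).items())

def early_update_step(weights, actions, gold_is_valid, n_steps, beam_size=2):
    """One training pass over one sentence. gold_is_valid(prefix) plays the
    role of the oracle; with a dependency oracle many prefixes may be valid."""
    beam = [()]
    for _ in range(n_steps):
        expanded = [p + (a,) for p in beam for a in actions]
        expanded.sort(key=lambda p: score(weights, p), reverse=True)
        beam = expanded[:beam_size]
        if not any(gold_is_valid(p) for p in beam):
            # Early update: no valid prefix survived the beam, so update
            # against the highest-scoring valid prefix of the same length.
            gold = max((p for p in expanded if gold_is_valid(p)),
                       key=lambda p: score(weights, p))
            for f, v in features(gold).items():
                weights[f] += v
            for f, v in features(beam[0]).items():
                weights[f] -= v
            break
    return weights
```

With a beam of one and a single gold sequence, the update fires exactly where the predicted prefix first diverges from the gold one; shared prefix features cancel.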
{
"text": "Results on the standard CCGBank tests show that our parser achieves absolute labeled F-score gains of up to 0.5 over the shift-reduce parser of Zhang and Clark (2011) ; and up to 1.05 and 0.64 over the normal-form and hybrid models of Clark and Curran (2007) , respectively.",
"cite_spans": [
{
"start": 144,
"end": 166,
"text": "Zhang and Clark (2011)",
"ref_id": "BIBREF29"
},
{
"start": 235,
"end": 258,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section describes how shift-reduce techniques can be applied to CCG, following Zhang and Clark (2011). First we describe the deterministic process which a parser would follow when tracing out a single, correct derivation; then we describe how a model of normal-form derivations - or, more accurately, a sequence of shift-reduce actions leading to a normal-form derivation - can be used with beam-search to develop a nondeterministic parser which selects the highest scoring sequence of actions. Note this section only describes a normal-form derivation model for shift-reduce parsing. Section 3 explains how we extend the approach to dependency models.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "Zhang and Clark (2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce with Beam-Search",
"sec_num": "2"
},
{
"text": "The shift-reduce algorithm adapted to CCG is similar to that of shift-reduce dependency parsing (Yamada and Matsumoto, 2003; Nivre and McDonald, 2008; Zhang and Clark, 2008; Huang and Sagae, 2010). Following Zhang and Clark (2011), we define each item in the parser as a pair \u27e8s, q\u27e9, where q is a queue of remaining input, consisting of words and a set of possible lexical categories for each word (with q_0 being the front word), and s is the stack that holds subtrees s_0, s_1, ... (with s_0 at the top). Subtrees on the stack are partial derivations that have been built as part of the shift-reduce process. SHIFT, REDUCE and UNARY are the three types of actions that can be applied to an item. A SHIFT action shifts one of the lexical categories of q_0 onto the stack. A REDUCE action combines s_0 and s_1 according to a CCG combinatory rule, producing a new category on the top of the stack. A UNARY action applies either a type-raising or type-changing rule to the stack-top category s_0. Figure 1 shows a deterministic example for the sentence Mr. President visited Paris, giving a single sequence of shift-reduce actions which produces a correct derivation (i.e. one producing the correct set of dependencies). Starting with the initial item \u27e8s, q\u27e9_0 (row 0), which has an empty stack and a full queue, a total of nine actions are applied to produce the complete derivation.",
"cite_spans": [
{
"start": 96,
"end": 124,
"text": "(Yamada and Matsumoto, 2003;",
"ref_id": "BIBREF26"
},
{
"start": 125,
"end": 151,
"text": "Nivre and Mc-Donald, 2008;",
"ref_id": null
},
{
"start": 152,
"end": 174,
"text": "Zhang and Clark, 2008;",
"ref_id": "BIBREF28"
},
{
"start": 175,
"end": 197,
"text": "Huang and Sagae, 2010)",
"ref_id": "BIBREF17"
},
{
"start": 210,
"end": 232,
"text": "Zhang and Clark (2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 1004,
"end": 1012,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Shift-Reduce with Beam-Search",
"sec_num": "2"
},
{
"text": "step | stack (s_n, ..., s_1, s_0) | queue (q_0, q_1, ..., q_m) | action\n0 | | Mr. President visited Paris |\n1 | N/N | President visited Paris | SHIFT\n2 | N/N N | visited Paris | SHIFT\n3 | N | visited Paris | REDUCE\n4 | NP | visited Paris | UNARY\n5 | NP (S[dcl]\\NP)/NP | Paris | SHIFT\n6 | NP (S[dcl]\\NP)/NP N | | SHIFT\n7 | NP (S[dcl]\\NP)/NP NP | | UNARY\n8 | NP S[dcl]\\NP | | REDUCE\n9 | S[dcl] | | REDUCE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce with Beam-Search",
"sec_num": "2"
},
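The action semantics just described can be replayed with a minimal hand-coded state machine. This is a sketch, not the authors' parser: the category produced by each REDUCE/UNARY step is supplied by hand below, whereas a real parser would derive it from the CCG combinatory rules.

```python
# Minimal illustrative shift-reduce state for the Figure 1 example.
def step(stack, queue, action, cat):
    stack, queue = list(stack), list(queue)
    if action == "SHIFT":      # shift a lexical category of q0 onto the stack
        stack.append(cat)
        queue.pop(0)
    elif action == "REDUCE":   # combine s1 and s0 into a new category
        stack.pop(); stack.pop()
        stack.append(cat)
    elif action == "UNARY":    # type-raise or type-change s0
        stack.pop()
        stack.append(cat)
    return stack, queue

state = ([], ["Mr.", "President", "visited", "Paris"])
for action, cat in [("SHIFT", "N/N"), ("SHIFT", "N"), ("REDUCE", "N"),
                    ("UNARY", "NP"), ("SHIFT", "(S[dcl]\\NP)/NP"),
                    ("SHIFT", "N"), ("UNARY", "NP"),
                    ("REDUCE", "S[dcl]\\NP"), ("REDUCE", "S[dcl]")]:
    state = step(state[0], state[1], action, cat)
# The nine actions leave a single complete category on the stack.
```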
{
"text": "Applying beam-search to a statistical shiftreduce parser is a straightforward extension to the deterministic example. At each step, a beam is used to store the top-k highest-scoring items, resulting from expanding all items in the previous beam. An item becomes a candidate output once it has an empty queue, and the parser keeps track of the highest scored candidate output and returns the best one as the final output. Compared with greedy local-search (Nivre and Scholz, 2004) , the use of a beam allows the parser to explore a larger search space and delay difficult ambiguity-resolving decisions by considering multiple items in parallel.",
"cite_spans": [
{
"start": 455,
"end": 479,
"text": "(Nivre and Scholz, 2004)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce with Beam-Search",
"sec_num": "2"
},
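The decoding loop described above can be sketched generically. The item encoding and scoring function below are invented placeholders (a real parser scores feature vectors over stack/queue configurations), and UNARY is omitted to keep the toy search finite.

```python
# Skeletal beam-search decoder: an item is (actions, stack size, queue size);
# an item with an empty queue and a single stack element is a candidate
# output, and the best-scoring candidate seen so far is returned at the end.
import heapq

def decode(n_words, score, beam_size=4):
    beam = [((), 0, n_words)]
    best = None
    while beam:
        successors = []
        for acts, stk, q in beam:
            if q > 0:                                   # SHIFT
                successors.append((acts + ("SHIFT",), stk + 1, q - 1))
            if stk >= 2:                                # REDUCE
                successors.append((acts + ("REDUCE",), stk - 1, q))
        finished = [it for it in successors if it[1] == 1 and it[2] == 0]
        for it in finished:                             # track best output
            if best is None or score(it) > score(best):
                best = it
        beam = heapq.nlargest(beam_size,
                              [it for it in successors if it not in finished],
                              key=score)
    return best
```

For a three-word input every returned item uses three SHIFTs and two REDUCEs; the beam only affects which such sequence wins.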
{
"text": "We refer to the shift-reduce model of Zhang and Clark (2011) as the normal-form model, where the oracle for each sentence specifies a unique sequence of gold-standard actions which produces the corresponding normal-form derivation. No dependency structures are involved at training and test time, except for evaluation. In the next section, we describe a dependency oracle which considers all sequences of actions producing a goldstandard dependency structure to be correct.",
"cite_spans": [
{
"start": 38,
"end": 60,
"text": "Zhang and Clark (2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce with Beam-Search",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Mr. President visited Paris N /N N (S [dcl ]\\NP )/NP NP > > N S [dcl ]\\NP >TC NP < S [dcl ] (a) Mr. President visited Paris N /N N (S [dcl ]\\NP )/NP NP > N >TC NP >T S [dcl ]/(S [dcl ]\\NP ) >B S [dcl ]/NP > S [dcl ]",
"eq_num": "(b)"
}
],
"section": "Shift-Reduce with Beam-Search",
"sec_num": "2"
},
{
"text": "Figure 2: Two derivations leading to the same dependency structure. TC denotes type-changing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shift-Reduce with Beam-Search",
"sec_num": "2"
},
{
"text": "Categories in CCG are either basic (such as NP and PP) or complex (such as (S[dcl]\\NP)/NP). Each complex category in the lexicon defines one or more predicate-argument relations, which can be realized as a predicate-argument dependency when the corresponding argument slot is consumed. For example, the transitive verb category above defines two relations: one for the subject NP and one for the object NP. In this paper a CCG predicate-argument dependency is a 4-tuple:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Model",
"sec_num": "3"
},
{
"text": "\u27e8h_f, f, s, h_a\u27e9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Model",
"sec_num": "3"
},
{
"text": "where h_f is the lexical item of the lexical category expressing the relation; f is the lexical category; s is the argument slot; and h_a is the head word of the argument. Since the lexical items in a dependency are indexed by their sentence positions, all dependencies for a sentence form a set, which is referred to as a CCG dependency structure. Clark and Curran (2007) contains a detailed description of dependency structures. Fig. 2 shows an example demonstrating spurious ambiguity in relation to a CCG dependency structure. In both derivations, the first two lexical categories are combined using forward application (>) and the following dependency is realized: \u27e8Mr., N/N_1, 1, President\u27e9. In the normal-form derivation (a), the dependency \u27e8visited, (S\\NP_1)/NP_2, 2, Paris\u27e9 is created by combining the transitive verb category with the object NP using forward application. One final dependency, \u27e8visited, (S\\NP_1)/NP_2, 1, President\u27e9, is realized when the root node S[dcl] is produced through backward application (<). Fig. 2(b) shows a non-normal-form derivation which uses type-raising (T) and composition (B) (which are not required to derive the correct dependency structure). In this alternative derivation, the dependency \u27e8visited, (S\\NP_1)/NP_2, 1, President\u27e9 is realized using forward composition (B), and \u27e8visited, (S\\NP_1)/NP_2, 2, Paris\u27e9 is realized when the",
"cite_spans": [
{
"start": 351,
"end": 374,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 433,
"end": 439,
"text": "Fig. 2",
"ref_id": "FIGREF1"
},
{
"start": 1031,
"end": 1040,
"text": "Fig. 2(b)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Dependency Model",
"sec_num": "3"
},
{
"text": "S[dcl] root is produced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Model",
"sec_num": "3"
},
{
"text": "The chart-based dependency model of Clark and Curran (2007) treats all derivations as hidden, and defines a probabilistic model for a dependency structure by summing probabilities of all derivations leading to a particular structure. Features are defined over both derivations and CCG predicate-argument dependencies. We follow a similar approach, but rather than define a probabilistic model (which requires summing), we define a linear model over sequences of shift-reduce actions, as for the normal-form shift-reduce model. However, the difference compared to the normal-form model is that we do not assume a single gold-standard sequence of actions.",
"cite_spans": [
{
"start": 36,
"end": 59,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Model",
"sec_num": "3"
},
{
"text": "Similar to Goldberg and Nivre (2012), we define an oracle which determines, for a gold-standard dependency structure, G, what the valid transition sequences are (i.e. those sequences corresponding to derivations leading to G). More specifically, the oracle can determine, given G and an item \u27e8s, q\u27e9, what the valid actions are for that item (i.e. what actions can potentially lead to G, starting with \u27e8s, q\u27e9 and the dependencies already built on s). However, there can be exponentially many valid action sequences for G, which we represent efficiently using a packed parse forest. We show how the forest can be used, during beam-search decoding, to determine the valid actions for a parse item (Section 3.2). We also show, in Section 3.3, how perceptron training with early-update (Collins and Roark, 2004) can be used in this setting.",
"cite_spans": [
{
"start": 775,
"end": 800,
"text": "(Collins and Roark, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Model",
"sec_num": "3"
},
{
"text": "A CCG parse forest efficiently represents an exponential number of derivations. Following Clark and Curran (2007) (which builds on Miyao and Tsujii (2002)), and using the same notation, we define a CCG parse forest \u03a6 as a tuple \u27e8C, D, R, \u03b3, \u03b4\u27e9, where C is a set of conjunctive nodes and D is a set of disjunctive nodes.",
"cite_spans": [
{
"start": 90,
"end": 113,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
},
{
"start": 131,
"end": 154,
"text": "Miyao and Tsujii (2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Oracle Forest",
"sec_num": "3.1"
},
{
"text": "Algorithm 1 (Clark and Curran, 2007). Input: a packed forest \u27e8C, D, R, \u03b3, \u03b4\u27e9, with dmax(c) and dmax(d) already computed.\n1: function MAIN\n2: for each dr \u2208 R s.t. dmax(dr) = |G| do\n3: MARK(dr)\n4: procedure MARK(d)\n5: mark d as a correct node\n6: for each c \u2208 \u03b3(d) do\n7: if dmax(c) == dmax(d) then\n8: mark c as a correct node\n9: for each d' \u2208 \u03b4(c) do\n10: if d' has not been visited then\n11: MARK(d')",
"cite_spans": [
{
"start": 12,
"end": 36,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Oracle Forest",
"sec_num": "3.1"
},
{
"text": "Conjunctive nodes are individual CCG categories in \u03a6, and are either obtained from the lexicon, or by combining two disjunctive nodes using a CCG rule, or by applying a unary rule to a disjunctive node. Disjunctive nodes are equivalence classes of conjunctive nodes. Two conjunctive nodes are equivalent iff they have the same category, head and unfilled dependencies (i.e. they will lead to the same derivation, and produce the same dependencies, in any future parsing). R \u2286 D is a set of root disjunctive nodes. \u03b3 : D \u2192 2^C is the conjunctive child function and \u03b4 : C \u2192 2^D is the disjunctive child function. The former returns the set of all conjunctive nodes of a disjunctive node, and the latter returns the disjunctive child nodes of a conjunctive node. The dependency model requires all the conjunctive and disjunctive nodes of \u03a6 that are part of the derivations leading to a gold-standard dependency structure G. We refer to such derivations as correct derivations and the packed forest containing all these derivations as the oracle forest, denoted as \u03a6_G, which is a subset of \u03a6. It is prohibitive to enumerate all correct derivations, but it is possible to identify, from \u03a6, all the conjunctive and disjunctive nodes that are part of \u03a6_G. Clark and Curran (2007) gives an algorithm for doing so, which we use here. The main intuition behind the algorithm is that a gold-standard dependency structure decomposes over derivations; thus gold-standard dependencies realized at conjunctive nodes can be counted when \u03a6 is built, and all nodes that are part of \u03a6_G can then be marked out of \u03a6 by traversing it top-down. A key idea in understanding the algorithm is that dependencies are created when disjunctive nodes are combined, and hence are associated with, or \"live on\", conjunctive nodes in the forest.",
"cite_spans": [
{
"start": 1303,
"end": 1326,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Oracle Forest",
"sec_num": "3.1"
},
{
"text": "Following Clark and Curran (2007), we also define the following three values, where the first decomposes only over local rule productions, while the other two decompose over derivations:",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Oracle Forest",
"sec_num": "3.1"
},
{
"text": "cdeps(c) = * if \u2203 \u03c4 \u2208 deps(c) s.t. \u03c4 \u2209 G; |deps(c)| otherwise.\ndmax(c) = * if cdeps(c) == *; * if dmax(d) == * for some d \u2208 \u03b4(c); \u03a3_{d \u2208 \u03b4(c)} dmax(d) + cdeps(c) otherwise.\ndmax(d) = max{dmax(c) | c \u2208 \u03b3(d)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Oracle Forest",
"sec_num": "3.1"
},
{
"text": "deps(c) is the set of all dependencies on conjunctive node c, and cdeps(c) counts the number of correct dependencies on c. dmax(c) is the maximum number of correct dependencies over any sub-derivation headed by c and is calculated recursively; dmax(d) returns the same value for a disjunctive node. In all cases, a special value * indicates the presence of incorrect dependencies. To obtain the oracle forest, we first pre-compute dmax(c) and dmax(d) for all d and c in \u03a6 when \u03a6 is built using CKY, which are then used by Algorithm 1 to identify all the conjunctive and disjunctive nodes in \u03a6 G .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Oracle Forest",
"sec_num": "3.1"
},
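The dmax recursion and the top-down marking pass (Algorithm 1) can be rendered executable on a toy forest. The node classes and the tiny forest below are invented for this sketch, and the real parser memoizes dmax values while the forest is built with CKY rather than recomputing them as done here.

```python
# Executable rendering of cdeps/dmax and the top-down marking of Algorithm 1
# on a hand-built toy forest. STAR stands for the special value *.
STAR = "*"

class Conj:   # conjunctive node: category instance with deps and child d-nodes
    def __init__(self, deps=(), children=()):
        self.deps, self.children = set(deps), list(children)

class Disj:   # disjunctive node: equivalence class of conjunctive nodes
    def __init__(self, options):
        self.options = list(options)

def cdeps(c, gold):
    # * if any dependency on c is incorrect, else the number of correct deps.
    return STAR if any(t not in gold for t in c.deps) else len(c.deps)

def dmax_c(c, gold):
    if cdeps(c, gold) == STAR:
        return STAR
    kids = [dmax_d(d, gold) for d in c.children]
    return STAR if STAR in kids else sum(kids) + cdeps(c, gold)

def dmax_d(d, gold):
    vals = [v for c in d.options if (v := dmax_c(c, gold)) != STAR]
    return max(vals) if vals else STAR

def mark(roots, gold):
    """Keep exactly the nodes lying on derivations that realize all of gold."""
    correct = set()
    def visit(d):
        correct.add(id(d))
        for c in d.options:
            if dmax_c(c, gold) == dmax_d(d, gold):
                correct.add(id(c))
                for d2 in c.children:
                    if id(d2) not in correct:
                        visit(d2)
    for dr in roots:
        if dmax_d(dr, gold) == len(gold):
            visit(dr)
    return correct

# Toy forest: one leaf, two alternative roots; only one realizes the gold dep.
gold = {"visited->Paris"}
leaf = Disj([Conj()])
good = Conj(deps={"visited->Paris"}, children=[leaf])   # correct derivation
bad = Conj(deps={"visited->Mr."}, children=[leaf])      # wrong dependency
root = Disj([good, bad])
correct = mark([root], gold)
```

Only the root, the correct conjunctive alternative, and the leaf end up marked; the alternative with the wrong dependency gets dmax = * and is pruned from the oracle forest.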
{
"text": "We observe that the canonical shift-reduce algorithm (as demonstrated in Fig. 1) applied to a single parse tree exactly resembles a bottom-up post-order traversal of that tree. As an example, consider the derivation in Fig. 2a, where the corresponding sequence of actions is:",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 79,
"text": "Fig. 1",
"ref_id": "FIGREF0"
},
{
"start": 217,
"end": 224,
"text": "Fig. 2a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "sh N/N, sh N, re N, un NP, sh (S[dcl]\\NP)/NP, sh NP, re S[dcl]\\NP, re S[dcl].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
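This correspondence is easy to check programmatically: a post-order traversal of a single derivation tree reads off its shift-reduce action sequence. The tuple tree encoding below is invented for the sketch.

```python
# Post-order traversal of a derivation tree yields its action sequence.
def actions(tree):
    """tree: ("lex", cat) | ("un", cat, child) | ("re", cat, left, right)."""
    kind, cat = tree[0], tree[1]
    if kind == "lex":                     # leaf: shift the lexical category
        return [("sh", cat)]
    if kind == "un":                      # unary: child first, then the rule
        return actions(tree[2]) + [("un", cat)]
    # binary: left child, right child, then the reduce (post-order)
    return actions(tree[2]) + actions(tree[3]) + [("re", cat)]

# Figure 2a's derivation for "Mr. President visited Paris":
fig2a = ("re", "S[dcl]",
         ("un", "NP", ("re", "N", ("lex", "N/N"), ("lex", "N"))),
         ("re", "S[dcl]\\NP", ("lex", "(S[dcl]\\NP)/NP"), ("lex", "NP")))
```

Running `actions(fig2a)` reproduces the eight-action sequence listed in the text.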
{
"text": "The order of traversal is left-child, right-child and parent. For a single parse, the corresponding shift-reduce action sequence is unique, and for a given item this canonical order restricts the possible derivations that can be formed using further actions. We now extend this observation to the more general case of an oracle forest, where there may be more than one gold-standard action for a given item.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "Definition 1. Given a gold-standard dependency",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "(a) two subtrees with roots N and S[dcl]\\NP; (b) three subtrees with roots N/N, N and S[dcl]\\NP, over \"Mr. President visited Paris\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "Figure 3: Example subtrees on two stacks, with two subtrees in (a) and three in (b); roots of subtrees are in bold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "structure G, an oracle forest \u03a6_G, and an item \u27e8s, q\u27e9, we say s is a realization of G, denoted s G, if |s| = 1, q is empty and the single derivation on s is correct. If |s| > 0 and the subtrees on s can lead to a correct derivation in \u03a6_G using further actions, we say s is a partial-realization of G, denoted as s \u223c G. And we define s \u223c G for |s| = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "As an example, assume that \u03a6_G contains only the derivation in Fig. 2a; then a stack containing the two subtrees in Fig. 3a is a partial-realization, while a stack containing the three subtrees in Fig. 3b is not. Note that each of the three subtrees in Fig. 3b is present in \u03a6_G; however, these subtrees cannot be combined into the single correct derivation, since the correct sequence of shift-reduce actions must first combine the lexical categories for Mr. and President before shifting the lexical category for visited.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Fig. 2a",
"ref_id": "FIGREF1"
},
{
"start": 117,
"end": 124,
"text": "Fig. 3a",
"ref_id": null
},
{
"start": 198,
"end": 205,
"text": "Fig. 3b",
"ref_id": null
},
{
"start": 254,
"end": 261,
"text": "Fig. 3b",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "We denote an action as a pair (x, c), where x \u2208 {SHIFT, REDUCE, UNARY} and c is the root of the subtree resulting from that action. For all three types of actions, c also corresponds to a unique conjunctive node in the complete forest \u03a6; and we use c_{s_i} to denote the conjunctive node in \u03a6 corresponding to subtree s_i on the stack. Let \u27e8s', q'\u27e9 = \u27e8s, q\u27e9 \u2022 (x, c) be the resulting item from applying the action (x, c) to \u27e8s, q\u27e9; and let the set of all possible actions for \u27e8s, q\u27e9 be X_{\u27e8s,q\u27e9} = {(x, c) | (x, c) is applicable to \u27e8s, q\u27e9}. Definition 3. Given \u03a6_G, the dependency oracle function f_d is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "f_d(\u27e8s, q\u27e9, (x, c), \u03a6_G) = true if s' \u223c G or s' G, and false otherwise, where (x, c) \u2208 X_{\u27e8s,q\u27e9} and \u27e8s', q'\u27e9 = \u27e8s, q\u27e9 \u2022 (x, c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "The pseudocode in Algorithm 2 implements f d . It determines, for a given item, whether an applicable action is valid in \u03a6 G .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "It is trivial to determine the validity of a SHIFT action for the initial item, \u27e8s, q\u27e9_0, since the SHIFT action is valid iff its category matches the gold-standard lexical category of the first word in the sentence. For any subsequent SHIFT action (SHIFT, c) to be valid, the necessary condition is c \u2261 c^lex_0, where c^lex_0 denotes the gold-standard lexical category of the front word in the queue, q_0 (line 3). However, this condition is not sufficient; a counterexample is the case where all the gold-standard lexical categories for the sentence in Figure 2 are shifted in succession. Hence, in general, the conditions under which an action is valid are more complex than the trivial case above.",
"cite_spans": [],
"ref_spans": [
{
"start": 553,
"end": 559,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "First, suppose there is only one correct derivation in \u03a6_G. A SHIFT action (SHIFT, c^lex_0) is valid whenever c_{s_0} (the conjunctive node in \u03a6_G corresponding to the subtree s_0 on the stack) and c^lex_0 (the conjunctive node in \u03a6_G corresponding to the next gold-standard lexical category from the queue) are both dominated by the conjunctive node parent p of c_{s_0} in \u03a6_G. A REDUCE action (REDUCE, c) is valid if c matches the category of the conjunctive node parent of c_{s_0} and c_{s_1} in \u03a6_G. A UNARY action (UNARY, c) is valid if c matches the conjunctive node parent of c_{s_0} in \u03a6_G. We now generalize the case where \u03a6_G contains a single correct parse to the case of an oracle forest, where each parent p is replaced by a set of conjunctive nodes in \u03a6_G. if c \u2262 c^lex_0 then (c not gold lexical category) 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "return false 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "else if c \u2261 c^lex_0 and |s| = 0 then (the initial item) 6: return true 7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "else if c \u2261 c^lex_0 and |s| \u2260 0 then 8: if |s| = 1 and c \u2208 \u03a6_G then (s is non-frontier) 19:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "compute R(c_{s_1}, c_{s_0}) 9: return R(c_{s_1}, c_{s_0}) \u2260 \u2205 10: if x is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "compute R(c_{s_1}, c_{s_0}) 20: return R(c_{s_1}, c_{s_0}) \u2260 \u2205",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "A key to defining the dependency oracle function is the notion of a shared ancestor set. Intuitively, shared ancestor sets are built up through shift actions, and contain sets of nodes which can potentially become the results of reduce or unary actions. A further intuition is that shared ancestor sets define the space of possible correct derivations, and nodes in these sets are \"ticked off\" when reduce and unary actions are applied, as a single correct derivation is built through the shift-reduce process (corresponding to a bottom-up post-order traversal of the derivation). The following definition shows how the dependency oracle function builds shared ancestor sets for each action type. The base case for Definition 7 is when the goldstandard lexical category of the first word in the sentence has been shifted, which creates an empty shared ancestor set. Furthermore, the shared ancestor set is always empty when the stack is a frontier stack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "The dependency oracle algorithm checks the validity of applicable actions. A SHIFT action is valid if R(c s 1 , c s 0 ) = \u2205 for the resulting stack s . A valid REDUCE action consumes s 1 and s 0 . For the new node, its shared ancestor set is the subset of the conjunctive nodes in R(c s 2 , c s 1 ) which dominate the resulting conjunctive node of a valid REDUCE action. The UNARY case for a frontier stack is trivial: any UNARY action applicable to s in \u03a6 G is valid. For a non-frontier stack, the UNARY case is similar to REDUCE except the resulting shared ancestor set is a subset of R(c s 1 , c s 0 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "We now turn to the problem of finding the shared ancestor sets. In practice, we do not do this by traversing \u03a6 G top-down from the conjunctive nodes in p L (c s 0 ) on-the-fly to find each member of R. Instead, when we build \u03a6 G in bottom-up topological order, we pre-compute the set of reachable disjunctive nodes of each conjunctive node c in \u03a6 G as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "D(c) = \u03b4(c) \u222a (\u222a c \u2208\u03b3(d),d\u2208\u03b4(c) (D(c )))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "Each D is implemented as a hash map, which allows us to test the membership of one potential conjunctive node in O(1) time. For example, a conjunctive node c \u2208 p L (c s 0 ) is reachable from c lex 0 if there is a disjunctive node d \u2208 D(c) s.t. c lex 0 \u2208 \u03b3(d). With this implementation, the complexity of checking each valid SHIFT action is then O(|p L (c s 0 )|).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dependency Oracle Algorithm",
"sec_num": "3.2"
},
{
"text": "We use the averaged perceptron (Collins, 2002) to train a global linear model and score each action. The normal-form model of Zhang and Clark (2011) uses an early update mechanism (Collins and Roark, 2004), where decoding is stopped to update model weights whenever the single gold action falls outside the beam. In our parser, there can be multiple gold items in a beam. One option would be to apply early update whenever at least\nAlgorithm 3 Dependency Model Training\nInput: (y, G) and beam size k\n1: w \u2190 0; B0 \u2190 \u2205; i \u2190 0\n2: B0.push(\u27e8s, q\u27e90) \u25b7 the initial item\n3: cand \u2190 \u2205 \u25b7 candidate output priority queue\n4: gold \u2190 \u2205 \u25b7 gold output priority queue\n5: while Bi \u2260 \u2205 do\n6: for each \u27e8s, q\u27e9 \u2208 Bi do\n7: if |q| = 0 then \u25b7 candidate output\n8: cand.push(\u27e8s, q\u27e9)\n9: if s \u22a8 G then \u25b7 s is a realization of G\n10: gold.push(\u27e8s, q\u27e9)\n11: expand \u27e8s, q\u27e9 into Bi+1\n12: Bi+1 \u2190 Bi+1[1 : k] \u25b7 apply beam\n13: if \u03a0G \u2260 \u2205, \u03a0G \u2229 Bi+1 = \u2205 and cand[0] \u22ad G then\n14: w \u2190 w + \u03c6(\u03a0G[0]) \u2212 \u03c6(Bi+1[0]) \u25b7 early update\n15: return\n16: i \u2190 i + 1 \u25b7 continue to next step\n17: if cand[0] \u22ad G then \u25b7 final update\n18: w \u2190 w + \u03c6(gold[0]) \u2212 \u03c6(cand[0])",
"cite_spans": [
{
"start": 31,
"end": 46,
"text": "(Collins, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 126,
"end": 148,
"text": "Zhang and Clark (2011)",
"ref_id": "BIBREF29"
},
{
"start": 180,
"end": 205,
"text": "(Collins and Roark, 2004)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "one of these gold items falls outside the beam. However, this may not be a true violation of the gold-standard (Huang et al., 2012) . Thus, we use a relaxed version of early update, in which all goldstandard actions must fall outside the beam before an update is performed. This update mechanism is provably correct under the violation-fixing framework of Huang et al. (2012) .",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "(Huang et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 356,
"end": 375,
"text": "Huang et al. (2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "Let (y, G) be a training sentence paired with its gold-standard dependency structure and let \u03a0 \u27e8s,q\u27e9 be the following set for an item \u27e8s, q\u27e9 : { \u27e8s, q\u27e9 \u2022 (x, c) | f d ( \u27e8s, q\u27e9 , (x, c) , \u03a6 G ) = true }. \u03a0 \u27e8s,q\u27e9 contains all correct items at step i + 1 obtained by expanding \u27e8s, q\u27e9 . Let the set of all correct items at a step i + 1 be: 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "\u03a0 G = s,q \u2208B i \u03a0 s,q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "Algorithm 3 shows the pseudocode for training the dependency model with early update for one input (y, G). The score of an item s, q is calculated as w \u2022 \u03c6( s, q ) with respect to the current model w, where \u03c6( s, q ) is the feature vector for the item. At step i, all items are expanded and added onto the next beam B i+1 , and the top-k retained. Early update is applied when all gold items first fall outside the beam, and any candidate output is incorrect (line 14). Since there are potentially many gold items, and one gold item is required for the perceptron update, a decision needs to be made regarding which gold item to update against. We choose to reward the highest scoring gold item, in line with the violation-fixing framework; and penalize the highest scoring incorrect item, using the standard perceptron update. A final update is performed if no more expansions are possible but the final output is incorrect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.3"
},
{
"text": "We implement our shift-reduce parser on top of the core C&C code base (Clark and Curran, 2007) and evaluate it against the shift-reduce parser of Zhang and Clark (2011) (henceforth Z&C) and the chartbased normal-form and hybrid models of Clark and Curran (2007) . For all experiments, we use CCGBank with the standard split: sections 2-21 for training (39,604 sentences), section 00 for development (1,913 sentences) and section 23 (2,407 sentences) for testing.",
"cite_spans": [
{
"start": 70,
"end": 94,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF4"
},
{
"start": 238,
"end": 261,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The way that the CCG grammar is implemented in C&C has some implications for our parser. First, unlike Z&C, which uses a context-free cover (Fowler and Penn, 2010) and hence is able to use all sentences in the training data, we are only able to use 36,036 sentences. The reason is that the grammar in C&C does not have complete coverage of CCGBank, due to the fact that e.g. not all rules in CCGBank conform to the combinatory rules of CCG. Second, our parser uses the unification mechanism from C&C to output dependencies directly, and hence does not need a separate postprocessing step to convert derivations into CCG dependencies, as required by Z&C.",
"cite_spans": [
{
"start": 140,
"end": 163,
"text": "(Fowler and Penn, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The feature templates of our model consist of all of those in Z&C, except the ones which require lexical heads to come from either the left or right child, as such features are incompatible with the head passing mechanism used by C&C. Each Z&C template is defined over a parse item, and captures various aspects of the stack and queue context. For example, one template returns the top category on the stack plus its head word, together with the first word and its POS tag on the queue. Another template returns the second category on the stack, together with the POS tag of its head word. Every Z&C feature is defined as a pair, consisting of an instantiated context template and a parse action. In addition, we use all the CCG predicate-argument dependency features from Clark and Curran (2007) , which contribute to the score of a REDUCE action when dependencies are realized. Detailed descriptions of all the templates in our model can be found in the respective papers. We run 20 training iterations and the resulting model contains 16.5M features with a nonzero weight.",
"cite_spans": [
{
"start": 773,
"end": 796,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use 10-fold cross validation for POS tagging and supertagging the training data, and automatically assigned POS tags for all experiments. A probability cut-off value of 0.0001 for the \u03b2 parameter in the supertagger is used for both training and testing. The \u03b2 parameter determines how many lexical categories are assigned to each word; \u03b2 = 0.0001 is a relatively small value which allows in a large number of categories, compared to the default value used in Clark and Curran (2007) . For training only, if the gold-standard lexical category is not supplied by the supertagger for a particular word, it is added to the list of categories.",
"cite_spans": [
{
"start": 462,
"end": 485,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The beam size was tuned on the development set, and a value of 128 was found to achieve a reasonable balance of accuracy and speed; hence this value was used for all experiments. Since C&C always enforces non-fragmentary output (i.e. it can only produce spanning analyses), it fails on some sentences in the development and test sets, and thus we also evaluate on the reduced sets, follow-ing Clark and Curran (2007) . Our parser does not fail on any sentences because it permits fragmentary output (those cases where there is more than one subtree left on the final stack). The results for Z&C, and the C&C normal-form and hybrid models, are taken from Zhang and Clark (2011) . Table 1 shows the accuracies of all parsers on the development set, in terms of labeled precision and recall over the predicate-argument dependencies in CCGBank. On both the full and reduced sets, our parser achieves the highest F-score. In comparison with C&C, our parser shows significant increases across all metrics, with 0.57% and 1.06% absolute F-score improvements over the hybrid and normal-form models, respectively. Another major improvement over the other two parsers is in sentence level accuracy, LSent, which measures the number of sentences for which the dependency structure is completely correct. Table 1 also shows that our parser has improved recall over Z&C at some expense of precision. To probe this further we compare labeled precision and recall relative to dependency length, as measured by the distance between the two words in a dependency, grouped into bins of 5 values. Fig. 4 shows clearly that Z&C favors precision over recall, giving higher precision scores for almost all dependency lengths compared to our parser. In terms of recall (Fig. 4b ), our parser outperforms Z&C over all dependency lengths, especially for longer dependencies (x \u2265 20). When compared with C&C, the recall of the Z&C parser drops quickly for dependency lengths over 10. 
While our parser also suffers from this problem, it is less severe and is able to achieve higher recall at x \u2265 30. Table 2 compares our parser with Z&C and the C&C hybrid model, for the most frequent dependency relations. While our parser achieved lower precision than Z&C, it is more balanced and gives higher recall for all of the dependency relations except the last one, and higher F-score for over half of them. Table 3 presents the final test results on Section 23. Again, our parser achieves the highest scores across all metrics (for both the full and reduced test sets), except for precision and lexical category assignment, where Z&C performed better.",
"cite_spans": [
{
"start": 393,
"end": 416,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
},
{
"start": 654,
"end": 676,
"text": "Zhang and Clark (2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 679,
"end": 686,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1293,
"end": 1300,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 1578,
"end": 1584,
"text": "Fig. 4",
"ref_id": "FIGREF2"
},
{
"start": 1746,
"end": 1754,
"text": "(Fig. 4b",
"ref_id": "FIGREF2"
},
{
"start": 2073,
"end": 2080,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 2375,
"end": 2382,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "4.1"
},
{
"text": "We have presented a dependency model for a shiftreduce CCG parser, which fully aligns CCG parsing with the left-to-right, incremental nature of a shiftreduce parser. Our work is in part inspired by the dependency models of Clark and Curran (2007) and, in the use of a dependency oracle, is close in spirit to that of Goldberg and Nivre (2012) . The difference is that the Goldberg and Nivre parser builds, and scores, dependency structures directly, whereas our parser uses a unification mechanism to create dependencies, and scores the CCG derivations, allowing great flexibility in terms of what dependencies can be realized. Another related work is Yu et al. (2013) , which introduced a similar technique to deal with spurious ambiguity in MT. Finally, there may be potential to integrate the techniques of Auli and Lopez (2011), which currently represents the state-of-the-art in CCGBank parsing, into our parser.",
"cite_spans": [
{
"start": 223,
"end": 246,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF4"
},
{
"start": 317,
"end": 342,
"text": "Goldberg and Nivre (2012)",
"ref_id": "BIBREF12"
},
{
"start": 652,
"end": 668,
"text": "Yu et al. (2013)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "See Hockenmaier (2003) andClark and Curran (2007) for a description of CCG rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Under the hypergraph framework(Gallo et al., 1993;Huang and Chiang, 2005), a conjunctive node corresponds to a hyperedge and a disjunctive node corresponds to the head of a hyperedge or hyperedge bundle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The derivation is \"upside down\", following the convention used for CCG, where the root is S [dcl ]. We use sh, re and un to denote the three types of shift-reduce action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Strictly speaking, the conjunctive node parent is a parent of the disjunctive node containing the conjunctive node cs 0 . We will continue to use this shorthand for parents of conjunctive nodes throughout the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In Algorithm 3 we abuse notation by using \u03a0G[0] to denote the highest scoring gold item in the set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their helpful comments. Wenduan Xu is fully supported by the Carnegie Trust and receives additional funding from the Cambridge Trusts. Stephen Clark is supported by ERC Starting Grant DisCoTex (306920) and EPSRC grant EP/I037512/1. Yue Zhang is supported by Singapore MOE Tier2 grant T2MOE201301.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "LP % (o) LP % (z) LP % (c) LR % (o) LR % (z) LR % (c) LF % (o) LF % (z) LF % (c) freq",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LP % (o) LP % (z) LP % (c) LR % (o) LR % (z) LR % (c) LF % (o) LF % (z) LF % (c) freq.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "References",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ACL 2011",
"volume": "",
"issue": "",
"pages": "470--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References Michael Auli and Adam Lopez. 2011. A compari- son of loopy belief propagation and dual decompo- sition for integrated CCG supertagging and parsing. In Proc. ACL 2011, pages 470-480, Portland, OR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of COLING/ACL",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Briscoe and John Carroll. 2006. Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank. In Proc. of COLING/ACL, pages 41-48, Sydney, Australia.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Partial training for a lexicalized-grammar parser",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. NAACL-06",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R. Curran. 2006. Partial training for a lexicalized-grammar parser. In Proc. NAACL-06, pages 144-151, New York, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Widecoverage efficient statistical parsing with CCG and log-linear models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "4",
"pages": "493--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and James R. Curran. 2007. Wide- coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluating a wide-coverage CCG parser",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the LREC 2002 Beyond Parseval Workshop",
"volume": "",
"issue": "",
"pages": "60--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark and Julia Hockenmaier. 2002. Evalu- ating a wide-coverage CCG parser. In Proc. of the LREC 2002 Beyond Parseval Workshop, pages 60- 66, Las Palmas, Spain.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Building deep dependency structures with a wide-coverage CCG parser",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "327--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark, Julia Hockenmaier, and Mark Steed- man. 2002. Building deep dependency structures with a wide-coverage CCG parser. In Proc. ACL, pages 327-334, Philadelphia, PA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incremental parsing with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "111--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. of ACL, pages 111-118, Barcelona, Spain.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden Markov models: Theory and ex- periments with perceptron algorithms. In Proc. of EMNLP, pages 1-8, Philadelphia, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient normal-form parsing for Combinatory Categorial Grammar",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 1996. Efficient normal-form parsing for Combinatory Categorial Grammar. In Proc. ACL, pages 79-86, Santa Cruz, CA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Accurate context-free parsing with Combinatory Categorial Grammar",
"authors": [
{
"first": "A",
"middle": [
"D"
],
"last": "Timothy",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Fowler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Penn",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "335--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy AD Fowler and Gerald Penn. 2010. Accu- rate context-free parsing with Combinatory Catego- rial Grammar. In Proc. ACL, pages 335-344, Upp- sala, Sweden.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Directed hypergraphs and applications",
"authors": [
{
"first": "Giorgio",
"middle": [],
"last": "Gallo",
"suffix": ""
},
{
"first": "Giustino",
"middle": [],
"last": "Longo",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Pallottino",
"suffix": ""
},
{
"first": "Sang",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 1993,
"venue": "Discrete applied mathematics",
"volume": "42",
"issue": "2",
"pages": "177--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giorgio Gallo, Giustino Longo, Stefano Pallottino, and Sang Nguyen. 1993. Directed hypergraphs and applications. Discrete applied mathematics, 42(2):177-201.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A dynamic oracle for arc-eager dependency parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proc. COLING, Mumbai, India.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient implementation for beam search incremental parsers",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Short Papers of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg, Kai Zhao, and Liang Huang. 2013. Efficient implementation for beam search incremen- tal parsers. In Proceedings of the Short Papers of ACL, Sofia, Bulgaria.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "CCGbank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "3",
"pages": "355--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: A corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Com- putational Linguistics, 33(3):355-396.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Data and Models for Statistical Parsing with Combinatory Categorial Grammar",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier. 2003. Data and Models for Sta- tistical Parsing with Combinatory Categorial Gram- mar. Ph.D. thesis, University of Edinburgh.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Better kbest parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "53--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2005. Better k- best parsing. In Proceedings of the Ninth Interna- tional Workshop on Parsing Technology, pages 53- 64, Vancouver, Canada.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dynamic programming for linear-time incremental parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "1077--1086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and Kenji Sagae. 2010. Dynamic pro- gramming for linear-time incremental parsing. In Proc. ACL, pages 1077-1086, Uppsala, Sweden.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Structured perceptron with inexact search",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Suphan",
"middle": [],
"last": "Fayong",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "142--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proc. NAACL, pages 142-151, Montreal, Canada.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Maximum entropy estimation for feature forests",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Human Language Technology Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Miyao and Jun'ichi Tsujii. 2002. Maximum entropy estimation for feature forests. In Proceed- ings of the Human Language Technology Confer- ence, San Diego, CA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Integrating graph-based and transition-based dependency parsers",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL/HLT",
"volume": "",
"issue": "",
"pages": "950--958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre and Ryan McDonald. 2008. Integrat- ing graph-based and transition-based dependency parsers. In Proc. of ACL/HLT, pages 950-958, Columbus, Ohio.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deterministic dependency parsing of English text",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scholz",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING 2004",
"volume": "",
"issue": "",
"pages": "64--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre and M Scholz. 2004. Deterministic depen- dency parsing of English text. In Proceedings of COLING 2004, pages 64-70, Geneva, Switzerland.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Evaluation of dependency parsers on unbounded dependencies",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Gomez-Rodriguez",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Laura Rimell, Ryan McDonald, and Car- los Gomez-Rodriguez. 2010. Evaluation of depen- dency parsers on unbounded dependencies. In Proc. of COLING, Beijing, China.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Unbounded dependency recovery for parser evaluation",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "813--821",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proc. EMNLP, pages 813-821, Edinburgh, UK.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A framework for (under)specifying dependency syntax without overloading annotators",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Naomi",
"middle": [],
"last": "Saphra",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Schneider, Brendan O'Connor, Naomi Saphra, David Bamman, Manaal Faruqui, Noah A. Smith, Chris Dyer, and Jason Baldridge. 2013. A framework for (under)specifying dependency syntax without overloading annotators. In Proc. of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, Sofia, Bulgaria.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The Syntactic Process. The MIT Press, Cambridge, Mass.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Statistical dependency analysis using support vector machines",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Yamada and Y. Matsumoto. 2003. Statistical dependency analysis using support vector machines. In Proc. of IWPT, Nancy, France.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Max-violation perceptron and forced decoding for scalable MT training",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Yu, Liang Huang, Haitao Mi, and Kai Zhao. 2013. Max-violation perceptron and forced decoding for scalable MT training. In Proc. EMNLP, Seattle, Washington, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam-search. In Proc. of EMNLP, Hawaii, USA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Shift-reduce CCG parsing",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ACL 2011",
"volume": "",
"issue": "",
"pages": "683--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2011. Shift-reduce CCG parsing. In Proc. ACL 2011, pages 683-692, Portland, OR.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Deterministic example of shift-reduce CCG parsing (lexical categories omitted on queue).",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Given \u03a6 G and an item s, q s.t. s \u223c G, we say an applicable action (x, c) for the item is valid iff s \u223c G or s G, where s , q = s, q \u2022 (x, c).",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "The left parent set p L (c) of conjunctive node c \u2208 \u03a6 G is the set of all parent conjunctive nodes of c in \u03a6 G , which have the disjunc-tive node d containing c (i.e. c \u2208 \u03b3(d)) as a left child. Definition 5. The ancestor set A(c) of conjunctive node c \u2208 \u03a6 G is the set of all reachable ancestor conjunctive nodes of c in \u03a6 G . Definition 6. Given an item s, q , if |s| = 1 we say s is a frontier stack. Algorithm 2 The Dependency Oracle Function f d Input: \u03a6G, an item s, q s.t. s \u223c G, (x, c) \u2208 X s,q Let s be the stack of s , q = s, q \u2022 (x, c) 1: function MAIN( s, q , (x, c), \u03a6G) 2: if x is SHIFT then 3:",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Let s, q be an item and let s , q = s, q \u2022 (x, c). We define the shared ancestor set R(c s 1 , c s 0 ) of c s 0 , after applying action (x, c), as:\u2022 {c | c \u2208 pL(cs 0 ) \u2229 A(c)},if s is frontier and x = SHIFT \u2022 {c | c \u2208 pL(cs 0 ) \u2229 A(c) and there is some c \u2208 R(cs 1 , cs 0 ) s.t. c \u2208 A(c )}, if s is non-frontier and x = SHIFT \u2022 {c | c \u2208 R(cs 2 , cs 1 ) \u2229 A(c)}, if x = REDUCE \u2022 {c | c \u2208 R(cs 1 , cs 0 ) \u2229 A(c)}, if s is non-frontier and x = UNARY \u2022 R( , c 0 s 0 ) = \u2205 where c 0 s 0 is the conjunctive node corresponding to the gold-standard lexical category of the first word in the sentence ( is a dummy symbol indicating the bottom of stack).",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "recall vs. dependency length",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "Labeled precision and recall relative to dependency length on the development set. C&C normal-form model is used.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF2": {
"text": "Accuracy comparison on Section 00 (auto POS).",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF3": {
"text": "Accuracy comparison on most frequent dependency types, for our parser (o), Z&C (z) and C&C hybrid model (c). Categories in bold indicate the argument slot in the relation.",
"content": "<table><tr><td/><td colspan=\"3\">LP % LR % LF % LSent. % CatAcc. % coverage %</td></tr><tr><td>our parser</td><td>87.03 85.08 86.04 35.69</td><td>93.10</td><td>100</td></tr><tr><td>Z&amp;C</td><td>87.43 83.61 85.48 35.19</td><td>93.12</td><td>100</td></tr><tr><td colspan=\"2\">C&amp;C (normal-form) 85.58 82.85 84.20 32.90</td><td>92.84</td><td>100</td></tr><tr><td>our parser</td><td>87.04 85.16 86.09 35.84</td><td>93.13</td><td>99.58 (C&amp;C coverage)</td></tr><tr><td>Z&amp;C</td><td>87.43 83.71 85.53 35.34</td><td>93.15</td><td>99.58 (C&amp;C coverage)</td></tr><tr><td>C&amp;C (hybrid)</td><td>86.17 84.74 85.45 32.92</td><td>92.98</td><td>99.58 (C&amp;C coverage)</td></tr><tr><td colspan=\"2\">C&amp;C (normal-form) 85.48 84.60 85.04 33.08</td><td>92.86</td><td>99.58 (C&amp;C coverage)</td></tr></table>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF4": {
"text": "Accuracy comparison on section 23 (auto POS).",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}