| { |
| "paper_id": "N19-1020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:02:18.746444Z" |
| }, |
| "title": "CCG Parsing Algorithm with Incremental Tree Rotation", |
| "authors": [ |
| { |
| "first": "Milo\u0161", |
| "middle": [], |
| "last": "Stanojevi\u0107", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Edinburgh", |
| "location": {} |
| }, |
| "email": "m.stanojevic@ed.ac.uk" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Edinburgh", |
| "location": {} |
| }, |
| "email": "steedman@inf.ed.ac.uk" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "The main obstacle to incremental sentence processing arises from right-branching constituent structures, which are present in the majority of English sentences, as well as from optional constituents that adjoin on the right, such as right adjuncts and right conjuncts. In CCG, many right-branching derivations can be replaced by semantically equivalent leftbranching incremental derivations. The problem of right-adjunction is more resistant to solution, and has been tackled in the past using revealing-based approaches that often rely either on the higher-order unification over lambda terms (Pareschi and Steedman, 1987) or heuristics over dependency representations that do not cover the whole CCGbank (Ambati et al., 2015). We propose a new incremental parsing algorithm for CCG following the same revealing tradition of work but having a purely syntactic approach that does not depend on access to a distinct level of semantic representation. This algorithm can cover the whole CCGbank, with greater incrementality and accuracy than previous proposals.", |
| "pdf_parse": { |
| "paper_id": "N19-1020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "The main obstacle to incremental sentence processing arises from right-branching constituent structures, which are present in the majority of English sentences, as well as from optional constituents that adjoin on the right, such as right adjuncts and right conjuncts. In CCG, many right-branching derivations can be replaced by semantically equivalent leftbranching incremental derivations. The problem of right-adjunction is more resistant to solution, and has been tackled in the past using revealing-based approaches that often rely either on the higher-order unification over lambda terms (Pareschi and Steedman, 1987) or heuristics over dependency representations that do not cover the whole CCGbank (Ambati et al., 2015). We propose a new incremental parsing algorithm for CCG following the same revealing tradition of work but having a purely syntactic approach that does not depend on access to a distinct level of semantic representation. This algorithm can cover the whole CCGbank, with greater incrementality and accuracy than previous proposals.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Combinatory Categorial Grammar (CCG) (Ades and Steedman, 1982; Steedman, 2000) is a mildly context sensitive grammar formalism that is attractive both from a cognitive and an engineering perspective. Compared to other grammar formalisms, the aspect in which CCG excels is incremental sentence processing. CCG has a very flexible notion of constituent structure which allows (mostly) left-branching derivation trees that are easier to process incrementally. Take for instance the derivation tree in Figure 1a . If we use a nonincremental shift-reduce parser (as done in the majority of transition-based parsers for CCG (Zhang and Clark, 2011; Xu et al., 2014 ; Xu, 2016)) we will be able to establish the semantic connection between the subject \"Nada\" and the verb \"eats\" only when we reach the end of the sentence. This is undesirable for several reasons. First, human sentence processing is much more incremental, so that the meaning of the prefix \"Nada eats\" is available as soon as it is read (Marslen-Wilson, 1973) . Second, if we want a predictive model-either for better parsing or language modelling-it is crucial to establish relations between the words in the prefix as early as possible.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 62, |
| "text": "(Ades and Steedman, 1982;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 63, |
| "end": 78, |
| "text": "Steedman, 2000)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 618, |
| "end": 641, |
| "text": "(Zhang and Clark, 2011;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 642, |
| "end": 657, |
| "text": "Xu et al., 2014", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 996, |
| "end": 1018, |
| "text": "(Marslen-Wilson, 1973)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 498, |
| "end": 507, |
| "text": "Figure 1a", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To address this problem, a syntactic theory needs to be able to represent partial constituents like \"Nada eats\" and have mechanisms to build them just by observing the prefix. In CCG solutions for these problems come out of the theory naturally. CCG categories can represent partial structures and these partial structures can combine into bigger (partial) structures using CCG combinators recursively. Figure 1b shows how CCG can incrementally process the example sentence via a different derivation tree that generates the same semantics more incrementally by being leftbranching.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 403, |
| "end": 412, |
| "text": "Figure 1b", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This way of doing incremental processing seems straightforward except for one obstacle: optional constituents that attach from the right, i.e. right adjuncts. Because they are optional, it is impossible to predict them with certainty. This forces an eager incremental processor to make an uninformed decision very early and, if later that decision turns out to be wrong, to backtrack to repair the mistake. This behaviour would imply that human processors have difficulty in processing right adjuncts, but that does not seem to be the case. For instance, let's say that after incrementally processing \"Nada eats apples\" we encounter right adjunct \"regularly\" as in Figure 2a . The parser will be stuck at this point because there is no way to at- tach the right adjunct of a verb phrase to a sentence constituent. A simple solution would be some sort of limited back-tracking where we would look if we could extract the verb-phrase, attach its right adjunct, and then put the derivation back together. But how do we do the extraction of the verb-phrase \"eats apples\" when that constituent was never built during the incremental left-branching derivation? Pareschi and Steedman (1987) proposed to reveal the constituent that is needed, the verb-phrase in our example, by having an elegant way of reanalysing the derivation. This reanalysis does not repeat parsing from scratch but instead runs a single CCG combinatory rule backwards. In the example at hand, first we recognise that right adjunction needs to take place because we have a category of shape X\\X (concretely (S\\NP)\\(S\\NP) but in the present CCG notation slashes \"associate to the left\", so we drop the first pair of brackets). Thanks to the type of the adjunct we know that the constituent that needs to be revealed is of type X, in our case S\\NP. 
Now, we take the constituent on the left of the right adjunct, in our example constituent S, and look for CCG category Y and combinatory rule C that satisfies the following relation: C(Y, S\\NP) = S. The solution to this type equation is Y=NP and C=<.", |
| "cite_spans": [ |
| { |
| "start": 1155, |
| "end": 1183, |
| "text": "Pareschi and Steedman (1987)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 665, |
| "end": 674, |
| "text": "Figure 2a", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To confine revealing to delivering constituents that the parser could have built if it had been less greedy for incrementality, and exclude revelation of unsupported types, such as PP in Figure 2a , the process must be constrained by the actual derivation. Pareschi and Steedman proposed to do so by accessing the semantic representation in parallel, using higher-order unification, which is in general undecidable and may be unsound unless defined over a specific semantic representation. Ambati et al. (2015) propose an alternative method for revealing where dependencies are used as a semantic representation (instead of first-order logic) and special heuristics are used for revealing (instead of higher order unification). This is computationally a much more efficient approach and appears sound, but requires distinct revealing rules for each constituent type and has specific difficulties with punctuation.", |
| "cite_spans": [ |
| { |
| "start": 490, |
| "end": 510, |
| "text": "Ambati et al. (2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 187, |
| "end": 196, |
| "text": "Figure 2a", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we propose a method of revealing that does not depend on any specific choice of semantic representation, can discover multiple possible revealing options if they are available, is sound and complete and computationally efficient, and gives state-of-the-art parsing results. The algorithm works by building leftbranching derivations incrementally, but, following Niv (1993 Niv ( , 1994 , as soon as a left branching derivation is built, its derivation tree is rebalanced to be right-branching. When all such constituents' derivation trees are right-branching, revealing becomes a trivial operation where we just traverse the right spine looking for the constituent(s) of the right type to be modified by the right adjunct.", |
| "cite_spans": [ |
| { |
| "start": 376, |
| "end": 385, |
| "text": "Niv (1993", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 386, |
| "end": 398, |
| "text": "Niv ( , 1994", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We call this rebalancing operation tree rota-tion since it is a technical term established in the field of data structures for similar operation of balanced binary search trees (Adelson-Velskii and Landis, 1962; Guibas and Sedgewick, 1978; Okasaki, 1999; Cormen et al., 2009) . Figure 2b shows the right rotated derivation \"Nada eats apples\" next to the adjunct. Here we can just look up the required S\\NP and attach the right adjunct to it as in Figure 2c .", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 211, |
| "text": "(Adelson-Velskii and Landis, 1962;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 212, |
| "end": 239, |
| "text": "Guibas and Sedgewick, 1978;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 240, |
| "end": 254, |
| "text": "Okasaki, 1999;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 255, |
| "end": 275, |
| "text": "Cormen et al., 2009)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 278, |
| "end": 287, |
| "text": "Figure 2b", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 447, |
| "end": 456, |
| "text": "Figure 2c", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "CCG is a lexicalized grammar formalism where each lexical item in a derivation has a category assigned to it which expresses the ways in which the lexical item can be used in the derivation. These categories are put together using combinatory rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The binary combinatory rules we use are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "X/Y Y \u21d2 X (>) Y X\\Y \u21d2 X (<) X/Y Y /Z \u21d2 X/Z (>B) Y \\Z X\\Y \u21d2 X\\Z (<B) Y /Z X\\Y \u21d2 X/Z (<B \u00d7 ) Y /Z|W X\\Y \u21d2 X/Z|W (<B 2 \u00d7 ) X/Y Y /Z|W \u21d2 X/Z|W (", |
| "eq_num": ">B 2" |
| } |
| ], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": ") Each binary combinatory rule has one primary and one secondary category as its inputs. The primary functor is the one that selects; while the secondary category is the one that is selected. In forward combinatory rules the primary functor is always the left argument, while in the backward combinatory rules it is always the right.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "It is useful to look at the mentioned combinatory rules in a generalised way. For instance, if we look at forward combinatory rules we can see that they all follow the same pattern of combining X/Y with a category that starts with Y . The only difference among them is how many subcategories follow Y in the secondary category. In case of forward function application there will be nothing following Y so we can treat forward function application as a generalised forward composition combinator of the zeroth order >B0. Standard forward function composition >B will be a generalised composition of first order >B1 while >B 2 will be >B2. Same generalisation can be applied to backward combinators. There is a low bound on the order of combinatory rules, around 2 or 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Following Hockenmaier and Steedman (2007) , the proclitic character of conjunctions is captured in a syncategorematic rule combining them with the right conjunct, with the result later combining with the left conjunct 1 :", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 41, |
| "text": "Hockenmaier and Steedman (2007)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "conj X \u21d2 X[conj] (>\u03a6) X X[conj] \u21d2 X", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(<\u03a6) Some additional unary and binary type-changing rules are also needed to process the derivations in CCGbank (Hockenmaier and Steedman, 2007) . We use the same type-changing rules as those described in (Clark and Curran, 2007) .", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 144, |
| "text": "(Hockenmaier and Steedman, 2007)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 205, |
| "end": 229, |
| "text": "(Clark and Curran, 2007)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Among the unary combinatory rules the most important one is type-raising. The first reason for that is that it allows CCG to handle constructions like argument cluster coordination in a straightforward way. Second, it allows CCG to be much more incremental as seen from the example in Figure 1b. Type-raising rules are expressed in the following way:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 285, |
| "end": 291, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "X \u21d2 Y /(Y \\X) (>T) X \u21d2 Y \\(Y /X) (<T)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Type-raising, is strictly limited to applying to category types that are arguments, such as NP, PP, etc., making it analogous to grammatical case in languages like Latin and Japanese, in spite of the lack of morphological case in English.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combinatory Categorial Grammar", |
| "sec_num": "2" |
| }, |
| { |
| "text": "CCG derivations can be parsed with the same shift-reduce mechanism used for CFG parsing (Steedman, 2000) . In the context of CFG parsing, the shift-reduce algorithm is not incremental, because CFG structures are mostly right-branching, but in CCG by changing the derivation via the combinatory rules we also change the level of incrementality of the algorithm.", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 104, |
| "text": "(Steedman, 2000)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing", |
| "sec_num": "3" |
| }, |
| { |
| "text": "As usual, the shift-reduce algorithm consists of a stack of the constituents built so far and a buffer with words that are yet to be processed. Parsing starts with the stack empty and the buffer containing the whole sentence. The end state is a stack with only one element and an empty buffer. Transitions between parser states are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u2022 shift(X) -moves the first word from the buffer to the stack and labels it with category X, \u2022 reduceUnary(C) -applies a unary combinatory rule C to the topmost constituent on the stack, \u2022 reduceBinary(C) -applies a binary combinatory rule C to the two topmost constituents on the stack. CCG shift-reduce parsers are often built over right-branching derivations that obey Eisner normal form (Eisner, 1996) .", |
| "cite_spans": [ |
| { |
| "start": 391, |
| "end": 405, |
| "text": "(Eisner, 1996)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Processing leftbranching derivations is not any different except that it requires an opposite normal form.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our revealing algorithm adds a couple of modifications to this default shift-reduce algorithm. First, it guarantees that all the trees stored on the stack are right-branching -this still allows leftbranching parsing and only adds the requirement of adjusting newly reduced trees on the stack to be right leaning. Second, it adds revealing transitions that exploit the right-branching guarantee to apply right adjunction. Both tree rotation and revealing are performed efficiently as described in the following subsections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A na\u00efve way of enforcing right-branching guarantee is to do a complete transformation of the subtree on the stack into a right-branching one. However, that would be unnecessarily expensive. Instead we do incremental tree rotation to right. If we assume that all the elements on the stack are respecting this right-branching form (our inductive case), this state can be disturbed only by reduceBinary transition (shift just adds a single word which is trivially right-branching and reduceUnary does not influence the direction of branching). The re-duceBinary transition will take two topmost elements on the stack that are already right-branching and put them as children of some new binary node. We need to repair that potential \"imperfection\" on top of the tree. This is done by recursively rotating the nodes as in Figure 3a . 2 This figure shows one of the sources of CCG's spurious ambiguity: parent-child relation of the combinatory rules with the same directionality. Here we concentrate on forward combinators because they are the most frequent in our datamost backward combinators disappear with the addition of forward type-raising and the addition of special right adjunct transitions-but the same method can be applied to backward combinatory rules as a mirror image. Having two combinatory rules of the same directionality is necessary 2 Although we do not discuss the operations on the semantic predicate-argument structure that correspond to treerotation, the combinatory semantics of the rules themselves guarantees that such operations can be done uniformly and in parallel. but not sufficient condition for spurious ambiguity. As visible on the Figure 3a side condition, the lower combinator must not be >B0. The tree rotation function assumes that both of the children are \"perfect\"-meaning right-branching 3 -and that the only imperfection is on the root node. 
The method repairs this imperfection on the root by applying the tree rotation transformation, but it also creates a new node as a right child and that node might be imperfect. That is why the method goes down the right node recursively until all the imperfections are removed and the whole tree becomes fully right-branching. In the worst case the method will reach the bottom of the tree, but often only 3 or 4 nodes need to be transformed to make the tree perfectly the right branching The worst case complexity of repairing the imperfection is O(n) which makes the complexity of the whole parsing algorithm O(n 2 ) for building a single derivation. As a running example we will use a derivation tree in Figure 4a for which a transition sequence is given in Figure 4b . Here tree rotation is used in transitions 6 and 8 that introduce imperfections. In transition 6 a single tree rotation at the top was enough to correct the imperfection, while in transition 8 recursive tree rotation function went to depth two.", |
| "cite_spans": [ |
| { |
| "start": 830, |
| "end": 831, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 818, |
| "end": 827, |
| "text": "Figure 3a", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 1663, |
| "end": 1672, |
| "text": "Figure 3a", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 2588, |
| "end": 2597, |
| "text": "Figure 4a", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 2642, |
| "end": 2651, |
| "text": "Figure 4b", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree rotation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "If the upper and lower combinators are both >B2 the topmost combinator on the right will be- come >B3, a combinatory rule that may be unnecessary for defining the competence grammar of human languages, but which is required if parsing performance is to be as incremental as possible. Fortunately, the configuration with two connected >B2 combinatory rules appears very rarely in CCGbank. Many papers have been published on using leftbranching CCG derivations but, to the best of our knowledge, none of them explains how are they constructed from right-branching CCGbank trees. A very simple algorithm for that can be made using our tree rotation function. Here we use rotation in the opposite direction i.e. rotation to left (Figure 3b) . We cannot apply this operation from the top node of the CCGbank tree because that tree does not satisfy the assumption of the algorithm: immediate children are not \"perfect\" (here perfect means being left-branching). That is why we start from the bottom of the tree with terminal nodes that are trivially \"perfect\" and apply tree transformation to each node in post-order traversal.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 725, |
| "end": 736, |
| "text": "(Figure 3b)", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree rotation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "This incremental tree rotation algorithm is in-spired by the AVL self-balancing binary search trees (Adelson-Velskii and Landis, 1962) and Red-Black trees (Guibas and Sedgewick, 1978; Okasaki, 1999) . The main difference is that here we are trying to do the opposite of AVLs: instead of making the tree perfectly balanced we are trying to make it perfectly unbalanced, i.e. leaning to the right (or left). Also, our imperfections start at the top and are pushed to the bottom of the tree which is in contrast to AVLs trees where imperfections start at the bottom and get pushed to the top.", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 183, |
| "text": "(Guibas and Sedgewick, 1978;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 184, |
| "end": 198, |
| "text": "Okasaki, 1999)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree rotation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The last important point about tree rotation concerns punctuation rules. All punctuation is attached to the left of the highest possible node in case of left-branching derivations (Hockenmaier and Bisk, 2010) , while in the right-branching derivations we lower the punctuation to the bottom left neighbouring node. Punctuation has no influence on the predicate-argument structure so it is safe to apply this transformation.", |
| "cite_spans": [ |
| { |
| "start": 180, |
| "end": 208, |
| "text": "(Hockenmaier and Bisk, 2010)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree rotation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "If the topmost element on the stack is of the form X\\X and the second topmost element on the stack has on its right edge one or more constituents of a type X|$ we allow reveal transition. 4 This is a more general way of revealing than approaches of Pareschi and Steedman (1987) and Ambati et al. (2015) who attempt to reveal only constituents of type X while we reveal any type that has X as its prime element (that is the meaning of X|$ notation).", |
| "cite_spans": [ |
| { |
| "start": 249, |
| "end": 277, |
| "text": "Pareschi and Steedman (1987)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 282, |
| "end": 302, |
| "text": "Ambati et al. (2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Revealing transitions", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We also treat X[conj] as right adjuncts of the left conjunct. Similarly to the previous case, if the topmost element on the stack is X[conj] and the right edge of the second topmost element on the stack has constituent(s) of type X, they are revealed for possible combination via <\u03a6 combinator.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Revealing transitions", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "If reveal transition is selected, as in transition 14 in Figure 4b , the parser enters into a mode of choosing among different constituents labelled X|$ that could be modified by the right adjunct X\\X. After particular X|$ node is chosen X\\X is combined with it and the rest of the tree above X node is rebuilt in the same way. This rebuild is fully deterministic and is done quickly even though in principle it could take O(n) to compute. Even in the worst case scenario, it does not make the complexity of the algorithm go higher than O(n 2 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 57, |
| "end": 66, |
| "text": "Figure 4b", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Revealing transitions", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The ability of our algorithm to choose among different possible revealing options is unique among all the proposals for revealing. For transition 15 in Figure 4b the parser can choose whether to adjoin (coordinate) to a verb phrase that already contains a left modifier or without. This is similar to Selective Modifier Placement strategy from older Augmented Transition Network (ATN) systems (Woods, 1973) which finds all the attachment options that are syntactically legal and then allows the parser to choose among those using some criteria. Woods (1973) suggests using lexical semantic information for this selection, but in his ATN system only handwritten semantic selection rules were used. Here we will also use selection based on the lexical content but it will be broad coverage and learned from the data. This ability to semantically select the modifier's attachment point is essential for good parsing results as will be shown.", |
| "cite_spans": [ |
| { |
| "start": 393, |
| "end": 406, |
| "text": "(Woods, 1973)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 545, |
| "end": 557, |
| "text": "Woods (1973)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 152, |
| "end": 161, |
| "text": "Figure 4b", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Revealing transitions", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The neural probabilistic model that chooses which transition should be taken next conditions on the whole state of the configuration in a similar way to RNNG parser (Dyer et al., 2016) . The words in the sentence are first embedded using the concatenation of top layers of ELMo embeddings (Peters et al., 2018) that are normalised to L2 norm and then refined with two layers of bi-LSTM (Graves et al., 2005) . The neural representation of the terminal is composed of concatenated ELMo embedding and supertag embedding.", |
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 184, |
| "text": "(Dyer et al., 2016)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 289, |
| "end": 310, |
| "text": "(Peters et al., 2018)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 386, |
| "end": 407, |
| "text": "(Graves et al., 2005)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The representation of a subtree combines:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 span representation -we subtract representation of the leftmost terminal from the representation of the rightmost terminal as done in LSTM-Minus architecture (Wang and Chang, 2016) , \u2022 combinator and category embeddings, \u2022 head words encoding -because each constituent can have a set of heads, for instance arising from coordination, we model representation of heads with DeepSet architecture (Zaheer et al., 2017) over representations of head terminals. We do not use recursive neural networks like Tree-LSTM (Tai et al., 2015) to encode subtrees because of the frequency of tree rotation. These operations are fast, but they would trigger frequent recomputation of the neural tree representation, so we opted for a mechanism that is invariant to rebranching.", |
| "cite_spans": [ |
| { |
| "start": 160, |
| "end": 182, |
| "text": "(Wang and Chang, 2016)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 395, |
| "end": 416, |
| "text": "(Zaheer et al., 2017)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 512, |
| "end": 530, |
| "text": "(Tai et al., 2015)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Model", |
| "sec_num": "4" |
| }, |
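The two rotation-invariant pieces of the subtree representation can be sketched in a few lines. This is an illustrative pure-Python version with toy vectors; in the parser the terminal vectors come from the bi-LSTM, and the DeepSet encoder applies learned transformations around the sum, which we omit here:

```python
def span_representation(terminal_vecs, left, right):
    """LSTM-Minus style span encoding: rightmost minus leftmost terminal
    representation. Depends only on the span endpoints, so it is unchanged
    by tree rotation inside the span."""
    return [r - l for l, r in zip(terminal_vecs[left], terminal_vecs[right])]

def deepset_heads(head_vecs):
    """DeepSet-style encoding of a set of head words: an order-invariant
    elementwise sum over the head terminal representations."""
    dim = len(head_vecs[0])
    return [sum(v[i] for v in head_vecs) for i in range(dim)]
```

Because both functions ignore the internal bracketing of the constituent, rotating the subtree never forces a recomputation, which is the design goal stated above.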
| { |
| "text": "The stack representation is encoded using Stack-LSTM (Dyer et al., 2015) . The configuration representation is the concatenation of the stack representation and the representation of the rightmost terminal in the stack. The next nonrevealing transition is chosen by a two-layer feedforward network.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 72, |
| "text": "(Dyer et al., 2015)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Model", |
| "sec_num": "4" |
| }, |
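The Stack-LSTM idea can be sketched with its core trick: keep the hidden state after every push so that a pop simply restores the previous summary in O(1). This is a toy sketch in which a running-sum "cell" stands in for the learned LSTM cell; the class and method names are illustrative, not from the parser's code:

```python
class StackLSTM:
    """Sketch of a Stack-LSTM (Dyer et al., 2015): states[-1] always
    summarises the current stack contents."""

    def __init__(self, step, init_state):
        self.step = step              # stands in for the LSTM cell
        self.states = [init_state]

    def push(self, item):
        self.states.append(self.step(self.states[-1], item))

    def pop(self):
        self.states.pop()             # O(1): earlier summaries were kept

    def summary(self):
        return self.states[-1]

# Toy cell: running sum of pushed values.
s = StackLSTM(step=lambda h, x: h + x, init_state=0)
s.push(3)
s.push(4)
s.pop()                               # summary reverts to the state before 4
```

The configuration vector described above would then be the concatenation of `s.summary()` with the rightmost terminal's representation.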
| { |
| "text": "If the reveal transition is triggered, the system needs to choose which among the candidate nodes X|$ to adjoin the right modifier X\\X to. The number of these modifiers can vary so we cannot use a simple feed-forward network to choose among them. Instead, we use the mechanism of Pointer networks (Vinyals et al., 2015) , which works in a similar way to attention (Bahdanau et al., 2014) except that attention weights are interpreted as probabilities of selecting any particular node. Attention is computed over representations of each candidate node. Because we expect that there (Ambati et al., 2015) could be some preference for attaching adjuncts high or low in the tree we add to the context representation of each node two position embeddings (Vaswani et al., 2017) that encode the candidate node's height and depth in the current tree. We optimize for maximum log-likelihood on the training set, using only the most frequent supertags and the most important combinators. To avoid discarding sentences with rare supertags and type-changing rules we use all supertags and combinatory rules during training but do not add their probability to the loss function. The number of supertags used is 425, as in the Easy-CCG parser, and the combinatory rules that are used are the same as in C&C parser. The loss is minimised for 15 epochs on the training portion of CCGbank (Hockenmaier and Steedman, 2007) using Adam with learning rate 0.001. Dimensionality is set to 128 in all cases, except for ELMo set at 300. Dropout is applied only to the ELMo input with a rate of 0.2. The parser is implemented in Scala using the DyNet toolkit (Neubig et al., 2017) and is available at https://github.com/stanojevic/Rotating-CCG.", |
| "cite_spans": [ |
| { |
| "start": 297, |
| "end": 319, |
| "text": "(Vinyals et al., 2015)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 364, |
| "end": 387, |
| "text": "(Bahdanau et al., 2014)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 581, |
| "end": 602, |
| "text": "(Ambati et al., 2015)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 749, |
| "end": 771, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 1372, |
| "end": 1404, |
| "text": "(Hockenmaier and Steedman, 2007)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1634, |
| "end": 1655, |
| "text": "(Neubig et al., 2017)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Model", |
| "sec_num": "4" |
| }, |
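The pointer-network selection step can be sketched as attention over the candidate nodes whose weights are read directly as a probability distribution. This is an illustrative sketch: plain dot products stand in for the learned scoring function, and in the parser each candidate vector would additionally carry the height/depth position embeddings mentioned above:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def revealing_probabilities(candidate_vecs, query_vec):
    """Pointer-network style selection: score each candidate node X|$
    against the context (query) vector, then interpret the softmaxed
    attention weights as adjunction probabilities."""
    scores = [sum(q * c for q, c in zip(query_vec, cand))
              for cand in candidate_vecs]
    return softmax(scores)

# Two candidate nodes; the query aligns with the first one.
probs = revealing_probabilities([[1.0, 0.0], [0.0, 1.0]], [2.0, 0.0])
```

Because the output length equals the number of candidates, this handles the variable-size choice that a fixed feed-forward classifier cannot.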
| { |
| "text": "To measure the incrementality of the proposed algorithm we use two evaluation metrics: waiting time and connectedness. Waiting time is the average number of nodes that need to be shifted before the dependency between two nodes is established. The minimal value for a fully incremental algorithm is 0 (the single shift that is always necessary is not counted). Connectedness is defined as the average stack size before a shift operation is performed (the initial two shifts are forced so they are not taken in the average). The minimal value for connectedness is 1. We have computed these measures on the training portion of the CCGbank for standard non-incremental right-branching deriva- tions, the more incremental left-branching derivations and our revealing derivations. We also put in the results numbers for the previous proposal of revealing by Ambati et al. (2015) taken from their paper but these numbers should be taken with caution, because it is not clear from the paper whether the authors computed them in the same way and on the same portion of the dataset as we did. Table 1 results shows that our revealing derivations are significantly more incremental even in comparison to previous revealing proposals, and barely use more than the minimal amount of stack memory.", |
| "cite_spans": [ |
| { |
| "start": 852, |
| "end": 872, |
| "text": "Ambati et al. (2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How incremental is the Revealing algorithm?", |
| "sec_num": "5.1" |
| }, |
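The connectedness metric defined above can be made concrete with a small sketch. This is our own simplified re-implementation for illustration (shift-reduce with binary reduces only), not the paper's evaluation code:

```python
def connectedness(transitions):
    """Average stack size observed just before each shift, skipping the
    first two forced shifts. `transitions` is a sequence of 'shift' /
    'reduce' actions, where 'reduce' pops two items and pushes one."""
    stack_size = 0
    sizes_before_shift = []
    shifts_seen = 0
    for t in transitions:
        if t == 'shift':
            shifts_seen += 1
            if shifts_seen > 2:          # the initial two shifts are forced
                sizes_before_shift.append(stack_size)
            stack_size += 1
        else:                            # binary reduce: net stack change -1
            stack_size -= 1
    return sum(sizes_before_shift) / len(sizes_before_shift)

# Fully incremental derivation: reduce after every shift keeps the stack
# minimal, giving the lower-bound connectedness of 1.
c = connectedness(['shift', 'shift', 'reduce', 'shift', 'reduce',
                   'shift', 'reduce'])
```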
| { |
| "text": "We have tested on the development set which of the parsing algorithms gives best parsing accuracy. All the algorithms use the same neural architecture and training method except for the revealing operations that require additional mechanisms to choose the node for revealing. This allows us to isolate machine learning factors and see which of the parsing strategies works the best. There are two methods that are often used for evaluating CCG parsers. They are both based on \"deep\" dependencies extracted from the derivation trees. The first is from (Clark et al., 2002) and is closer to categorial grammar view of dependencies. The second is from (Clark and Curran, 2007) and is meant to be more formalism independent and closer to standard dependencies (Caroll et al., 1998) . We opt for the first option for development as we find it more robust and reliable but we report both types on the test set. on the position embeddings or also on the node's lexical content. First we can see that Revealing approach that uses head representation and does selective modifier placement outperforms all the models both on labelled and unlabelled dependencies. Ablation experiments show that SMP was the crucial component: without it the Revealing model is much worse. This is a clear evidence that attachment heuristics are not enough and also that previous approaches that extract only single revealing option are sub-optimal.", |
| "cite_spans": [ |
| { |
| "start": 551, |
| "end": 571, |
| "text": "(Clark et al., 2002)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 649, |
| "end": 673, |
| "text": "(Clark and Curran, 2007)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 756, |
| "end": 777, |
| "text": "(Caroll et al., 1998)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Which algorithm gives the best parsing results?", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "A possible reason why Revealing model works better than Left and Right branching models is that Left and Right models need to commit early on whether there will be a right adjunct in the future or not. If they make a mistake during greedy decoding there will be no way to repair that mistake. This is not an issue for the Revealing model because it can attach right adjuncts at any point and does not need to forecast them. A natural question then is if these improvements of Revealing model will stay if we use a bigger beam. Figure 5 shows exactly that experiment. We see that the model that gains the most from the biggest beam is for the Left-branching condition, which is expected since that is the model that commits to its predictions the most -it commits with type-raising, unlike Right model, and it commits with predicting right adjunction, unlike Revealing model. With an increased beam Left model equals the Revealing greedy model. But if all the models use the same beam the Revealing model remains the best. An interesting result is that the small beam of size 4 is enough to get the maximal improvement. This probably reflects the low degree of lexical ambiguity that is unresolved at each point during parsing.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 527, |
| "end": 535, |
| "text": "Figure 5", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Which algorithm gives the best parsing results?", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Tag UF LF Steedman (2014) 93.0 88.6 81.3 Ambati et al. (2015) 91.2 89.0 81.4 Hockenmaier (2003) 92.2 92.0 84.4 Zhang and Clark (2011) 93.1 -85.5 Clark and Curran (2007) 94 (Clark and Curran, 2007) .", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 61, |
| "text": "Steedman (2014) 93.0 88.6 81.3 Ambati et al. (2015)", |
| "ref_id": null |
| }, |
| { |
| "start": 77, |
| "end": 95, |
| "text": "Hockenmaier (2003)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 111, |
| "end": 133, |
| "text": "Zhang and Clark (2011)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 145, |
| "end": 168, |
| "text": "Clark and Curran (2007)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 172, |
| "end": 196, |
| "text": "(Clark and Curran, 2007)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Which algorithm gives the best parsing results?", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We compute test set results for our Revealing model and compare it to most of the previous results on CCGbank using both types of dependencies. Table 3 shows results with (Clark et al., 2002) style dependencies. Here we get state-ofthe-art results by a large margin, probably mostly thanks to the machine learning component of our parser. An interesting comparison to be made is against EasyCCG parser of Lewis and Steedman (2014) . This parser uses a neural supertagger of accuracy that is not too far from ours, but the dependencies extracted by our parser are much more accurate. This shows that a richer probabilistic model that we use contributes more to the good results than the exact A search that EasyCCG does with a more simplistic model. Another comparison of relevance would be with the revealing model of Ambati et al. (2015) but the comparison of the algorithms is difficult since the machine learning component is very different: Ambati uses a structured perceptron while our model is a heavily parametrized neural network.", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 191, |
| "text": "(Clark et al., 2002)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 405, |
| "end": 430, |
| "text": "Lewis and Steedman (2014)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 818, |
| "end": 838, |
| "text": "Ambati et al. (2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 144, |
| "end": 151, |
| "text": "Table 3", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison to other published models", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In Table 4 we show results with the second type of dependencies used for CCG evaluation. All the models, except Clark and Curran (2007) , are neural and use external embeddings. From the presented models only Revealing and are transition based. All other models have a global search either via CKY or A* search. Our revealing-based parser that does only greedy search is outperforming all of them including those trained on large amounts of unlabelled data using semi-supervised techniques like tri-training Yoshikawa et al., 2017) .", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 135, |
| "text": "Clark and Curran (2007)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 508, |
| "end": 531, |
| "text": "Yoshikawa et al., 2017)", |
| "ref_id": "BIBREF43" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison to other published models", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In some sense, all the neural models in Table 4 are implicitly trained in semi-supervised way because they use pretrained embeddings that are estimated on unlabelled data. The quality of ELMo embeddings is probably one of the reasons why our parser achieves such good results. However, another semi-supervised training method, namely tri-training, is particularly attractive because, unlike ELMo, it is trained on a CCG parsing objective which is more closely aligned to what we want to do. All tri-training models are trained on much larger dataset that in addition to CCGbank also includes 43 million word corpus automatically annotated with silver CCG derivations by . It is likely that incorporating tri-training into our training setup will further increase the improvement over other models.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 40, |
| "end": 47, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison to other published models", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Recurrent Neural Network Grammar (RNNG) (Dyer et al., 2016 ) is a fully incremental top-down parsing model. Because it is top-down it has no issues with right branching structures, but right adjuncts would still make parsing more difficult for RNNG because they will have to be predicted even earlier than in Left-and Right-branching derivations in CCG.", |
| "cite_spans": [ |
| { |
| "start": 40, |
| "end": 58, |
| "text": "(Dyer et al., 2016", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other relevant work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Left-corner parsers (which can be seen as a more constrained version of CCG Left-branching parsing strategy) seem more psychologically realistic than top-down parsers (Abney and Johnson, 1991; Resnik, 1992; Stanojevi\u0107 and Stabler, 2018) . Some proposals about handling right adjunction in left-corner parsing are based on extension to generalized left-corner parsers (Demers, 1977; Hale, 2014 ) that can force some grammar rules (in particular right-adjunction rules) to be less incremental. Our approach does not decrease incrementality of the parser in this way. On the contrary, having a special mechanism for right adjunction makes parser both more incremental and more accurate.", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 192, |
| "text": "(Abney and Johnson, 1991;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 193, |
| "end": 206, |
| "text": "Resnik, 1992;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 207, |
| "end": 236, |
| "text": "Stanojevi\u0107 and Stabler, 2018)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 367, |
| "end": 381, |
| "text": "(Demers, 1977;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 382, |
| "end": 392, |
| "text": "Hale, 2014", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other relevant work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Revealing based on higher order unification by Pareschi and Steedman (1987) was also proposed by Steedman (1990) as the basis for CCG explanation of gapping. The present derivationbased mechanism for revealing does not extend to gapping, and is targeting to model only derivations that could be explained with a standard CCG grammar derived from CCGbank. While that guarantees that we stay in the safe zone of sound and complete \"standard\" CCG derivations, it would be good as a future work to extend support for gapping and other types of derivations not present in CCGbank. Niv (1993 Niv ( , 1994 proposed an alternative to the unification-based account of Pareschi and Steedman similar to our proposal for online tree rotation. Niv's parser is mostly a formal treatment of left-to-right rotations evaluated against psycholinguistic garden paths, but lacks the wide coverage implementation and statistical parsing model as a basis for resolving attachment ambiguities.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 75, |
| "text": "Pareschi and Steedman (1987)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 97, |
| "end": 112, |
| "text": "Steedman (1990)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 576, |
| "end": 585, |
| "text": "Niv (1993", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 586, |
| "end": 598, |
| "text": "Niv ( , 1994", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other relevant work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We have presented a revealing-based incremental parsing algorithm that has special transitions for handling right-adjunction. The parser is neutral with regard to the particular semantic representation used. It is computationally efficient, and can reveal all possible constituents types. It is the most incremental CCG parser yet proposed, and has state-of-the-art results against all published parsers trained on the CCGbank under both dependency recovery measures that are in use for the purpose.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "This notation differs unimportantly fromSteedman (2000) who uses a ternary coordination rule, and more recent work in which conjunctions are X\\X/X.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "By right branching we mean as right branching as it is allowed by CCG formalism and predicate-argument structure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The \"$ notation\" is from(Steedman, 2000) where $ is used as a (potentially empty) placeholder variable ranging over multiple arguments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by ERC H2020 Advanced Fellowship GA 742137 SEMANTAX grant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Memory requirements and local ambiguities of parsing strategies", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Steven", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Abney", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Journal of Psycholinguistic Research", |
| "volume": "20", |
| "issue": "", |
| "pages": "233--249", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steven P. Abney and Mark Johnson. 1991. Memory re- quirements and local ambiguities of parsing strate- gies. Journal of Psycholinguistic Research, 20:233- 249.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "An algorithm for the organization of information", |
| "authors": [ |
| { |
| "first": "E M", |
| "middle": [], |
| "last": "G M Adelson-Velskii", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Landis", |
| "suffix": "" |
| } |
| ], |
| "year": 1962, |
| "venue": "Soviet Mathematics Doklady", |
| "volume": "3", |
| "issue": "2", |
| "pages": "263--266", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G M Adelson-Velskii and E M Landis. 1962. An al- gorithm for the organization of information. Soviet Mathematics Doklady, 3(2):263-266.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "On the order of words", |
| "authors": [ |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Ades", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "Linguistics and Philosophy", |
| "volume": "4", |
| "issue": "", |
| "pages": "517--558", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anthony Ades and Mark Steedman. 1982. On the order of words. Linguistics and Philosophy, 4:517-558.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "An Incremental Algorithm for Transition-based CCG Parsing", |
| "authors": [ |
| { |
| "first": "Bharat", |
| "middle": [ |
| "Ram" |
| ], |
| "last": "Ambati", |
| "suffix": "" |
| }, |
| { |
| "first": "Tejaswini", |
| "middle": [], |
| "last": "Deoskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "53--63", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/v1/N15-1006" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bharat Ram Ambati, Tejaswini Deoskar, Mark John- son, and Mark Steedman. 2015. An Incremental Al- gorithm for Transition-based CCG Parsing. In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 53-63. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1409.0473" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Parser evaluation: a survey and a new proposal", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Caroll", |
| "suffix": "" |
| }, |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Briscoe", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Sanfilippo", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "First International Conference on language resources & evaluation: Granada, Spain", |
| "volume": "", |
| "issue": "", |
| "pages": "447--456", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Caroll, Ted Briscoe, and Antonio Sanfilippo. 1998. Parser evaluation: a survey and a new pro- posal. In First International Conference on lan- guage resources & evaluation: Granada, Spain, 28- 30 May 1998, pages 447-456. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Widecoverage efficient statistical parsing with CCG and log-linear models", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "James R Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "4", |
| "pages": "493--552", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Clark and James R Curran. 2007. Wide- coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493-552.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Building deep dependency structures with a wide-coverage CCG parser", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "327--334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Clark, Julia Hockenmaier, and Mark Steed- man. 2002. Building deep dependency structures with a wide-coverage CCG parser. In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, pages 327-334. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Generalized left corner parsing", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [ |
| "J" |
| ], |
| "last": "Demers", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "4th Annual ACM Symposium on Principles of Programming Languages", |
| "volume": "", |
| "issue": "", |
| "pages": "170--181", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan J. Demers. 1977. Generalized left corner pars- ing. In 4th Annual ACM Symposium on Principles of Programming Languages, pages 170-181.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Transitionbased dependency parsing with stack long shortterm memory", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Wang", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Austin", |
| "middle": [], |
| "last": "Matthews", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "334--343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short- term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334-343, Beijing, China. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Recurrent neural network grammars", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adhiguna", |
| "middle": [], |
| "last": "Kuncoro", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah A", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "199--209", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Efficient normal-form parsing for Combinatory Categorial Grammar", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 34th annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "79--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Eisner. 1996. Efficient normal-form parsing for Combinatory Categorial Grammar. In Proceedings of the 34th annual meeting on Association for Com- putational Linguistics, pages 79-86. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Bidirectional LSTM Networks for Improved Phoneme Classification and Recognition", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| }, |
| { |
| "first": "Santiago", |
| "middle": [], |
| "last": "Fern\u00e1ndez", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications -Volume Part II, ICANN'05", |
| "volume": "", |
| "issue": "", |
| "pages": "799--804", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves, Santiago Fern\u00e1ndez, and J\u00fcrgen Schmid- huber. 2005. Bidirectional LSTM Networks for Improved Phoneme Classification and Recogni- tion. In Proceedings of the 15th International Conference on Artificial Neural Networks: For- mal Models and Their Applications -Volume Part II, ICANN'05, pages 799-804, Berlin, Heidelberg. Springer-Verlag.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A dichromatic framework for balanced trees", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Leo", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Guibas", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sedgewick", |
| "suffix": "" |
| } |
| ], |
| "year": 1978, |
| "venue": "Proceedings of the 19th Annual Symposium on Foundations of Computer Science, SFCS '78", |
| "volume": "", |
| "issue": "", |
| "pages": "8--21", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/SFCS.1978.3" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leo J. Guibas and Robert Sedgewick. 1978. A dichro- matic framework for balanced trees. In Proceedings of the 19th Annual Symposium on Foundations of Computer Science, SFCS '78, pages 8-21, Washing- ton, DC, USA. IEEE Computer Society.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Automaton Theories of Human Sentence Comprehension", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "T" |
| ], |
| "last": "Hale", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John T. Hale. 2014. Automaton Theories of Human Sentence Comprehension. CSLI, Stanford.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Data and models for statistical parsing with Combinatory Categorial Grammar", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia Hockenmaier. 2003. Data and models for sta- tistical parsing with Combinatory Categorial Gram- mar. Ph.D. thesis, University of Edinburgh. College of Science and Engineering. School of Informatics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Normalform Parsing for Combinatory Categorial Grammars with Generalized Composition and Type-raising", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Bisk", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10", |
| "volume": "", |
| "issue": "", |
| "pages": "465--473", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia Hockenmaier and Yonatan Bisk. 2010. Normal- form Parsing for Combinatory Categorial Grammars with Generalized Composition and Type-raising. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 465-473, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "CCGbank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "3", |
| "pages": "355--396", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Com- putational Linguistics, 33(3):355-396.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Global Neural CCG Parsing with Optimality Guarantees", |
| "authors": [ |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2366--2376", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D16-1262" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016. Global Neural CCG Parsing with Optimality Guar- antees. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 2366-2376. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "LSTM CCG parsing", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "221--231", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N16-1026" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. Lstm ccg parsing. In Proceedings of the 2016 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, pages 221-231, San Diego, California. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A* CCG parsing with a supertag-factored model", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "990--1000", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis and Mark Steedman. 2014. A* CCG pars- ing with a supertag-factored model. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 990-1000.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Linguistic structure and speech shadowing at very short latencies", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Marslen-Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "Nature", |
| "volume": "244", |
| "issue": "", |
| "pages": "522--523", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Marslen-Wilson. 1973. Linguistic structure and speech shadowing at very short latencies. Na- ture, 244:522-523.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "DyNet: The dynamic neural network toolkit", |
| "authors": [ |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Austin", |
| "middle": [], |
| "last": "Matthews", |
| "suffix": "" |
| }, |
| { |
| "first": "Waleed", |
| "middle": [], |
| "last": "Ammar", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonios", |
| "middle": [], |
| "last": "Anastasopoulos", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Clothiaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Cohn", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Duh", |
| "suffix": "" |
| }, |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Gan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Garrette", |
| "suffix": "" |
| }, |
| { |
| "first": "Yangfeng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Lingpeng", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Adhiguna", |
| "middle": [], |
| "last": "Kuncoro", |
| "suffix": "" |
| }, |
| { |
| "first": "Gaurav", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Chaitanya", |
| "middle": [], |
| "last": "Malaviya", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Oda", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "" |
| }, |
| { |
| "first": "Naomi", |
| "middle": [], |
| "last": "Saphra", |
| "suffix": "" |
| }, |
| { |
| "first": "Swabha", |
| "middle": [], |
| "last": "Swayamdipta", |
| "suffix": "" |
| }, |
| { |
| "first": "Pengcheng", |
| "middle": [], |
| "last": "Yin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1701.03980" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopou- los, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Ku- mar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "A Computational Model of Syntactic Processing: Ambiguity Resolution from Interpretation", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Niv", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "93--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Niv. 1993. A Computational Model of Syn- tactic Processing: Ambiguity Resolution from Inter- pretation. Ph.D. thesis, University of Pennsylvania. IRCS Report 93-27.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "A psycholinguistically motivated parser for CCG", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Niv", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "125--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Niv. 1994. A psycholinguistically motivated parser for CCG. In Proceedings of the 32nd annual meeting on Association for Computational Linguis- tics, pages 125-132. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Red-black trees in a functional setting", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Okasaki", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Journal of functional programming", |
| "volume": "9", |
| "issue": "4", |
| "pages": "471--477", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Okasaki. 1999. Red-black trees in a func- tional setting. Journal of functional programming, 9(4):471-477.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A Lazy Way to Chart-parse with Categorial Grammars", |
| "authors": [ |
| { |
| "first": "Remo", |
| "middle": [], |
| "last": "Pareschi", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "Proceedings of the 25th Annual Meeting on Association for Computational Linguistics, ACL '87", |
| "volume": "", |
| "issue": "", |
| "pages": "81--88", |
| "other_ids": { |
| "DOI": [ |
| "10.3115/981175.981187" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Remo Pareschi and Mark Steedman. 1987. A Lazy Way to Chart-parse with Categorial Grammars. In Proceedings of the 25th Annual Meeting on As- sociation for Computational Linguistics, ACL '87, pages 81-88, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Deep contextualized word representations", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "E" |
| ], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proc. of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proc. of NAACL.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Left-corner parsing and psychological plausibility", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the 14th International Conference on Computational Linguistics, COLING 92", |
| "volume": "", |
| "issue": "", |
| "pages": "191--197", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philip Resnik. 1992. Left-corner parsing and psycho- logical plausibility. In Proceedings of the 14th Inter- national Conference on Computational Linguistics, COLING 92, pages 191-197.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "A sound and complete left-corner parsing for minimalist grammars", |
| "authors": [ |
| { |
| "first": "Milo\u0161", |
| "middle": [], |
| "last": "Stanojevi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Stabler", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "65--74", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milo\u0161 Stanojevi\u0107 and Edward Stabler. 2018. A sound and complete left-corner parsing for minimalist grammars. In Proceedings of the Eighth Workshop on Cognitive Aspects of Computational Language Learning and Processing, pages 65-74. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Gapping as constituent coordination", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Linguistics and philosophy", |
| "volume": "13", |
| "issue": "2", |
| "pages": "207--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Steedman. 1990. Gapping as constituent coordi- nation. Linguistics and philosophy, 13(2):207-263.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "The Syntactic Process", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Improved semantic representations from tree-structured long short-term memory networks", |
| "authors": [ |
| { |
| "first": "Kai Sheng", |
| "middle": [], |
| "last": "Tai", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1556--1566", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1556-1566, Beijing, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Supertagging With LSTMs", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Bisk", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Musa", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "232--237", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N16-1027" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging With LSTMs. In Pro- ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 232-237. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "30", |
| "issue": "", |
| "pages": "5998--6008", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran As- sociates, Inc.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Pointer networks", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Meire", |
| "middle": [], |
| "last": "Fortunato", |
| "suffix": "" |
| }, |
| { |
| "first": "Navdeep", |
| "middle": [], |
| "last": "Jaitly", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "28", |
| "issue": "", |
| "pages": "2692--2700", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Advances in Neural Information Processing Systems", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lawrence", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sugiyama", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Garnett", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "28", |
| "issue": "", |
| "pages": "2692--2700", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692-2700. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Graph-based Dependency Parsing with Bidirectional LSTM", |
| "authors": [ |
| { |
| "first": "Wenhui", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Baobao", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "2306--2315", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-1218" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenhui Wang and Baobao Chang. 2016. Graph-based Dependency Parsing with Bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2306-2315. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "An experimental parsing system for Transition Network Grammars", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Woods", |
| "suffix": "" |
| } |
| ], |
| "year": 1973, |
| "venue": "Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "111--154", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William Woods. 1973. An experimental parsing sys- tem for Transition Network Grammars. In Randall Rustin, editor, Natural Language Processing, pages 111-154. Algorithmics Press, New York.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "LSTM shift-reduce CCG parsing", |
| "authors": [ |
| { |
| "first": "Wenduan", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1754--1764", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenduan Xu. 2016. LSTM shift-reduce CCG parsing. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 1754-1764.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Expected f-measure training for shiftreduce parsing with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Wenduan", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "Christopher" |
| ], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenduan Xu, Michael Auli, and Stephen Christopher Clark. 2016. Expected f-measure training for shift- reduce parsing with recurrent neural networks. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers). Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Shift-reduce CCG parsing with a dependency model", |
| "authors": [ |
| { |
| "first": "Wenduan", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "218--227", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenduan Xu, Stephen Clark, and Yue Zhang. 2014. Shift-reduce CCG parsing with a dependency model. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 218-227.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "A* CCG parsing with a supertag and dependency factored model", |
| "authors": [ |
| { |
| "first": "Masashi", |
| "middle": [], |
| "last": "Yoshikawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Noji", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "277--287", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Masashi Yoshikawa, Hiroshi Noji, and Yuji Mat- sumoto. 2017. A* ccg parsing with a supertag and dependency factored model. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 277-287. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Deep sets", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zaheer", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kottur", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ravanbakhsh", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Poczos", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Smola", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "NIPS", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Zaheer, S. Kottur, M. Ravanbakhsh, B. Poczos, R. Salakhutdinov, and A. Smola. 2017. Deep sets. In NIPS.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Shift-reduce CCG parsing", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "683--692", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Zhang and Stephen Clark. 2011. Shift-reduce CCG parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies-Volume 1, pages 683-692. Association for Computational Lin- guistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "text": "Semantically equivalent CCG derivations. The right adjunct is attached to the revealed node.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "text": "Right adjunction.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "text": "Tree rotation operations. The red square signifies recursion. Variables x and y represent the orders of the combinatory rules.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "text": "Example of the algorithm run over a sentence with tensed VP coordination.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF6": { |
| "text": "Influence of beam size on the dev results.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td/><td>Mary</td><td/><td>might</td><td>find</td><td colspan=\"2\">happiness and</td><td>forget</td><td>me</td></tr><tr><td/><td colspan=\"2\">NP S /(S \\NP ) S >T</td><td/><td colspan=\"2\">S \\NP</td><td>>B0</td><td>S \\NP</td><td>>B0</td></tr><tr><td/><td/><td colspan=\"2\">S /(S \\NP )</td><td>>B1</td><td/><td>S \\NP [conj ]</td><td>>\u03a6</td></tr><tr><td/><td/><td/><td/><td/><td/><td>S \\NP</td><td><\u03a6</td></tr><tr><td/><td/><td/><td/><td>S</td><td/><td>>B0</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">(a) Derivation tree.</td></tr><tr><td/><td>transition</td><td/><td>stack</td><td/><td/></tr><tr><td>1</td><td>shift</td><td>=\u21d2</td><td>Mary</td><td/><td/></tr><tr><td>2</td><td>reduceUnary(>T)</td><td>=\u21d2</td><td colspan=\"2\">>T Mary</td><td/></tr><tr><td>3</td><td>shift</td><td>=\u21d2</td><td colspan=\"2\">>T Mary might</td><td/></tr><tr><td>4</td><td colspan=\"2\">reduceBinary(>B1) =\u21d2</td><td colspan=\"2\">>B1 (>T Mary) might</td><td/></tr><tr><td>5</td><td>shift</td><td>=\u21d2</td><td colspan=\"3\">>B1 (>T Mary) might find</td></tr><tr><td>6</td><td colspan=\"2\">reduceBinary(>B1) =\u21d2</td><td colspan=\"3\">>B1 (>B1 (>T Mary) might) find</td></tr><tr><td/><td>rotate to right</td><td>=\u21d2</td><td colspan=\"3\">>B1 (>T Mary) (>B1 might find)</td></tr><tr><td>7</td><td>shift</td><td>=\u21d2</td><td colspan=\"4\">>B1 (>T Mary) (>B1 might find) happiness</td></tr><tr><td>8</td><td colspan=\"2\">reduceBinary(>B1) =\u21d2</td><td colspan=\"4\">>B0 (>B1 (>T Mary) (>B1 might find)) happiness</td></tr><tr><td/><td>rotate to right</td><td>=\u21d2</td><td colspan=\"4\">>B0 (>T Mary) (>B0 (>B1 might find) happiness)</td></tr><tr><td/><td>rotate to right</td><td>=\u21d2</td><td colspan=\"4\">>B0 (>T Mary) (>B0 might (>B0 find happiness))</td></tr><tr><td>9</td><td>shift</td><td>=\u21d2</td><td colspan=\"4\">>B0 (>T Mary) (>B0 might (>B0 find happiness)) and</td></tr><tr><td colspan=\"2\">10 shift</td><td>=\u21d2</td><td colspan=\"4\">>B0 (>T Mary) (>B0 might (>B0 find happiness)) and forget</td></tr><tr><td colspan=\"2\">11 shift</td><td>=\u21d2</td><td colspan=\"4\">>B0 (>T Mary) (>B0 might (>B0 find happiness)) and forget me</td></tr><tr><td colspan=\"3\">12 reduceBinary(>B0) =\u21d2</td><td/><td/><td/></tr></table>", |
| "text": "\\NP /(S \\NP ) S \\NP /NP NP conj S \\NP /NP NP", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF2": { |
| "content": "<table/>", |
| "text": "Train set measure of incrementality. *: taken from", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "text": "shows the results on development set. The heads column shows if the head words representation is used for computing the representation of the nodes in the tree. The SMP column shows if Selective Modifier Placement is used: whether we choose where to attach right adjunct based only", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF6": { |
| "content": "<table><tr><td colspan=\"4\">: Test set F1 results for prediction of supertags</td></tr><tr><td colspan=\"4\">(Tag), unlabelled (UF) and labelled (LF) CCG de-</td></tr><tr><td colspan=\"4\">pendencies extracted using scripts from Hockenmaier</td></tr><tr><td>(2003) parser.</td><td/><td/><td/></tr><tr><td/><td/><td>Dev</td><td>Test</td></tr><tr><td/><td/><td>LF</td><td>LF</td></tr><tr><td>Clark and Curran (2007)</td><td/><td>83.8</td><td>85.2</td></tr><tr><td colspan=\"2\">Lewis and Steedman (2014)</td><td>-</td><td>86.1</td></tr><tr><td>Yoshikawa et al. (2017)</td><td/><td>86.8</td><td>87.7</td></tr><tr><td>Xu et al. (2016)</td><td/><td>87.5</td><td>87.8</td></tr><tr><td>Lewis et al. (2016)</td><td>tri-train</td><td>87.5</td><td>88.1</td></tr><tr><td>Vaswani et al. (2016)</td><td/><td>87.8</td><td>88.3</td></tr><tr><td>Lee et al. (2016)</td><td>tri-train</td><td>88.4</td><td>88.7</td></tr><tr><td colspan=\"2\">Yoshikawa et al. (2017) tri-train</td><td>87.7</td><td>88.8</td></tr><tr><td>Revealing (beam=1)</td><td/><td>90.8</td><td>90.5</td></tr></table>", |
| "text": "", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF7": { |
| "content": "<table/>", |
| "text": "F1 results for labelled dependencies extracted with generate program of C&C parser", |
| "num": null, |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |