| { |
| "paper_id": "P16-1017", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:55:24.432193Z" |
| }, |
| "title": "Neural Greedy Constituent Parsing with Dynamic Oracles", |
| "authors": [ |
| { |
| "first": "Maximin", |
| "middle": [], |
| "last": "Coavoux", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Univ. Paris Diderot", |
| "location": { |
| "addrLine": "Sorbonne Paris Cit\u00e9" |
| } |
| }, |
| "email": "maximin.coavoux@inria.fr" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Crabb\u00e9", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Univ. Paris Diderot", |
| "location": { |
| "addrLine": "Sorbonne Paris Cit\u00e9" |
| } |
| }, |
| "email": "benoit.crabbe@linguist.univ-paris-diderot.fr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Dynamic oracle training has shown substantial improvements for dependency parsing in various settings, but has not been explored for constituent parsing. The present article introduces a dynamic oracle for transition-based constituent parsing. Experiments on the 9 languages of the SPMRL dataset show that a neural greedy parser with morphological features, trained with a dynamic oracle, leads to accuracies comparable with the best non-reranking and non-ensemble parsers.", |
| "pdf_parse": { |
| "paper_id": "P16-1017", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Dynamic oracle training has shown substantial improvements for dependency parsing in various settings, but has not been explored for constituent parsing. The present article introduces a dynamic oracle for transition-based constituent parsing. Experiments on the 9 languages of the SPMRL dataset show that a neural greedy parser with morphological features, trained with a dynamic oracle, leads to accuracies comparable with the best non-reranking and non-ensemble parsers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Constituent parsing often relies on search methods such as dynamic programming or beam search, because the search space of all possible predictions is prohibitively large. In this article, we present a greedy parsing model. Our main contribution is the design of a dynamic oracle for transitionbased constituent parsing. In NLP, dynamic oracles were first proposed to improve greedy dependency parsing training without involving additional computational costs at test time (Goldberg and Nivre, 2012; .", |
| "cite_spans": [ |
| { |
| "start": 473, |
| "end": 499, |
| "text": "(Goldberg and Nivre, 2012;", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The training of a transition-based parser involves an oracle, that is a function mapping a configuration to the best transition. Transition-based parsers usually rely on a static oracle, only welldefined for gold configurations, which transforms trees into sequences of gold actions. Training against a static oracle restricts the exploration of the search space to the gold sequence of actions. At test time, due to error propagation, the parser will be in a very different situation than at training time. It will have to infer good actions from noisy configurations. To alleviate error propagation, a solution is to train the parser to predict the best action given any configuration, by allowing it to explore a greater part of the search space at train time. Dynamic oracles are non-deterministic oracles well-defined for any configuration. They give the best possible transitions for any configuration. Although dynamic oracles are widely used in dependency parsing and available for most standard transition systems Goldberg et al., 2014; G\u00f3mez-Rodr\u00edguez et al., 2014; Straka et al., 2015) , no dynamic oracle parsing model has yet been proposed for phrase structure grammars.", |
| "cite_spans": [ |
| { |
| "start": 1023, |
| "end": 1045, |
| "text": "Goldberg et al., 2014;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1046, |
| "end": 1075, |
| "text": "G\u00f3mez-Rodr\u00edguez et al., 2014;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1076, |
| "end": 1096, |
| "text": "Straka et al., 2015)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The model we present aims at parsing morphologically rich languages (MRL). Recent research has shown that morphological features are very important for MRL parsing (Bj\u00f6rkelund et al., 2013; Crabb\u00e9, 2015) . However, traditional linear models (such as the structured perceptron) need to define rather complex feature templates to capture interactions between features. Additional morphological features complicate this task (Crabb\u00e9, 2015) . Instead, we propose to rely on a neural network weighting function which uses a non-linear hidden layer to automatically capture interactions between variables, and embeds morphological features in a vector space, as is usual for words and other symbols (Collobert and Weston, 2008; Chen and Manning, 2014) .", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 189, |
| "text": "(Bj\u00f6rkelund et al., 2013;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 190, |
| "end": 203, |
| "text": "Crabb\u00e9, 2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 422, |
| "end": 436, |
| "text": "(Crabb\u00e9, 2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 693, |
| "end": 721, |
| "text": "(Collobert and Weston, 2008;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 722, |
| "end": 745, |
| "text": "Chen and Manning, 2014)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The article is structured as follows. In Section 2, we present neural transition-based parsing. Section 3 motivates learning with a dynamic oracle and presents an algorithm to do so. Section 4 introduces the dynamic oracle. Finally, we present parsing experiments in Section 5 to evaluate our proposal.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Transition-based parsers for phrase structure grammars generally derive from the work of Sagae", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-Based Constituent Parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A[h] E[e] D[d] X[h] C[c] B[b] A[h] E[e] A:[h] D[d] A:[h] A:[h] X[h] C[c] B[b]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-Based Constituent Parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Figure 1: Order-0 head markovization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-Based Constituent Parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "and Lavie (2005) . In the present paper, we extend Crabb\u00e9 (2015)'s transition system.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 16, |
| "text": "Lavie (2005)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-Based Constituent Parsing", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We extract the grammar from a head-annotated preprocessed constituent treebank (cf Section 5). The preprocessing involves two steps. First, unary chains are merged, except at the preterminal level, where at most one unary production is allowed. Second, an order-0 head-markovization is performed ( Figure 1 ). This step introduces temporary symbols in the binarized grammar, which are suffixed by \":\". The resulting productions have one the following form:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 298, |
| "end": 306, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "X[h] \u2192 A[a] B[b] X[h] \u2192 A[a] b X[h] \u2192 h X[h] \u2192 a B[b]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "where X, A, B are delexicalised non-terminals, a, b and h \u2208 {a, b} are tokens, and X[h] is a lexicalized non-terminal. The purpose of lexicalization is to allow the extraction of features involving the heads of phrases together with their tags and morphological attributes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "Transition System In the transition-based framework, parsing relies on two data structures: a buffer containing the sequence of tokens to parse and a stack containing partial instantiated trees. A configuration C = j, S, b, \u03b3 is a tuple where j is the index of the next token in the buffer, S is the current stack, b is a boolean, and \u03b3 is the set of constituents constructed so far. 1 Constituents are instantiated non-terminals, i.e. tuples (X, i, j) such that X is a non-terminal and (i, j) are two integers denoting its span. Although the content of \u03b3 could be retrieved from the stack, we make it explicit because it will be useful for the design of the oracle in Section 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
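The configurations and actions described in this section can be sketched in Python (a hypothetical illustration, not the authors' implementation; the GHOST-REDUCE bookkeeping and the RL/RR head distinction are omitted, and names such as `Config` and `reduce_binary` are invented):

```python
from typing import FrozenSet, List, Tuple

Constituent = Tuple[str, int, int]  # (non-terminal, start, end)

class Config:
    """A parsing configuration <j, S, b, gamma>."""
    def __init__(self, j: int, stack: List[Constituent], b: bool,
                 gamma: FrozenSet[Constituent]):
        self.j, self.stack, self.b, self.gamma = j, stack, b, gamma

def shift(c: Config, tags: List[str]) -> Config:
    """SHIFT: pop the next token from the buffer, push (t_j, j, j+1)."""
    item = (tags[c.j], c.j, c.j + 1)
    return Config(c.j + 1, c.stack + [item], True, c.gamma)

def reduce_binary(c: Config, X: str) -> Config:
    """RL(X)/RR(X): pop (A, i, k) and (B, k, j), push and record (X, i, j)."""
    (A, i, k), (B, k2, j) = c.stack[-2], c.stack[-1]
    assert k == k2, "reduced spans must be adjacent"
    new = (X, i, j)
    return Config(c.j, c.stack[:-2] + [new], False, c.gamma | {new})

def reduce_unary(c: Config, X: str) -> Config:
    """RU(X): pop one item, push (X, i, j); allowed once, right after a SHIFT."""
    assert c.b, "RU must directly follow a SHIFT"
    _, i, j = c.stack[-1]
    new = (X, i, j)
    return Config(c.j, c.stack[:-1] + [new], False, c.gamma | {new})
```

Parsing "the cat sleeps" with tags D, N, V via SHIFT, SHIFT, RL(NP), SHIFT, RU(VP), RL(S) accumulates the constituent set {(NP, 0, 2), (VP, 2, 3), (S, 0, 3)} in gamma.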
| { |
| "text": "From an initial configuration C 0 = 0, , \u22a5, \u2205 , the parser incrementally derives new configurations by performing actions until a final configuration is reached. S(HIFT) pops an element from the 1 The introduction of \u03b3 is the main difference with Crabb\u00e9 (2015)'s transition system. Table 1 : Constraints to ensure that binary trees can be unbinarized. n is the sentence length. buffer and pushes it on the stack. R(EDUCE)(X) pops two elements from the stack, and pushes a new non-terminal X on the stack with the two elements as its children. There are two kinds of binary reductions, left (RL) or right (RR), depending on the position of the head. Finally, unary reductions (RU(X)) pops only one element from the stack and pushes a new non-terminal X. A derivation", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 282, |
| "end": 289, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "Stack: S|(C, l, i)|(B, i, k)|(A, k, j) Action Constraints RL(X) or RR(X), X\u2208 N A / \u2208 N tmp and B / \u2208 N tmp RL(X:) or RR(X:), X:\u2208 N tmp C / \u2208 N tmp or j < n RR(X) B / \u2208 N tmp RL(X) A / \u2208 N tmp", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "Input w 0 w 1 . . . w n\u22121 Axiom 0, , \u22a5, \u2205 S j, S, \u22a5, \u03b3 j + 1, S|(t j , j, j + 1), , \u03b3 RL(X) j, S|(A, i, k)|(B, k, j), \u22a5, \u03b3 j, S|(X, i, j), \u22a5, \u03b3 \u222a {(X, i, j)} RU(X) j, S|(t j\u22121 , j \u2212 1, j), , \u03b3 j, S|(X, j \u2212 1, j), \u22a5, \u03b3 \u222a {(X, j \u2212 1, j)} GR j, S, , \u03b3 j, S, \u22a5, \u03b3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "C 0\u21d2\u03c4 = C 0 a 0 \u21d2 . . . a \u03c4 \u22121", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "\u21d2 C \u03c4 is a sequence of configurations linked by actions and leading to a final configuration. Figure 2 presents the algorithm as a deductive system. G(HOST)R(EDUCE) actions and boolean b ( or \u22a5) are used to ensure that unary reductions (RU) can only take place once after a SHIFT action. 2 Constraints on the transitions make sure that predicted trees can be unbinarized. Figure 3 shows two examples of trees that could not have been obtained by the binarization process. In the first tree, a temporary symbol rewrites as two tempo-", |
| "cite_spans": [ |
| { |
| "start": 288, |
| "end": 289, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 94, |
| "end": 102, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 372, |
| "end": 380, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "A[h] C:[c] A:[h] A[h] C[h] A:[a]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 3: Examples of ill-formed binary trees rary symbols. In the second one, the head of a temporary symbol is not the head of its direct parent. Table 1 shows a summary of the constraints used to ensure that any predicted tree is a wellformed binarized tree. 3 In this table, N is the set of non-terminals and N tmp \u2282 N is the set of temporary non-terminals.", |
| "cite_spans": [ |
| { |
| "start": 262, |
| "end": 263, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 148, |
| "end": 155, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "Weighted Parsing The deductive system is inherently non-deterministic. Determinism is provided by a scoring function", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "s(C 0\u21d2\u03c4 ) = \u03c4 i=1 f \u03b8 (C i\u22121 , a i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "where \u03b8 is a set of parameters. The score of a derivation decomposes as a sum of scores of actions. In practice, we used a feed-forward neural network very similar to the scoring model of Chen and Manning (2014) . The input of the network is a sequence of typed symbols. We consider three main types (non-terminals, tags and terminals) plus a language-dependent set of morphological attribute types, for example, gender, number, or case (Crabb\u00e9, 2015). The first layer h (0) is a lookup layer which concatenates the embeddings of each typed symbol extracted from a configuration. The second layer h (1) is a non-linear layer with a rectifier activation (ReLU). Finally, the last layer h (2) is a softmax layer giving a distribution over possible actions, given a configuration. The score of an action is its log probability.", |
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 211, |
| "text": "Chen and Manning (2014)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "Assuming v_1, v_2, . . . , v_\u03b1 are the embeddings of the sequence of symbols extracted from a configuration, the forward pass is summed up by the following equations:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "h (0) = [v 1 ; v 2 ; . . . ; v \u03b1 ] h (1) = max{0, W (h) \u2022 h (0) + b (h) } h (2) = Softmax(W (o) \u2022 h (1) + b (o) ) f \u03b8 (C, a) = log(h (2) a )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
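The forward pass above can be sketched with NumPy (a hypothetical re-implementation with toy dimensions; `W_h`, `b_h`, `W_o`, `b_o` are illustrative names for W^(h), b^(h), W^(o), b^(o)):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(embeddings, W_h, b_h, W_o, b_o):
    """Score all actions for one configuration.
    embeddings: vectors for the typed symbols (words, tags,
    non-terminals, morphological attributes) extracted from it."""
    h0 = np.concatenate(embeddings)        # lookup layer: concatenation
    h1 = np.maximum(0.0, W_h @ h0 + b_h)   # hidden layer with ReLU
    logits = W_o @ h1 + b_o
    h2 = np.exp(logits - logits.max())
    h2 /= h2.sum()                         # softmax over possible actions
    return np.log(h2)                      # f_theta(C, a) = log h2[a]

# toy dimensions: 3 symbols of size 4, hidden size 8, 5 possible actions
emb = [rng.normal(size=4) for _ in range(3)]
W_h, b_h = rng.normal(size=(8, 12)), np.zeros(8)
W_o, b_o = rng.normal(size=(5, 8)), np.zeros(5)
scores = forward(emb, W_h, b_h, W_o, b_o)
```

The returned vector holds one log probability per action; the greedy parser simply takes its argmax at each step.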
| { |
| "text": "3 There are additional constraints which are not presented here. For example, SHIFT assumes that the buffer is not empty. A full description of constraints typically used in a slightly different transition system can be found in Zhang and Clark (2009)'s appendix section. Thus, \u03b8 includes the weights and biases for each layer (W (h) , W (o) , b (h) , b (o) ), and the embedding lookup table for each symbol type. We perform greedy search to infer the bestscoring derivation. Note that this is not an exact inference. Most propositions in phrase structure parsing rely on dynamic programming (Durrett and Klein, 2015; Mi and Huang, 2015) or beam search (Crabb\u00e9, 2015; Watanabe and Sumita, 2015; Zhu et al., 2013 ). However we found that with a scoring function expressive enough and a rich feature set, greedy decoding can be surprisingly accurate (see Section 5).", |
| "cite_spans": [ |
| { |
| "start": 592, |
| "end": 617, |
| "text": "(Durrett and Klein, 2015;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 618, |
| "end": 637, |
| "text": "Mi and Huang, 2015)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 653, |
| "end": 667, |
| "text": "(Crabb\u00e9, 2015;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 668, |
| "end": 694, |
| "text": "Watanabe and Sumita, 2015;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 695, |
| "end": 711, |
| "text": "Zhu et al., 2013", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "Features Each terminal is a tuple containing the word form, its part-of-speech tag and an arbitrary number of language-specific morphological attributes, such as CASE, GENDER, NUMBER, ASPECT and others (Seddah et al., 2013; Crabb\u00e9, 2015) . The representation of a configuration depends on symbols at the top of the two data structures, including the first tokens in the buffer, the first lexicalised non-terminals in the stack and possibly their immediate descendants ( Figure 4 ). The full set of templates is specified in Table 6 of Annex A. The sequence of symbols that forms the input of the network is the instanciation of each position described in this table with a discrete symbol.", |
| "cite_spans": [ |
| { |
| "start": 202, |
| "end": 223, |
| "text": "(Seddah et al., 2013;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 224, |
| "end": 237, |
| "text": "Crabb\u00e9, 2015)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 470, |
| "end": 478, |
| "text": "Figure 4", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 524, |
| "end": 531, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Grammar form", |
| "sec_num": null |
| }, |
| { |
| "text": "An important component for the training of a parser is an oracle, that is a function mapping a gold tree and a configuration to an action. The oracle is used to generate local training examples from trees, and feed them to the local classifier.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Greedy Parser with an Oracle", |
| "sec_num": "3" |
| }, |
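For a binarized gold tree, the static oracle's action sequence is simply a post-order traversal of the tree. A minimal sketch (hypothetical, not the authors' code; trees are nested tuples, and headedness is ignored so every binary reduction is written RL):

```python
# tree: a terminal tag (str), a unary node (label, child),
# or a binary node (label, left, right)
def static_oracle(tree):
    """Map a binarized gold tree to its gold action sequence."""
    if isinstance(tree, str):
        return ["SHIFT"]                       # terminal: shift it
    if len(tree) == 2:                         # unary chain at preterminal level
        label, child = tree
        return static_oracle(child) + [f"RU({label})"]
    label, left, right = tree                  # binary node: children first
    return static_oracle(left) + static_oracle(right) + [f"RL({label})"]
```

For example, the tree (S (NP D N) (VP V)) yields SHIFT, SHIFT, RL(NP), SHIFT, RU(VP), RL(S).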
| { |
| "text": "A static oracle (Goldberg and Nivre, 2012) is an incomplete and deterministic oracle. It is only well-defined for gold configurations (the configurations derived by the gold action sequence) and returns the unique gold action. Usually, parsers use a static oracle to transform the set of binarized trees into a set D = {C (i) , a (i) } 1\u2264i\u2264T of training examples. Training consists in minimiz-ing the negative log likelihood of these examples. The limitation of this training method is that only gold configurations are seen during training. At test time, due to error propagation, the parser will have to predict good actions from noisy configurations, and will have much difficulty to recover after mistakes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Greedy Parser with an Oracle", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To alleviate this problem, a line of work (Daum\u00e9 III et al., 2006; Ross et al., 2011) has cast the problem of structured prediction as a search problem and developed training algorithms aiming at exploring a greater part of the search space. These methods require an oracle well-defined for every search state, that is, for every parsing configuration.", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 66, |
| "text": "(Daum\u00e9 III et al., 2006;", |
| "ref_id": null |
| }, |
| { |
| "start": 67, |
| "end": 85, |
| "text": "Ross et al., 2011)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Greedy Parser with an Oracle", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A dynamic oracle is a complete and nondeterministic oracle (Goldberg and Nivre, 2012) . It returns the non-empty set of the best transitions given a configuration and a gold tree. In dependency parsing, starting from Goldberg and Nivre (2012) , dynamic oracle algorithms and training methods have been proposed for a variety of transition systems and led to substantial improvements in accuracy Goldberg et al., 2014; G\u00f3mez-Rodr\u00edguez et al., 2014; Straka et al., 2015; G\u00f3mez-Rodr\u00edguez and Fern\u00e1ndez-Gonz\u00e1lez, 2015) .", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 85, |
| "text": "(Goldberg and Nivre, 2012)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 217, |
| "end": 242, |
| "text": "Goldberg and Nivre (2012)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 395, |
| "end": 417, |
| "text": "Goldberg et al., 2014;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 418, |
| "end": 447, |
| "text": "G\u00f3mez-Rodr\u00edguez et al., 2014;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 448, |
| "end": 468, |
| "text": "Straka et al., 2015;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 469, |
| "end": 514, |
| "text": "G\u00f3mez-Rodr\u00edguez and Fern\u00e1ndez-Gonz\u00e1lez, 2015)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Greedy Parser with an Oracle", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Online training An online trainer iterates several times over each sentence in the treebank, and updates its parameters until convergence. When a static oracle is used, the training examples can be pregenerated from the sentences. When we use a dynamic oracle instead, we generate training examples on the fly, by following the prediction of the parser (given the current parameters) instead of the gold action, with probability p, where p is a hyperparameter which controls the degree of exploration. The online training algorithm for a single sentence s, with an oracle function o is shown in Figure 5 . It is a slightly modified version of Goldberg and Nivre (2013)'s algorithm 3, an approach they called learning with exploration.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 595, |
| "end": 603, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training a Greedy Parser with an Oracle", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In particular, as our neural network uses a crossentropy loss, and not the perceptron loss used in , updates are performed even when the prediction is correct. When p = 0, the algorithm acts identically to a static oracle trainer, as the parser always follows the gold transition. When the set of actions predicted by the oracle has more than one element, the best scoring element among them is chosen as the reference", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Greedy Parser with an Oracle", |
| "sec_num": "3" |
| }, |
| { |
| "text": "function TRAINONESENTENCE(s, \u03b8, p, o) C \u2190 INITIAL(s) while C is not a final configuration do A \u2190 o(C, s) set of best action\u015d a \u2190 argmax a f \u03b8 (C) a if\u00e2 \u2208 A then t \u2190\u00e2 t: target else t \u2190 argmax a\u2208A f \u03b8 (C) a \u03b8 \u2190 UPDATE(\u03b8, C, t) backprop if RANDOM() < p then C \u2190\u00e2(C) Follow prediction else C \u2190 t(C)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Greedy Parser with an Oracle", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Follow best action return \u03b8 action to update the parameters of the neural network.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training a Greedy Parser with an Oracle", |
| "sec_num": "3" |
| }, |
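The algorithm of Figure 5 can be sketched as follows (a hypothetical re-implementation; `oracle`, `score`, `update`, and `apply_action` are stand-ins for o, f_theta, the backpropagation step, and transition application):

```python
import random

def train_one_sentence(config, is_final, oracle, actions,
                       score, update, apply_action, p):
    """Online training with exploration on one sentence."""
    while not is_final(config):
        best = oracle(config)                            # set of best actions
        predicted = max(actions, key=lambda a: score(config, a))
        if predicted in best:
            target = predicted
        else:                                            # best-scoring oracle action
            target = max(best, key=lambda a: score(config, a))
        update(config, target)                           # cross-entropy update (stub)
        follow = predicted if random.random() < p else target
        config = apply_action(config, follow)            # explore or follow target
    return config
```

With p = 0 this reduces to static-oracle training; with p > 0 the parser is also trained on configurations reached by its own, possibly erroneous, predictions.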
| { |
| "text": "This section introduces a dynamic oracle algorithm for the parsing model presented in the previous 2 sections, that is the function o used in the algorithm in Figure 5 . The dynamic oracle must minimize a cost function L(c; t, T ) computing the cost of applying transition t in configuration c, with respect to a gold parse T . As is shown by , the oracle's correctness depends on the cost function. A correct dynamic oracle o will have the following general formulation:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 159, |
| "end": 167, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Dynamic Oracle for Transition-Based Parsing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "o(c, T ) = {t|L(c; t, T ) = min t L(c; t , T )} (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Dynamic Oracle for Transition-Based Parsing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The correctness of the oracle is not necessary to improve training. The oracle needs only to be good enough (Daum\u00e9 et al., 2009) , which is confirmed by empirical results (Straka et al., 2015) . identified arc-decomposability, a powerful property of certain dependency parsing transition systems for which we can easily derive correct efficient oracles. When this property holds, we can infer whether a tree is reachable from the reachability of individual arcs. This simplifies the calculation of each transition cost. We rely on an analogue property we call constituent decomposition. A set of constituents is tree-consistent if it is a subset of a set corresponding to a well-formed tree. A phrase structure transition system is constituentdecomposable iff for any configuration C and any tree-consistent set of constituents \u03b3, if every constituent in \u03b3 is reachable from C, then the whole set is reachable from C (constituent reachability will be formally defined in Section 4.1).", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 128, |
| "text": "(Daum\u00e9 et al., 2009)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 171, |
| "end": 192, |
| "text": "(Straka et al., 2015)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Dynamic Oracle for Transition-Based Parsing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The following subsections are structured as follows. First of all, we present a cost function (Section 4.1). Then, we derive a correct dynamic oracle algorithm for an ideal case where we assume that there is no temporary symbols in the grammar (Section 4.2). Finally, we present some heuristics to define a dynamic oracle for the general case (Section 4.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Dynamic Oracle for Transition-Based Parsing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The cost function we use ignores the lexicalization of the symbols. For the sake of simplicity, we momentarily leave apart the headedness of the binary reductions (until the last paragraph of Section 4) and assume a unique binary REDUCE action.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For the purpose of defining a cost function for transitions, we adopt a representation of trees as sets of constituents. For example,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(S (NP (D the) (N cat)) (VP (V sleeps))) corresponds to the set {(S, 0, 3), (NP, 0, 2), (VP, 2, 3)}. As is shown in Figure 2 , every reduction action (unary or binary) adds a new constituent to the set \u03b3 of already predicted constituents, which was introduced in Section 2. We define the cost of a predicted set of constituents\u03b3 with respect to a gold set \u03b3 * as the number of constituents in \u03b3 * which are not in\u03b3 penalized by the number of predicted unary constituents which are not in the gold set:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 116, |
| "end": 124, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "L r (\u03b3, \u03b3 * ) = |\u03b3 * \u2212\u03b3| + |{(X, i, i + 1) \u2208\u03b3|(X, i, i + 1) / \u2208 \u03b3 * }| (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
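Equation 2 can be sketched directly over constituent sets (a hypothetical helper, not the authors' code; constituents are `(label, i, j)` tuples and `reduction_cost` is an invented name):

```python
def reduction_cost(predicted, gold):
    """L_r(gamma_hat, gamma*): gold constituents missing from the
    prediction (false negatives) plus wrongly predicted unary
    constituents, i.e. spans of length one (unary false positives).
    Binary false positives are counted implicitly, since the number
    of binary constituents is fixed by the sentence length."""
    false_negatives = len(gold - predicted)
    unary_false_positives = sum(
        1 for (X, i, j) in predicted
        if j == i + 1 and (X, i, j) not in gold)
    return false_negatives + unary_false_positives
```

For the gold set {(S, 0, 3), (NP, 0, 2), (VP, 2, 3)}, predicting {(NP, 0, 2), (N, 1, 2)} costs 3: S and VP are missing, and (N, 1, 2) is a spurious unary constituent.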
| { |
| "text": "The first term penalizes false negatives and the second one penalizes unary false positives. The number of binary constituents in \u03b3 * and\u03b3 depends only on the sentence length n, thus binary false positives are implicitly taken into account by the fist term. The cost of a transition and that of a configuration are based on constituent reachability. The relation C C holds iff C can be deduced from C by performing a transition. Let * denote the reflexive transitive closure of . A set of constituents \u03b3 (possibly a singleton) is reachable from a configuration C iff there is a configuration C = j, S, b, \u03b3 such that C * C and \u03b3 \u2286 \u03b3 , which we write C ; \u03b3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Then, the cost of an action t for a configuration C is the cost difference between the best tree reachable from t(C) and the best tree reachable from C:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "L r (t; C, \u03b3 * ) = min \u03b3:t(C);\u03b3 L(\u03b3, \u03b3 * )\u2212 min \u03b3:C;\u03b3 L(\u03b3, \u03b3 * )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "This cost function is easily decomposable (as a sum of costs of transitions) whereas F1 measure is not.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "By definition, for each configuration, there is at least one transition with cost 0 with respect to the gold parse. Otherwise, it would entail that there is a tree reachable from C but unreachable from t(C), for any t. Therefore, we reformulate equation 1:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "o(C, \u03b3 * ) = {t|L r (C; t, \u03b3 * ) = 0}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In the transition system, the grammar is left implicit: any reduction is allowed (even if the corresponding grammar rule has never been seen in the training corpus). However, due to the introduction of temporary symbols during binarization, there are constraints to ensure that any derivation corresponds to a well-formed unbinarized tree. These constraints make it difficult to test the reachability of constituents. For this reason, we instantiate two transition systems. We call SR-TMP the transition system in Figure 2 which enforces the constraints in Table 1 , and SR-BIN, the same transition system without any such constraints. SR-BIN assumes an idealized case where the grammar contains no temporary symbols, whereas SR-TMP is the actual system we use in our experiments.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 514, |
| "end": 522, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 557, |
| "end": 564, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Cost Function", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The SR-BIN transition system provides no guarantee that predicted trees can be unbinarised. The only condition for a binary reduction to be allowed is that the stack contains at least two symbols; when it is met, any non-terminal in the grammar can be used. In such a case, we can define a simple necessary and sufficient condition for constituent reachability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Constituent reachability Let \u03b3* be a tree-consistent constituent set, and C = \u27e8j, S, b, \u03b3\u27e9 a parsing configuration, such that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "S = (X_1, i_0, i_1) | . . . | (X_p, i_{p\u22121}, i) | (A, i, k) | (B, k, j)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "A binary constituent (X, m, n) is reachable iff it satisfies one of the three following properties:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "1. (X, m, n) \u2208 \u03b3; 2. j \u2264 m < n; 3. m \u2208 {i_0, . . . , i_{p\u22121}, i, k}, n \u2265 j and (m, n) \u2260 (k, j)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The first two cases are trivial and correspond respectively to a constituent already constructed and to a constituent spanning words which are still in the buffer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In the third case, (X, m, n) can be constructed by performing the transitions SHIFT and GHOST-REDUCE (or REDUCE-UNARY) n \u2212 j times, and then a sequence of binary reductions ended by an X reduction. Note that as the index j in the configuration is non-decreasing during a derivation, constituents whose span ends before j are not reachable unless they are already constructed. For a unary constituent, the condition for reachability is straightforward:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "a constituent (X, i \u2212 1, i) is reachable from a configuration C = \u27e8j, S, b, \u03b3\u27e9 iff (X, i \u2212 1, i) \u2208 \u03b3, or i > j, or i = j \u2227 b = \u22a4 (the last action was a SHIFT).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Constituent decomposability SR-BIN is constituent-decomposable. In this paragraph, we give some intuition about why this holds. Reasoning by contradiction, assume that every constituent of a tree-consistent set \u03b3* is reachable from C = \u27e8j, S|(A, i, k)|(B, k, j), b, \u03b3\u27e9, but that \u03b3* as a whole is not reachable. This entails that at some point during a derivation, there is no transition which maintains reachability for all constituents of \u03b3*; assume C is such a configuration. If some constituent of \u03b3* is reachable from C, but not from SHIFT(C), its span must have the form (m, j), where m \u2264 i. If some constituent of \u03b3* is reachable from C, but not from REDUCE(X)(C), for any label X, its span must have the form (k, n), where n > j. If both conditions hold, \u03b3* contains incompatible constituents (crossing brackets), which contradicts the assumption that \u03b3* is tree-consistent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Computing the cost of a transition The conditions on constituent reachability make it easy to compute the cost of a transition t for a given configuration C = \u27e8j, S|(A, i, k)|(B, k, j), b, \u03b3\u27e9 and a gold set \u03b3*:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "1: function O( j, S|(A, i, k)|(B, k, j), b, \u03b3 , \u03b3 * ) 2:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "if b = then Last action was SHIFT 3:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "if (X, j \u2212 1, j) \u2208 \u03b3 * then 4:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "return {REDUCEUNARY(X)} 5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "else 6: return {GHOSTREDUCE} 7: if \u2203n > j, (X, k, n) \u2208 \u03b3 * then 8: return {SHIFT} 9:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "if (X, i, j) \u2208 \u03b3 * then 10:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "return {REDUCE(X)} 11: if \u2203m < i, (X, m, j) \u2208 \u03b3 * then 12:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "return {REDUCE(Y), \u2200Y }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "13:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "return {a \u2208 A|a is a possible action} Figure 6 : Oracle algorithm for SR-BIN.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 38, |
| "end": 46, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 The cost of a SHIFT is the number of constituents not in \u03b3 that are reachable from C and whose span ends at j.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 The cost of a binary reduction REDUCE(X) is a sum of two terms. The first one is the number of constituents of \u03b3* whose span has the form (k, n) with n > j; these are no longer compatible with (X, i, j) in a tree. The second one is one if (Y, i, j) \u2208 \u03b3* with Y \u2260 X, and zero otherwise; it is the cost of mislabelling a constituent with a gold span.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 The cost of a unary reduction or that of a ghost reduction can be computed straightforwardly by looking at the gold set of constituents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We present in Figure 6 an oracle algorithm derived from these observations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 22, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Correct Oracle for SR-BIN Transition System", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The conditions for constituent reachability for SR-BIN do not hold any longer for SR-TMP. In particular, constituent reachability depends crucially on the distinction between temporary and non-temporary symbols. The algorithm in Figure 6 is not correct for this transition system. In Figure 7 , we give an illustration of a prototypical case in which the algorithm in Figure 6 will fail. The constituent (C:, i, j) is in the gold set of constituents and could be constructed with REDUCE(C:). However, the third symbol on the stack is the temporary symbol D:, and reducing to a temporary symbol would jeopardize the reachability of (C, m, j), because reductions are not possible when the two symbols at the top of the stack are both temporary symbols. The best course of action is then a reduction to any non-temporary symbol, so as to keep (C, m, j) reachable. Note that in this case, the cost of REDUCE(C:) cannot be smaller than that of a single mislabelled constituent.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 228, |
| "end": 236, |
| "text": "Figure 6", |
| "ref_id": null |
| }, |
| { |
| "start": 283, |
| "end": 292, |
| "text": "Figure 7", |
| "ref_id": null |
| }, |
| { |
| "start": 368, |
| "end": 376, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Heuristic-based Dynamic Oracle for SR-TMP transition system", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In fact, this example shows that the constraints inherent to SR-TMP make it non constituent-decomposable. In the example in Figure 7 , each constituent in the set {(C, m, j), (C:, i, j)}, a tree-consistent constituent set, is reachable. However, the whole set is not reachable, as REDUCE(C:) would make (C, m, j) unreachable.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 124, |
| "end": 132, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Heuristic-based Dynamic Oracle for SR-TMP transition system", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "In dependency parsing, several exact dynamic oracles have been proposed for non-arc-decomposable transition systems (Goldberg et al., 2014) , including systems for non-projective parsing (G\u00f3mez-Rodr\u00edguez et al., 2014) . These oracles rely on tabular methods to compute the cost of transitions and have (high-degree) polynomial worst-case running time. Instead, to avoid resorting to these more computationally expensive exact methods, we adapt the algorithm in Figure 6 to the constraints involving temporary symbols using the following heuristics:", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 138, |
| "text": "(Goldberg et al., 2014)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 186, |
| "end": 216, |
| "text": "(G\u00f3mez-Rodr\u00edguez et al., 2014)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 454, |
| "end": 462, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Heuristic-based Dynamic Oracle for SR-TMP transition system", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 If the standard oracle predicts a reduction, choose its label so that every reachable constituent (X, m, j) \u2208 \u03b3* (m < i) is still reachable after the transition. In practice, if such a constituent exists and the third symbol on the stack is a temporary symbol, do not predict a temporary symbol.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Heuristic-based Dynamic Oracle for SR-TMP transition system", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2022 When reductions to both temporary symbols and non-temporary symbols have cost zero, only predict temporary symbols. This should not harm training and improve precision for the unbinarized tree, as any non temporary Configuration stack Gold tree", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Heuristic-based Dynamic Oracle for SR-TMP transition system", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "D: m,i A i,k B k,j C m,j D m,i C: i,j A i,k B k,j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Heuristic-based Dynamic Oracle for SR-TMP transition system", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Figure 7: Problematic case. Due to the temporary symbol constraints enforced by SR-TMP, the algorithm in Figure 6 will fail on this example. (Petrov et al., 2006) symbol in the binarized tree corresponds to a constituent in the n-ary tree.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 162, |
| "text": "(Petrov et al., 2006)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 105, |
| "end": 113, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Heuristic-based Dynamic Oracle for SR-TMP transition system", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Head choice In some cases, namely when reducing two non-temporary symbols to a new constituent (X, i, j), the oracle must determine the head position in the reduction (REDUCE-RIGHT or REDUCE-LEFT). We used the following heuristic: if (X, i, j) is in the gold set, choose the same head position, otherwise, predict both RR(X) and RL(X) to keep the non-determinism.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Heuristic-based Dynamic Oracle for SR-TMP transition system", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We conducted parsing experiments to evaluate our proposal. We compare two experimental settings. In the 'static' setting, the parser is trained only on gold configurations; in the 'dynamic' setting, we use the dynamic oracle and the training method in Figure 5 to explore non-gold configurations. We used both the SPMRL dataset (Seddah et al., 2013) in the 'predicted tag' scenario, and the Penn Treebank (Marcus et al., 1993) , to compare our proposal to existing systems. The tags and morphological attributes were predicted using Marmot, by 10-fold jackknifing for the train and development sets. For the SPMRL dataset, the head annotation was carried out with the procedures described in Crabb\u00e9 (2015), using the alignment between dependency treebanks and constituent treebanks. For English, we used Collins' head annotation rules (Collins, 2003) . Our system is entirely supervised and uses no external data. Every embedding was initialised randomly (uniformly) in the interval [\u22120.01, 0.01]. Word embeddings have 32 dimensions; tag and non-terminal embeddings have 16 dimensions. The dimensions of the morphological attribute embeddings depend on the number of values the attributes can take (Table 4 ). The hidden layer has 512 units. 4 For the 'dynamic' setting, we trained every k-th sentence with the dynamic oracle and the remaining sentences with the static oracle. This method, used by Straka et al. (2015) , allows for high values of p without slowing or preventing convergence. We used several hyperparameter combinations (see Table 5 of Annex A). For each language, we present the model with the combination that maximizes the development set F-score. We used Averaged Stochastic Gradient Descent (Polyak and Juditsky, 1992) to minimize the negative log likelihood of the training examples. We shuffled the sentences in the training set before each iteration.", |
| "cite_spans": [ |
| { |
| "start": 328, |
| "end": 349, |
| "text": "(Seddah et al., 2013)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 405, |
| "end": 426, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 836, |
| "end": 851, |
| "text": "(Collins, 2003)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1224, |
| "end": 1225, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 1380, |
| "end": 1400, |
| "text": "Straka et al. (2015)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 1697, |
| "end": 1724, |
| "text": "(Polyak and Juditsky, 1992)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 252, |
| "end": 260, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 1180, |
| "end": 1188, |
| "text": "(Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 1525, |
| "end": 1532, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Results Results for English are shown in Table 3 . The use of the dynamic oracle improves F-score by 0.4 on the development set and 0.6 on the test set. The resulting parser, despite using greedy decoding and no additional data, is quite accurate. For example, it compares well with Hall et al. (2014)'s span-based model and is much faster.", |
| "cite_spans": [ |
| { |
| "start": 284, |
| "end": 302, |
| "text": "Hall et al. (2014)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 41, |
| "end": 49, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For the SPMRL dataset, we report results on the development sets and test sets in Table 2 . The metrics take punctuation and unparsed sentences into account (Seddah et al., 2013) . We compare our results with the SPMRL shared task baselines (Seddah et al., 2013) and several other parsing models. The model of Bj\u00f6rkelund et al. (2014) obtained the best results on this dataset. It is based on a product grammar and a discriminative reranker, together with morphological features and word clusters learned on unannotated data. Durrett and Klein (2015) use a neural CRF based on the CKY decoding algorithm, with word embeddings pretrained on unannotated data. Fern\u00e1ndez-Gonz\u00e1lez and Martins (2015) use a parsing-as-reduction approach, based on a dependency parser with a label set rich enough to reconstruct constituent trees from dependency trees. Finally, Crabb\u00e9 (2015) uses a structured perceptron with rich features and beam-search decoding. Both Crabb\u00e9 (2015) and Bj\u00f6rkelund et al. (2014) use MARMOT-predicted morphological tags, as is done in our experiments.", |
| "cite_spans": [ |
| { |
| "start": 157, |
| "end": 178, |
| "text": "(Seddah et al., 2013)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 241, |
| "end": 262, |
| "text": "(Seddah et al., 2013)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 310, |
| "end": 334, |
| "text": "Bj\u00f6rkelund et al. (2014)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 963, |
| "end": 987, |
| "text": "Bj\u00f6rkelund et al. (2014)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 82, |
| "end": 89, |
| "text": "Table 2", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Our results show that, despite using a very simple greedy inference and being strictly supervised, our base model (static oracle training) is competitive with the best single parsers on this dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We hypothesize that these surprising results come both from the neural scoring model and the morphological attribute embeddings (especially for Basque, Hebrew, Polish and Swedish). We did not test these hypotheses systematically and leave this investigation for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Furthermore, we observe that the dynamic oracle improves training by up to 0.6 F-score (averaged over all languages). The improvement depends on the language: for example, Swedish, Arabic, Basque and German show the largest improvements. In terms of absolute score, the parser also achieves very good results on Korean and Basque, and even outperforms Bj\u00f6rkelund et al. (2014)'s reranker on Korean.", |
| "cite_spans": [ |
| { |
| "start": 376, |
| "end": 400, |
| "text": "Bj\u00f6rkelund et al. (2014)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Combined effect of beam and dynamic oracle Although dynamic oracle training was initially designed to improve parsing without relying on more complex search methods (Goldberg and Nivre, 2012) , we tested the combined effects of dynamic oracle training and beam-search decoding. In Table 2 , we provide results for beam decoding with the already trained local models in the 'dynamic' setting. The transition from greedy search to a beam of size two brings an improvement comparable to that of the dynamic oracle. Further increases in beam size do not seem to have any noticeable effect, except for Arabic. These results show that the effects of the dynamic oracle and beam decoding are complementary, and suggest that a good tradeoff between speed and accuracy is already achieved in a greedy setting or with a very small beam size.", |
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 192, |
| "text": "(Goldberg and Nivre, 2012)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 282, |
| "end": 289, |
| "text": "Table 2", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We have described a dynamic oracle for constituent parsing. Experiments show that training a parser against this oracle leads to an improvement in accuracy over a static oracle. Together with morphological features, we obtain a greedy parser as accurate as state-of-the-art (non-reranking) parsers for morphologically-rich languages. Table 5 : Hyperparameters. \u03b1 is the decrease constant used for the learning rate (Bottou, 2010) .", |
| "cite_spans": [ |
| { |
| "start": 415, |
| "end": 429, |
| "text": "(Bottou, 2010)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 334, |
| "end": 341, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "s0.ct, s0.wt.tag, s0.wt.form, q1.tag; s0.cl, s0.wl.tag, s0.wl.form, q2.tag; s0.cr, s0.wr.tag, s0.wr.form, q3.tag; s1.ct, s1.wt.tag, s1.wt.form, q4.tag; s1.cl, s1.wl.tag, s1.wl.form, q1.form; s1.cr, s1.wr.tag, s1.wr.form, q2.form; s2.ct, s2.wt.tag, s2.wt.form, q3.form; q4.form; s0.wt.m \u2200m \u2208 M; q0.m \u2200m \u2208 M; s1.wt.m \u2200m \u2208 M; q1.m \u2200m \u2208 M. Table 6 : These templates specify a list of addresses in a configuration. The input of the neural network is the instantiation of each address by a discrete typed symbol. Each v_i (Section 2) is the embedding of the i-th instantiated symbol of this list. M is the set of all available morphological attributes for a given language. We use the following notations (cf. Figure 4 ): s_i is the i-th item in the stack, c denotes non-terminals, and t (top), l (left) and r (right) indicate the position of an element in the subtree. Finally, w and q are respectively stack and buffer tokens.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 404, |
| "end": 411, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 771, |
| "end": 779, |
| "text": "Figure 4", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This transition system is similar to the extended system of Zhu et al. (2013). The main difference is the strategy used to deal with unary reductions. Our strategy ensures that derivations for a sentence all have the same number of steps, which can have an effect when using beam search. We use a GHOST-REDUCE action, whereas they use a padding strategy with an IDLE action.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We did not tune these hyperparameters for each language. Instead, we chose a set of hyperparameters which achieved a tradeoff between training time and model accuracy. The effects of the morphological features and of their dimensionality are left to future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers, along with H\u00e9ctor Mart\u00ednez Alonso and Olga Seminck for valuable suggestions to improve prior versions of this article.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "(re)ranking meets morphosyntax: State-of-the-art results from the SPMRL 2013 shared task", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "Rich\u00e1rd", |
| "middle": [], |
| "last": "\u00d6zlem \u00c7 Etinoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Farkas", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Mueller", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Seeker", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages", |
| "volume": "", |
| "issue": "", |
| "pages": "135--145", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders Bj\u00f6rkelund,\u00d6zlem \u00c7 etinoglu, Rich\u00e1rd Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (re)ranking meets morphosyntax: State-of-the-art results from the SPMRL 2013 shared task. In Pro- ceedings of the Fourth Workshop on Statistical Pars- ing of Morphologically-Rich Languages, pages 135- 145, Seattle, Washington, USA, October. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Introducing the ims-wroc\u0142aw-szeged-cis entry at the spmrl 2014 shared task: Reranking and morpho-syntax meet unlabeled data", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "Agnieszka", |
| "middle": [], |
| "last": "\u00d6zlem \u00c7 Etinoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Rich\u00e1rd", |
| "middle": [], |
| "last": "Fale\u0144ska", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Farkas", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mueller", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages", |
| "volume": "", |
| "issue": "", |
| "pages": "97--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders Bj\u00f6rkelund,\u00d6zlem \u00c7 etinoglu, Agnieszka Fale\u0144ska, Rich\u00e1rd Farkas, Thomas Mueller, Wolf- gang Seeker, and Zsolt Sz\u00e1nt\u00f3. 2014. Introduc- ing the ims-wroc\u0142aw-szeged-cis entry at the spmrl 2014 shared task: Reranking and morpho-syntax meet unlabeled data. In Proceedings of the First Joint Workshop on Statistical Parsing of Morpho- logically Rich Languages and Syntactic Analysis of Non-Canonical Languages, pages 97-102, Dublin, Ireland, August. Dublin City University.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Large-Scale Machine Learning with Stochastic Gradient Descent", |
| "authors": [ |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of COMPSTAT'2010", |
| "volume": "", |
| "issue": "", |
| "pages": "177--186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L\u00e9on Bottou. 2010. Large-Scale Machine Learn- ing with Stochastic Gradient Descent. In Yves Lechevallier and Gilbert Saporta, editors, Proceed- ings of COMPSTAT'2010, pages 177-186. Physica- Verlag HD.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A fast and accurate dependency parser using neural networks", |
| "authors": [ |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural net- works. In Empirical Methods in Natural Language Processing (EMNLP).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Head-driven statistical models for natural language parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Comput. Linguist", |
| "volume": "29", |
| "issue": "4", |
| "pages": "589--637", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2003. Head-driven statistical mod- els for natural language parsing. Comput. Linguist., 29(4):589-637, December.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08", |
| "volume": "", |
| "issue": "", |
| "pages": "160--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Pro- ceedings of the 25th International Conference on Machine Learning, ICML '08, pages 160-167, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Multilingual discriminative lexicalized phrase structure parsing", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Benoit Crabb\u00e9", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1847--1856", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benoit Crabb\u00e9. 2015. Multilingual discriminative lex- icalized phrase structure parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1847-1856, Lisbon, Portugal, September. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Search-based structured prediction", |
| "authors": [ |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Langford", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Machine Learning", |
| "volume": "75", |
| "issue": "3", |
| "pages": "297--325", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hal Daum\u00e9, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learn- ing, 75(3):297-325.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Neural crf parsing", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and Dan Klein. 2015. Neural crf pars- ing. In Proceedings of the Association for Computa- tional Linguistics, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Parsing as reduction", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Fern\u00e1ndez", |
| "suffix": "" |
| }, |
| { |
| "first": "-Gonz\u00e1lez", |
| "middle": [], |
| "last": "Andr\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "T" |
| ], |
| "last": "Martins", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1523--1533", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Andr\u00e9 F. T. Martins. 2015. Parsing as reduction. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 1523-1533, Beijing, China, July. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A dynamic oracle for arc-eager dependency parsing", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "The COLING 2012 Organizing Committee", |
| "volume": "", |
| "issue": "", |
| "pages": "959--976", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Pro- ceedings of COLING 2012, pages 959-976, Mum- bai, India, December. The COLING 2012 Organiz- ing Committee.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Training deterministic parsers with non-deterministic oracles", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "403--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics, 1:403-414.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A tabular method for dynamic oracles in transition-based parsing", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Sartorio", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "TACL", |
| "volume": "2", |
| "issue": "", |
| "pages": "119--130", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg, Francesco Sartorio, and Giorgio Satta. 2014. A tabular method for dynamic oracles in transition-based parsing. TACL, 2:119-130.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "An efficient dynamic oracle for unrestricted non-projective parsing", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "G\u00f3mez", |
| "suffix": "" |
| }, |
| { |
| "first": "-Rodr\u00edguez", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Fern\u00e1ndez-Gonz\u00e1lez", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "256--261", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos G\u00f3mez-Rodr\u00edguez and Daniel Fern\u00e1ndez- Gonz\u00e1lez. 2015. An efficient dynamic oracle for unrestricted non-projective parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 256-261, Beijing, China, July. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A polynomial-time dynamic oracle for non-projective dependency parsing", |
| "authors": [ |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "G\u00f3mez-Rodr\u00edguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Sartorio", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "917--927", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, Francesco Sartorio, and Giorgio Satta. 2014. A polynomial-time dy- namic oracle for non-projective dependency pars- ing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 917-927, Doha, Qatar, Octo- ber. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Less grammar, more features", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Hall, Greg Durrett, and Dan Klein. 2014. Less grammar, more features. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), Bal- timore, Maryland, June. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Building a large annotated corpus of english: The penn treebank", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computa- tional Linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Shift-reduce constituency parsing with dynamic programming and pos tag lattice", |
| "authors": [ |
| { |
| "first": "Haitao", |
| "middle": [], |
| "last": "Mi", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1030--1035", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haitao Mi and Liang Huang. 2015. Shift-reduce con- stituency parsing with dynamic programming and pos tag lattice. In Proceedings of the 2015 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 1030-1035, Denver, Col- orado, May-June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Efficient higher-order CRFs for morphological tagging", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mueller", |
| "suffix": "" |
| }, |
| { |
| "first": "Helmut", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| }, |
| { |
| "first": "Hinrich", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Mueller, Helmut Schmid, and Hinrich Sch\u00fctze. 2013. Efficient higher-order CRFs for morphological tagging. In Proceedings of the 2013", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Seattle, Washington, USA, October. Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "322--332", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Conference on Empirical Methods in Natural Lan- guage Processing, pages 322-332, Seattle, Wash- ington, USA, October. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Learning accurate, compact, and interpretable tree annotation", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Leon", |
| "middle": [], |
| "last": "Barrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Romain", |
| "middle": [], |
| "last": "Thibaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "433--440", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Associa- tion for Computational Linguistics, pages 433-440, Sydney, Australia, July. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Acceleration of stochastic approximation by averaging", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [ |
| "T" |
| ], |
| "last": "Polyak", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "B" |
| ], |
| "last": "Juditsky", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "SIAM J. Control Optim", |
| "volume": "30", |
| "issue": "4", |
| "pages": "838--855", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. T. Polyak and A. B. Juditsky. 1992. Acceleration of stochastic approximation by averaging. SIAM J. Control Optim., 30(4):838-855, July.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "A reduction of imitation learning and structured prediction to no-regret online learning", |
| "authors": [ |
| { |
| "first": "St\u00e9phane", |
| "middle": [], |
| "last": "Ross", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "J" |
| ], |
| "last": "Gordon", |
| "suffix": "" |
| }, |
| { |
| "first": "Drew", |
| "middle": [], |
| "last": "Bagnell", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "JMLR Proceedings", |
| "volume": "15", |
| "issue": "", |
| "pages": "627--635", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "St\u00e9phane Ross, Geoffrey J. Gordon, and Drew Bagnell. 2011. A reduction of imitation learning and struc- tured prediction to no-regret online learning. In Ge- offrey J. Gordon, David B. Dunson, and Miroslav Dud\u00edk, editors, AISTATS, volume 15 of JMLR Pro- ceedings, pages 627-635. JMLR.org.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "A classifier-based parser with linear run-time complexity", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Ninth International Workshop on Parsing Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "125--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceed- ings of the Ninth International Workshop on Parsing Technology, pages 125-132. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "A best-first probabilistic shift-reduce parser", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the COLING/ACL on Main conference poster sessions", |
| "volume": "", |
| "issue": "", |
| "pages": "691--698", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenji Sagae and Alon Lavie. 2006. A best-first prob- abilistic shift-reduce parser. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 691-698. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages", |
| "authors": [ |
| { |
| "first": "Djam\u00e9", |
| "middle": [], |
| "last": "Seddah", |
| "suffix": "" |
| }, |
| { |
| "first": "Reut", |
| "middle": [], |
| "last": "Tsarfaty", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Candito", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinho", |
| "middle": [ |
| "D" |
| ], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Rich\u00e1rd", |
| "middle": [], |
| "last": "Farkas", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Iakes", |
| "middle": [], |
| "last": "Goenaga", |
| "suffix": "" |
| }, |
| { |
| "first": "Koldo", |
| "middle": [], |
| "last": "Gojenola Galletebeitia", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Spence", |
| "middle": [], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "Nizar", |
| "middle": [], |
| "last": "Habash", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Maier", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Przepi\u00f3rkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Seeker", |
| "suffix": "" |
| }, |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Versley", |
| "suffix": "" |
| }, |
| { |
| "first": "Veronika", |
| "middle": [], |
| "last": "Vincze", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcin", |
| "middle": [], |
| "last": "Woli\u0144ski", |
| "suffix": "" |
| }, |
| { |
| "first": "Alina", |
| "middle": [], |
| "last": "Wr\u00f3blewska", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Villemonte de la Clergerie", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages", |
| "volume": "", |
| "issue": "", |
| "pages": "146--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Djam\u00e9 Seddah, Reut Tsarfaty, Sandra K\u00fcbler, Marie Candito, Jinho D. Choi, Rich\u00e1rd Farkas, Jen- nifer Foster, Iakes Goenaga, Koldo Gojenola Gal- letebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi\u00f3rkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli\u0144ski, Alina Wr\u00f3blewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Pro- ceedings of the Fourth Workshop on Statistical Pars- ing of Morphologically-Rich Languages, pages 146- 182, Seattle, Washington, USA, October. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Parsing universal dependency treebanks using neural networks and search-based oracle", |
| "authors": [ |
| { |
| "first": "Milan", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "Jana", |
| "middle": [], |
| "last": "Strakov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "jr." |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of Fourteenth International Workshop on Treebanks and Linguistic Theories", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Milan Straka, Jan Haji\u010d, Jana Strakov\u00e1, and Jan Haji\u010d jr. 2015. Parsing universal dependency tree- banks using neural networks and search-based or- acle. In Proceedings of Fourteenth International Workshop on Treebanks and Linguistic Theories (TLT 14), December.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Transitionbased neural constituent parsing", |
| "authors": [ |
| { |
| "first": "Taro", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| }, |
| { |
| "first": "Eiichiro", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taro Watanabe and Eiichiro Sumita. 2015. Transition- based neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "text": "Transition system, the transition RR(X) and the lexicalization of symbols are omitted.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "text": "Schematic representation of local elements in a configuration.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "text": "Online training for a single annotated sentence s, using an oracle function o.", |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>: Results on the Penn Treebank (Mar-</td></tr><tr><td>cus et al., 1993). \u2020 use clusters or word vectors</td></tr><tr><td>learned on unannotated data. different architec-</td></tr><tr><td>ture (2.3Ghz Intel), single processor.</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td/><td/><td colspan=\"9\">Arabic Basque French German Hebrew Hungarian Korean Polish Swedish</td><td>Avg</td></tr><tr><td/><td>Decoding</td><td/><td/><td/><td colspan=\"4\">Development F1 (EVALBSPMRL)</td><td/><td/><td/></tr><tr><td>Durrett and Klein (2015) \u2020</td><td>CKY</td><td>80.68</td><td>84.37</td><td>80.65</td><td>85.25</td><td>89.37</td><td>89.46</td><td>82.35</td><td>92.10</td><td>77.93</td><td>84.68</td></tr><tr><td colspan=\"2\">Crabb\u00e9 (2015) beam=8</td><td>81.25</td><td>84.01</td><td>80.87</td><td>84.08</td><td>90.69</td><td>88.27</td><td>83.09</td><td>92.78</td><td>77.87</td><td>84.77</td></tr><tr><td>static (this work)</td><td>greedy</td><td>80.25</td><td>84.29</td><td>79.87</td><td>83.99</td><td>89.78</td><td>88.44</td><td>84.98</td><td>92.38</td><td>76.63</td><td>84.51</td></tr><tr><td>dynamic (this work)</td><td>greedy</td><td>80.94</td><td>85.17</td><td>80.31</td><td>84.61</td><td>90.20</td><td>88.70</td><td>85.46</td><td>92.57</td><td>77.87</td><td>85.09</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Test F1 (EVALBSPMRL)</td><td/><td/><td/><td/></tr><tr><td>Bj\u00f6rkelund et al. 
(2014) \u2020</td><td/><td>81.32 *</td><td>88.24</td><td>82.53</td><td>81.66</td><td>89.80</td><td>91.72</td><td>83.81</td><td>90.50</td><td>85.50</td><td>86.12</td></tr><tr><td>Berkeley (Petrov et al., 2006)</td><td>CKY</td><td>79.19</td><td>70.50</td><td>80.38</td><td>78.30</td><td>86.96</td><td>81.62</td><td>71.42</td><td>79.23</td><td>79.18</td><td>78.53</td></tr><tr><td>Berkeley-Tags</td><td>CKY</td><td>78.66</td><td>74.74</td><td>79.76</td><td>78.28</td><td>85.42</td><td>85.22</td><td>78.56</td><td>86.75</td><td>80.64</td><td>80.89</td></tr><tr><td>Durrett and Klein (2015) \u2020</td><td>CKY</td><td>80.24</td><td>85.41</td><td>81.25</td><td>80.95</td><td>88.61</td><td>90.66</td><td>82.23</td><td>92.97</td><td>83.45</td><td>85.09</td></tr><tr><td colspan=\"2\">Crabb\u00e9 (2015) beam=8</td><td>81.31</td><td>84.94</td><td>80.84</td><td>79.26</td><td>89.65</td><td>90.14</td><td>82.65</td><td>92.66</td><td>83.24</td><td>84.97</td></tr><tr><td>Fern\u00e1ndez-Gonz\u00e1lez and Martins (2015)</td><td/><td>-</td><td>85.90</td><td>78.75</td><td>78.66</td><td>88.97</td><td>88.16</td><td>79.28</td><td>91.20</td><td>82.80</td><td>(84.22)</td></tr><tr><td>static (this work)</td><td>greedy</td><td>79.77</td><td>85.91</td><td>79.62</td><td>79.20</td><td>88.64</td><td>90.54</td><td>84.53</td><td>92.69</td><td>81.45</td><td>84.71</td></tr><tr><td>dynamic (this work)</td><td>greedy</td><td>80.71</td><td>86.24</td><td>79.91</td><td>80.15</td><td>88.69</td><td>90.51</td><td>85.10</td><td>92.96</td><td>81.74</td><td>85.11</td></tr><tr><td colspan=\"2\">dynamic (this work) beam=2</td><td>81.14</td><td>86.45</td><td>80.32</td><td>80.68</td><td>89.06</td><td>90.74</td><td>85.17</td><td>93.15</td><td>82.65</td><td>85.48</td></tr><tr><td colspan=\"2\">dynamic (this work) beam=4</td><td>81.59</td><td>86.45</td><td>80.48</td><td>80.69</td><td>89.18</td><td>90.73</td><td>85.31</td><td>93.13</td><td>82.77</td><td>85.59</td></tr><tr><td colspan=\"2\">dynamic (this work) 
beam=8</td><td>81.80</td><td>86.48</td><td>80.56</td><td>80.74</td><td>89.24</td><td>90.76</td><td>85.33</td><td>93.13</td><td>82.80</td><td>85.64</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "Size of morphological attributes embeddings.", |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "Results on development and test corpora. Metrics are provided by evalb spmrl with spmrl.prm parameters (http://www.spmrl.org/spmrl2013-sharedtask.html). \u2020 use clusters or word vectors learned on unannotated data.", |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "content": "<table><tr><td colspan=\"4\">A Supplementary Material</td><td/></tr><tr><td colspan=\"3\">'static' and 'dynamic' setting</td><td colspan=\"2\">'dynamic' setting</td></tr><tr><td>learning rate</td><td>\u03b1</td><td>iterations</td><td>k</td><td>p</td></tr><tr><td colspan=\"2\">{0.01, 0.02} {0, 10 \u22126 }</td><td>[1, 24]</td><td colspan=\"2\">{8, 16} {0.5, 0.9}</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "(Volume 1: Long Papers), pages 1169-1179, Beijing, China, July. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2009. Transitionbased parsing of the chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies, IWPT '09, pages 162-171, Stroudsburg, PA, USA. Association for Computational Linguistics. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In ACL (1), pages 434-443. The Association for Computer Linguistics.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |