| { |
| "paper_id": "Q16-1023", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:07:11.984696Z" |
| }, |
| "title": "Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations", |
| "authors": [ |
| { |
| "first": "Eliyahu", |
| "middle": [], |
| "last": "Kiperwasser", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Bar-Ilan University", |
| "location": { |
| "settlement": "Ramat-Gan", |
| "country": "Israel" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Bar-Ilan University", |
| "location": { |
| "settlement": "Ramat-Gan", |
| "country": "Israel" |
| } |
| }, |
| "email": "yoav.goldberg@gmail.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.", |
| "pdf_parse": { |
| "paper_id": "Q16-1023", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The focus of this paper is on feature representation for dependency parsing, using recent techniques from the neural-networks (\"deep learning\") literature. Modern approaches to dependency parsing can be broadly categorized into graph-based and transition-based parsers (K\u00fcbler et al., 2009) . Graph-based parsers (McDonald, 2006) treat parsing as a search-based structured prediction problem in which the goal is learning a scoring function over dependency trees such that the correct tree is scored above all other trees. Transition-based parsers (Nivre, 2004; Nivre, 2008) treat parsing as a sequence of actions that produce a parse tree, and a classifier is trained to score the possible actions at each stage of the process and guide the parsing process. Perhaps the simplest graph-based parsers are arc-factored (first order) models (McDonald, 2006) , in which the scoring function for a tree decomposes over the individual arcs of the tree. More elaborate models look at larger (overlapping) parts, requiring more sophisticated inference and training algorithms (Martins et al., 2009; Koo and Collins, 2010) . The basic transition-based parsers work in a greedy manner, performing a series of locally-optimal decisions, and boast very fast parsing speeds. More advanced transition-based parsers introduce some search into the process using a beam (Zhang and Clark, 2008) or dynamic programming (Huang and Sagae, 2010) .", |
| "cite_spans": [ |
| { |
| "start": 269, |
| "end": 290, |
| "text": "(K\u00fcbler et al., 2009)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 313, |
| "end": 329, |
| "text": "(McDonald, 2006)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 548, |
| "end": 561, |
| "text": "(Nivre, 2004;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 562, |
| "end": 574, |
| "text": "Nivre, 2008)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 838, |
| "end": 854, |
| "text": "(McDonald, 2006)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 1068, |
| "end": 1090, |
| "text": "(Martins et al., 2009;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 1091, |
| "end": 1113, |
| "text": "Koo and Collins, 2010)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1353, |
| "end": 1376, |
| "text": "(Zhang and Clark, 2008)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 1400, |
| "end": 1423, |
| "text": "(Huang and Sagae, 2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Regardless of the details of the parsing framework being used, a crucial step in parser design is choosing the right feature function for the underlying statistical model. Recent work (see Section 2.2 for an overview) attempts to alleviate parts of the feature function design problem by moving from linear to non-linear models, enabling the modeler to focus on a small set of \"core\" features and leaving it up to the machine-learning machinery to come up with good feature combinations (Chen and Manning, 2014; Pei et al., 2015; Lei et al., 2014; Taub-Tabib et al., 2015) . However, the need to carefully define a set of core features remains. For example, the work of Chen and Manning (2014) uses 18 different elements in its feature function, while the work of Pei et al. (2015) uses 21 different elements. Other works, notably Dyer et al. (2015) and Le and Zuidema (2014) , propose more sophisticated feature representations, in which the feature engineering is replaced with architecture engineering.", |
| "cite_spans": [ |
| { |
| "start": 486, |
| "end": 510, |
| "text": "(Chen and Manning, 2014;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 511, |
| "end": 528, |
| "text": "Pei et al., 2015;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 529, |
| "end": 546, |
| "text": "Lei et al., 2014;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 547, |
| "end": 571, |
| "text": "Taub-Tabib et al., 2015)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 669, |
| "end": 692, |
| "text": "Chen and Manning (2014)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 763, |
| "end": 780, |
| "text": "Pei et al. (2015)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 830, |
| "end": 848, |
| "text": "Dyer et al. (2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 853, |
| "end": 874, |
| "text": "Le and Zuidema (2014)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we suggest an approach which is much simpler in terms of both feature engineering and architecture engineering. Our proposal (Section 3) is centered around BiRNNs (Irsoy and Cardie, 2014; Schuster and Paliwal, 1997) , and more specifically BiLSTMs (Graves, 2008) , which are strong and trainable sequence models (see Section 2.3). The BiLSTM excels at representing elements in a sequence (i.e., words) together with their contexts, capturing the element and an \"infinite\" window around it. We represent each word by its BiLSTM encoding, and use a concatenation of a minimal set of such BiLSTM encodings as our feature function, which is then passed to a non-linear scoring function (multi-layer perceptron). Crucially, the BiLSTM is trained with the rest of the parser in order to learn a good feature representation for the parsing problem. If we set aside the inherent complexity of the BiLSTM itself and treat it as a black box, our proposal results in a pleasingly simple feature extractor.", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 201, |
| "text": "(Irsoy and Cardie, 2014;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 202, |
| "end": 229, |
| "text": "Schuster and Paliwal, 1997)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 262, |
| "end": 276, |
| "text": "(Graves, 2008)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We demonstrate the effectiveness of the approach by using the BiLSTM feature extractor in two parsing architectures, transition-based (Section 4) as well as a graph-based (Section 5). In the graph-based parser, we jointly train a structured-prediction model on top of a BiLSTM, propagating errors from the structured objective all the way back to the BiLSTM feature-encoder. To the best of our knowledge, we are the first to perform such end-to-end training of a structured prediction model and a recurrent feature extractor for non-sequential outputs. 1 Aside from the novelty of the BiLSTM feature extractor and the end-to-end structured training, we rely on existing models and techniques from the parsing and structured prediction literature. We stick to the simplest parsers in each category - greedy inference for the transition-based architecture, and a first-order, arc-factored model for the graph-based architecture. Despite the simplicity of the parsing architectures and the feature functions, we achieve near state-of-the-art parsing accuracies in both English (93.1 UAS) and Chinese (86.6 UAS), using a first-order parser with two features and while training solely on Treebank data, without relying on semi-supervised signals such as pre-trained word embeddings (Chen and Manning, 2014) , word-clusters (Koo et al., 2008) , or techniques such as tri-training (Weiss et al., 2015) . When also including pre-trained word embeddings, we obtain further improvements, with accuracies of 93.9 UAS (English) and 87.6 UAS (Chinese) for a greedy transition-based parser with 11 features, and 93.6 UAS (En) / 87.4 (Ch) for a greedy transition-based parser with 4 features.", |
| "cite_spans": [ |
| { |
| "start": 552, |
| "end": 553, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 1273, |
| "end": 1297, |
| "text": "(Chen and Manning, 2014)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1314, |
| "end": 1332, |
| "text": "(Koo et al., 2008)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1371, |
| "end": 1391, |
| "text": "(Weiss et al., 2015)", |
| "ref_id": "BIBREF43" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Notation We use x 1:n to denote a sequence of n vectors", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Notation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "x 1 , \u2022 \u2022 \u2022 , x n . F \u03b8 (\u2022)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Notation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "is a function parameterized with parameters \u03b8. We write F L (\u2022) as shorthand for F \u03b8 L -an instantiation of F with a specific set of parameters \u03b8 L . We use \u2022 to denote a vector concatenation operation, and v[i] to denote an indexing operation taking the ith element of a vector v.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background and Notation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Traditionally, state-of-the-art parsers rely on linear models over hand-crafted feature functions. The feature functions look at core components (e.g. \"word on top of stack\", \"leftmost child of the second-to-top word on the stack\", \"distance between the head and the modifier words\"), and are comprised of several templates, where each template instantiates a binary indicator function over a conjunction of core elements (resulting in features of the form \"word on top of stack is X and leftmost child is Y and . . . \"). The design of the feature function -which components to consider and which combinations of components to include -is a major challenge in parser design. Once a good feature function is proposed in a paper it is usually adopted in later works, and sometimes tweaked to improve performance. Examples of good feature functions are the feature-set proposed by Zhang and Nivre (2011) for transition-based parsing (including roughly 20 core components and 72 feature templates), and the feature-set proposed by McDonald et al. (2005) for graph-based parsing, with the paper listing 18 templates for a first-order parser, while the first order feature-extractor in the actual implementation's code (MST-Parser 2 ) includes roughly a hundred feature templates.", |
| "cite_spans": [ |
| { |
| "start": 877, |
| "end": 899, |
| "text": "Zhang and Nivre (2011)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 1024, |
| "end": 1046, |
| "text": "McDonald et al. (2005)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Functions in Dependency Parsing", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The core features in a transition-based parser usually look at information such as the word-identity and part-of-speech (POS) tags of a fixed number of words on top of the stack, a fixed number of words on the top of the buffer, the modifiers (usually leftmost and right-most) of items on the stack and on the buffer, the number of modifiers of these elements, parents of words on the stack, and the length of the spans spanned by the words on the stack. The core features of a first-order graph-based parser usually take into account the word and POS of the head and modifier items, as well as POS-tags of the items around the head and modifier, POS tags of items between the head and modifier, and the distance and direction between the head and modifier.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Functions in Dependency Parsing", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Coming up with a good feature-set for a parser is a hard and time-consuming task, and many researchers attempt to reduce the required manual effort. The work of Lei et al. (2014) suggests a low-rank tensor representation to automatically find good feature combinations. Taub-Tabib et al. (2015) suggest a kernel-based approach to implicitly consider all possible feature combinations over sets of core-features. The recent popularity of neural networks prompted a move from templates of sparse, binary indicator features to dense core feature encodings fed into non-linear classifiers. Chen and Manning (2014) encode each core feature of a greedy transition-based parser as a dense low-dimensional vector, and the vectors are then concatenated and fed into a nonlinear classifier (multi-layer perceptron) which can potentially capture arbitrary feature combinations. Weiss et al. (2015) showed further gains using the same approach coupled with a somewhat improved set of core features, a more involved network architecture with skip-layers, beam search-decoding, and careful hyper-parameter tuning. Pei et al. (2015) apply a similar methodology to graph-based parsing. While the move to neural-network classifiers alleviates the need for hand-crafting feature combinations, the need to carefully define a set of core features remains. For example, the feature representation in Chen and Manning (2014) is a concatenation of 18 word vectors, 18 POS vectors and 12 dependency-label vectors. 3 The above works tackle the effort in hand-crafting effective feature combinations. A different line of work attacks the feature-engineering problem by suggesting novel neural-network architectures for encoding the parser state, including intermediately-built subtrees, as vectors which are then fed to nonlinear classifiers. Titov and Henderson encode the parser state using incremental sigmoid-belief networks (2007) . In the work of Dyer et al. (2015) , the entire stack and buffer of a transition-based parser are encoded as stack-LSTMs, where each stack element is itself based on a compositional representation of parse trees. Le and Zuidema (2014) encode each tree node as two compositional representations capturing the inside and outside structures around the node, and feed the representations into a reranker. A similar reranking approach, this time based on convolutional neural networks, is taken by Zhu et al. (2015) . Finally, in Kiperwasser and Goldberg (2016) we present an Easy-First parser based on a novel hierarchical-LSTM tree encoding.", |
| "cite_spans": [ |
| { |
| "start": 867, |
| "end": 886, |
| "text": "Weiss et al. (2015)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 1100, |
| "end": 1117, |
| "text": "Pei et al. (2015)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 1488, |
| "end": 1489, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 1900, |
| "end": 1906, |
| "text": "(2007)", |
| "ref_id": null |
| }, |
| { |
| "start": 1924, |
| "end": 1942, |
| "text": "Dyer et al. (2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 2123, |
| "end": 2144, |
| "text": "Le and Zuidema (2014)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 2403, |
| "end": 2420, |
| "text": "Zhu et al. (2015)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research Efforts", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In contrast to these, the approach we present in this work results in much simpler feature functions, without resorting to elaborate network architectures or compositional tree representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research Efforts", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Work by Vinyals et al. (2015) employs a sequence-to-sequence with attention architecture for constituency parsing. Each token in the input sentence is encoded in a deep-BiLSTM representation, and then the tokens are fed as input to a deep-LSTM that predicts a sequence of bracketing actions based on the already predicted bracketing as well as the encoded BiLSTM vectors. A trainable attention mechanism is used to guide the parser to relevant BiLSTM vectors at each stage. This architecture shares with ours the use of BiLSTM encoding and end-to-end training. The sequence of bracketing actions can be interpreted as a sequence of Shift and Reduce operations of a transition-based parser. However, while the parser of Vinyals et al.", |
| "cite_spans": [ |
| { |
| "start": 8, |
| "end": 29, |
| "text": "Vinyals et al. (2015)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research Efforts", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "relies on a trainable attention mechanism for focusing on specific BiLSTM vectors, parsers in the transition-based family we use in Section 4 use a human-designed stack and buffer mechanism to manually direct the parser's attention. While the effectiveness of the trainable attention approach is impressive, the stack-and-buffer guidance of transition-based parsers results in more robust learning. Indeed, work by Cross and Huang (2016) , published while we were working on the camera-ready version of this paper, shows that the same methodology as ours is highly effective also for greedy, transition-based constituency parsing, surpassing the beam-based architecture of Vinyals et al. (88.3F vs. 89.8F points) when trained on the Penn Treebank dataset and without using orthogonal methods such as ensembling and up-training.", |
| "cite_spans": [ |
| { |
| "start": 414, |
| "end": 436, |
| "text": "Cross and Huang (2016)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 663, |
| "end": 702, |
| "text": "Vinyals et al. (88.3F vs. 89.8F points)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Research Efforts", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Recurrent neural networks (RNNs) are statistical learners for modeling sequential data. An RNN allows one to model the ith element in the sequence based on the past -the elements x 1:i up to and including it. The RNN model provides a framework for conditioning on the entire history x 1:i without resorting to the Markov assumption which is traditionally used for modeling sequences. RNNs were shown to be capable of learning to count, as well as to model line lengths and complex phenomena such as bracketing and code indentation (Karpathy et al., 2015) . Our proposed feature extractors are based on a bidirectional recurrent neural network (BiRNN), an extension of RNNs that take into account both the past x 1:i and the future x i:n . We use a specific flavor of RNN called a long short-term memory network (LSTM). For brevity, we treat RNN as an abstraction, without getting into the mathematical details of the implementation of the RNNs and LSTMs. For further details on RNNs and LSTMs, the reader is referred to Goldberg (2015) and Cho (2015) .", |
| "cite_spans": [ |
| { |
| "start": 531, |
| "end": 554, |
| "text": "(Karpathy et al., 2015)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1020, |
| "end": 1035, |
| "text": "Goldberg (2015)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1040, |
| "end": 1050, |
| "text": "Cho (2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Recurrent Neural Networks", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The recurrent neural network (RNN) abstraction is a parameterized function RNN \u03b8 (x 1:n ) mapping a sequence of n input vectors x 1:n , x i \u2208 R^{d_in} , to a sequence of n output vectors h 1:n , h i \u2208 R^{d_out} . Each output vector h i is conditioned on all the input vectors x 1:i , and can be thought of as a summary of the prefix x 1:i of x 1:n . In our notation, we ignore the intermediate vectors h 1:n\u22121 and take the output of RNN \u03b8 (x 1:n ) to be the vector h n .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Recurrent Neural Networks", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "A bidirectional RNN is composed of two RNNs, RNN F and RNN R , one reading the sequence in its regular order, and the other reading it in reverse. Concretely, given a sequence of vectors x 1:n and a desired index i, the function BIRNN \u03b8 (x 1:n , i) is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Recurrent Neural Networks", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "BIRNN \u03b8 (x 1:n , i) = RNN F (x 1:i ) \u2022 RNN R (x n:i ). The vector v i = BIRNN(x 1:n , i)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Recurrent Neural Networks", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "is then a representation of the ith item in x 1:n , taking into account both the entire history x 1:i and the entire future x i:n by concatenating the matching RNNs. We can view the BiRNN encoding of an item i as representing the item i together with a context of an infinite window around it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Recurrent Neural Networks", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Computational Complexity Computing the BiRNN vector encoding of the ith element of a sequence x 1:n requires O(n) time for computing the two RNNs and concatenating their outputs. A naive approach of computing the bidirectional representation of all n elements results in O(n^2) computation. However, it is trivial to compute the BiRNN encoding of all sequence items in linear time by pre-computing RNN F (x 1:n ) and RNN R (x n:1 ), keeping the intermediate representations, and concatenating the required elements as needed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Recurrent Neural Networks", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "BiRNN Training Initially, the BiRNN encodings v i do not capture any particular information. During training, the encoded vectors v i are fed into further network layers, until at some point a prediction is made, and a loss is incurred. The back-propagation algorithm is used to compute the gradients of all the parameters in the network (including the BiRNN parameters) with respect to the loss, and an optimizer is used to update the parameters according to the gradients. The training procedure causes the BiRNN function to extract from the input sequence x 1:n the relevant information for the task at hand. Training BiRNNs in this way has been empirically shown to be effective (Irsoy and Cardie, 2014) . In this work, we use BiRNNs and deep-BiRNNs interchangeably, specifying the number of layers when needed.", |
| "cite_spans": [ |
| { |
| "start": 679, |
| "end": 703, |
| "text": "(Irsoy and Cardie, 2014)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bidirectional Recurrent Neural Networks", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Historical Notes RNNs were introduced by Elman (1990), and extended to BiRNNs by Schuster and Paliwal (1997) . The LSTM variant of RNNs is due to Hochreiter and Schmidhuber (1997) . BiLSTMs were recently popularized by Graves (2008) , and deep BiRNNs were introduced to NLP by Irsoy and Cardie (2014) , who used them for sequence tagging. In the context of parsing, Lewis et al. (2016) and Vaswani et al. (2016) use a BiLSTM sequence tagging model to assign a CCG supertag for each token in the sentence. Lewis et al. (2016) feed the resulting supertag sequence into an A* CCG parser. Vaswani et al. (2016) add an additional layer of LSTM which receives the BiLSTM representation together with the k-best supertags for each word and outputs the most likely supertag given previous tags, and then feed the predicted supertags to a discriminatively trained parser. In both works, the BiLSTM is trained to produce accurate CCG supertags, and is not aware of the global parsing objective.", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 108, |
| "text": "Schuster and Paliwal (1997)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 146, |
| "end": 179, |
| "text": "Hochreiter and Schmidhuber (1997)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 219, |
| "end": 232, |
| "text": "Graves (2008)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 277, |
| "end": 300, |
| "text": "Irsoy and Cardie (2014)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 366, |
| "end": 385, |
| "text": "Lewis et al. (2016)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 390, |
| "end": 411, |
| "text": "Vaswani et al. (2016)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 505, |
| "end": 524, |
| "text": "Lewis et al. (2016)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 587, |
| "end": 608, |
| "text": "Vaswani et al. (2016)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Going deeper", |
| "sec_num": null |
| }, |
| { |
| "text": "We propose to replace the hand-crafted feature functions with minimally-defined feature functions which make use of automatically learned Bidirectional LSTM representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Given an n-word input sentence s with words w 1 , . . . , w n together with the corresponding POS tags t 1 , . . . , t n , 4 we associate each word w i and POS t i with embedding vectors e(w i ) and e(t i ), and create a sequence of input vectors x 1:n in which each x i is a concatenation of the corresponding word and POS vectors:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "x i = e(w i ) \u2022 e(t i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The embeddings are trained together with the model. This encodes each word in isolation, disregarding its context. We introduce context by representing each input element as its (deep) BiLSTM vector, v i :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "v i = BILSTM(x 1:n , i)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our feature function \u03c6 is then a concatenation of a small number of BiLSTM vectors. The exact feature function is parser dependent and will be discussed when discussing the corresponding parsers. The resulting feature vectors are then scored using a non-linear function, namely a multi-layer perceptron with one hidden layer (MLP):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "MLP \u03b8 (x) = W 2 \u00b7 tanh(W 1 \u00b7 x + b 1 ) + b 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where \u03b8 = {W 1 , W 2 , b 1 , b 2 } are the model parameters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Besides using the BiLSTM-based feature functions, we make use of standard parsing techniques. Crucially, the BiLSTM is trained jointly with the rest of the parser. This allows it to learn representations which are suitable for the parsing task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Consider a concatenation of two BiLSTM vectors (v i \u2022 v j ) scored using an MLP. The scoring function has access to the words and POS-tags of v i and v j , as well as the words and POS-tags of the words in an infinite window surrounding them. As LSTMs are known to capture length and sequence position information, it is very plausible that the scoring function can be sensitive also to the distance between i and j, their ordering, and the sequential material between them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Parsing-time Complexity Once the BiLSTM is trained, parsing is performed by first computing the BiLSTM encoding v i for each word in the sentence (a linear time operation). 5 Then, parsing proceeds as usual, where the feature extraction involves a concatenation of a small number of the pre-computed v i vectors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approach", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We begin by integrating the feature extractor in a transition-based parser (Nivre, 2008) . We follow the notation in Goldberg and Nivre (2013) . The", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 88, |
| "text": "(Nivre, 2008)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 117, |
| "end": 142, |
| "text": "Goldberg and Nivre (2013)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "[Figure 1 diagram: forward (LSTM f) and backward (LSTM b) LSTMs read the input vectors x for ROOT, the, brown, fox, jumped, over, the, lazy, dog; their outputs are concatenated into BiLSTM vectors v ROOT , v the , ..., v dog , a subset of which is fed to the MLP producing (ScoreLeftArc, ScoreRightArc, ScoreShift).]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Figure 1: Illustration of the neural model scheme of the transition-based parser when calculating the scores of the possible transitions in a given configuration. The configuration (stack and buffer) is depicted on the top. Each transition is scored using an MLP that is fed the BiLSTM encodings of the first word in the buffer and the three words at the top of the stack (the colors of the words correspond to colors of the MLP inputs above), and a transition is picked greedily. Each x i is a concatenation of a word and a POS vector, and possibly an additional external embedding vector for the word. The figure depicts a single-layer BiLSTM, while in practice we use two layers. When parsing a sentence, we iteratively compute scores for all possible transitions and apply the best scoring action until the final configuration is reached.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "transition-based parsing framework assumes a transition system, an abstract machine that processes sentences and produces parse trees. The transition system has a set of configurations and a set of transitions which are applied to configurations. When parsing a sentence, the system is initialized to an initial configuration based on the input sentence, and transitions are repeatedly applied to this configuration. After a finite number of transitions, the system arrives at a terminal configuration, and a parse tree is read off the terminal configuration. In a greedy parser, a classifier is used to choose the transition to take in each configuration, based on features extracted from the configuration itself. The parsing algorithm is presented in Algorithm 1 below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Given a sentence s, the parser is initialized with the configuration c (line 2). Then, a feature function \u03c6(c) represents the configuration c as a vector, which is fed to a scoring function SCORE assigning scores to (configuration,transition) pairs. SCORE Algorithm 1 Greedy transition-based parsing 1: Input:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "sentence s = w_1, . . . , w_n, t_1, . . . , t_n, parameterized function SCORE_θ(·) with parameters θ. 2: c ← INITIAL(s) 3: while not TERMINAL(c) do 4: t̂ ← arg max_{t∈LEGAL(c)} SCORE_θ(φ(c), t) 5:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "c ← t̂(c) 6: return tree(c) scores the possible transitions t, and the highest scoring transition t̂ is chosen (line 4). The transition t̂ is applied to the configuration, resulting in a new parser configuration. The process ends upon reaching a final configuration, from which the resulting parse tree is read off and returned (line 6).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
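Algorithm 1 can be transcribed almost line-for-line. In this sketch, `initial`, `terminal`, `legal`, `score` and `phi` are caller-supplied stand-ins (not the paper's implementation), and the toy driver at the bottom only exercises the control flow:

```python
def greedy_parse(sentence, initial, terminal, legal, score, phi):
    """Algorithm 1: repeatedly apply the highest-scoring legal
    transition until a terminal configuration is reached."""
    c = initial(sentence)                                      # line 2
    while not terminal(c):                                     # line 3
        best = max(legal(c), key=lambda t: score(phi(c), t))   # line 4
        c = best(c)                                            # line 5
    return c                           # line 6: tree is read off c

# Toy driver: the "configuration" is a countdown, the only legal
# "transition" decrements it, and parsing terminates at 0.
dec = lambda c: c - 1
final = greedy_parse(3, initial=lambda s: s,
                     terminal=lambda c: c == 0,
                     legal=lambda c: [dec],
                     score=lambda feats, t: 0.0,
                     phi=lambda c: c)
```

A real parser would plug in the arc-hybrid configurations, the BiLSTM-based φ, and the MLP scorer.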
| { |
| "text": "Transition systems differ by the way they define configurations, and by the particular set of transitions available to them. A parser is determined by the choice of a transition system, a feature function \u03c6 and a scoring function SCORE. Our choices are detailed below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The Arc-Hybrid System Many transition systems exist in the literature. In this work, we use the arc-hybrid transition system (Kuhlmann et al., 2011) , which is similar to the more popular arc-standard system (Nivre, 2004) , but for which an efficient dynamic oracle is available (Goldberg and Nivre, 2012; Goldberg and Nivre, 2013) . In the arc-hybrid system, a configuration c = (σ, β, T) consists of a stack σ, a buffer β, and a set T of dependency arcs. Both the stack and the buffer hold integer indices pointing to sentence elements. Given a sentence s = w_1, . . . , w_n, t_1, . . . , t_n, the system is initialized with an empty stack, an empty arc set, and β = 1, . . . , n, ROOT, where ROOT is the special root index. Any configuration c with an empty stack and a buffer containing only ROOT is terminal, and the parse tree is given by the arc set T_c of c. The arc-hybrid system allows three possible transitions, SHIFT, LEFT_ℓ and RIGHT_ℓ, defined as:",
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 147, |
| "text": "(Kuhlmann et al., 2011)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 207, |
| "end": 220, |
| "text": "(Nivre, 2004)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 278, |
| "end": 304, |
| "text": "(Goldberg and Nivre, 2012;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 305, |
| "end": 330, |
| "text": "Goldberg and Nivre, 2013)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "SHIFT[(σ, b_0|β, T)] = (σ|b_0, β, T) LEFT_ℓ[(σ|s_1|s_0, b_0|β, T)] = (σ|s_1, b_0|β, T ∪ {(b_0, s_0, ℓ)}) RIGHT_ℓ[(σ|s_1|s_0, β, T)] = (σ|s_1, β, T ∪ {(s_1, s_0, ℓ)})",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
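The three transitions can be transcribed directly onto (stack, buffer, arcs) triples. This is a sketch using plain Python lists, integer indices and string labels rather than the paper's data structures:

```python
def shift(stack, buffer, arcs):
    """SHIFT: move b0, the first buffer item, onto the stack."""
    return stack + [buffer[0]], buffer[1:], arcs

def left_arc(stack, buffer, arcs, label):
    """LEFT_l: pop s0 and attach it as a modifier of b0."""
    return stack[:-1], buffer, arcs | {(buffer[0], stack[-1], label)}

def right_arc(stack, buffer, arcs, label):
    """RIGHT_l: pop s0 and attach it as a modifier of s1."""
    return stack[:-1], buffer, arcs | {(stack[-2], stack[-1], label)}
```

Arcs are stored as (head, modifier, label) triples, matching the (b_0, s_0, ℓ) and (s_1, s_0, ℓ) arcs in the definitions above.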
| { |
| "text": "The SHIFT transition moves the first item of the buffer (b_0) to the stack. The LEFT_ℓ transition removes the first item on top of the stack (s_0) and attaches it as a modifier to b_0 with label ℓ, adding the arc (b_0, s_0, ℓ). The RIGHT_ℓ transition removes s_0 from the stack and attaches it as a modifier to the next item on the stack (s_1), adding the arc (s_1, s_0, ℓ).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Scoring Function Traditionally, the scoring function SCORE_θ(x, t) is a discriminative linear model of the form",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "SCORE_W(x, t) = (W · x)[t].",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The linearity of SCORE required the feature function \u03c6(\u2022) to encode non-linearities in the form of combination features. We follow Chen and Manning (2014) and replace the linear scoring model with an MLP.", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 154, |
| "text": "Chen and Manning (2014)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "SCORE_θ(x, t) = MLP_θ(x)[t]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Simple Feature Function The feature function \u03c6(c) is typically complex (see Section 2.1). Our feature function is the concatenated BiLSTM vectors of the top 3 items on the stack and the first item on the buffer. I.e., for a configuration c = (. . . |s 2 |s 1 |s 0 , b 0 | . . . , T ) the feature extractor is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "φ(c) = v_s2 • v_s1 • v_s0 • v_b0, where v_i = BiLSTM(x_1:n, i)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "This feature function is rather minimal: it takes into account the BiLSTM representations of s 1 , s 0 and b 0 , which are the items affected by the possible transitions being scored, as well as one extra stack context s 2 . 6 Figure 1 depicts transition scoring with our architecture and this feature function. Note that, unlike previous work, this feature function does not take into account T , the already built structure. The high parsing accuracies in the experimental sections suggest that the BiLSTM encoding is capable of estimating a lot of the missing information based on the provided stack and buffer elements and the sequential content between them.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 227, |
| "end": 235, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "While not explored in this work, relying on only four word indices for scoring an action results in very compact state signatures, making our proposed feature representation very appealing for use in transition-based parsers that employ dynamic-programming search (Huang and Sagae, 2010; Kuhlmann et al., 2011) .", |
| "cite_spans": [ |
| { |
| "start": 264, |
| "end": 287, |
| "text": "(Huang and Sagae, 2010;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 288, |
| "end": 310, |
| "text": "Kuhlmann et al., 2011)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Extended Feature Function One of the benefits of the greedy transition-based parsing framework is precisely its ability to look at arbitrary features from the already built tree. If we allow a somewhat less minimal feature function, we can add the BiLSTM vectors corresponding to the rightmost and leftmost modifiers of s_0, s_1 and s_2, as well as the leftmost modifier of b_0, reaching a total of 11 BiLSTM vectors. We refer to this as the extended feature set. As we show in Section 6, using the extended set does indeed improve parsing accuracies when using pre-trained word embeddings, but has a minimal effect in the fully-supervised case. 7",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transition-based Parser", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The training objective is to set the score of correct transitions above the scores of incorrect transitions. We use a margin-based objective, aiming to maximize the margin between the highest scoring correct action and the highest scoring incorrect action. The hinge loss at each parsing configuration c is defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Details of the Training Algorithm", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "max(0, 1 − max_{t_o∈G} MLP(φ(c))[t_o] + max_{t_p∈A\\G} MLP(φ(c))[t_p])",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Details of the Training Algorithm", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where A is the set of possible transitions and G is the set of correct (gold) transitions at the current stage. At each stage of the training process the parser scores the possible transitions A, incurs a loss, selects a transition to follow, and moves to the next configuration based on it. The local losses are summed throughout the parsing process of a sentence, and the parameters are updated with respect to the sum of the losses at sentence boundaries. 8 The gradients of the entire network (including the MLP and the BiLSTM) with respect to the sum of the losses are calculated using the backpropagation algorithm. As usual, we perform several training iterations over the training corpus, shuffling the order of sentences in each iteration.", |
| "cite_spans": [ |
| { |
| "start": 459, |
| "end": 460, |
| "text": "8", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Details of the Training Algorithm", |
| "sec_num": "4.1" |
| }, |
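The per-configuration hinge can be sketched numerically; `scores` below is a hypothetical mapping from transition to model score, standing in for the MLP output:

```python
def transition_hinge_loss(scores, gold):
    """Margin loss at one configuration:
    max(0, 1 - best correct score + best incorrect score),
    where `gold` is the set G of correct transitions."""
    best_correct = max(s for t, s in scores.items() if t in gold)
    best_incorrect = max(s for t, s in scores.items() if t not in gold)
    return max(0.0, 1.0 - best_correct + best_incorrect)
```

The loss is zero exactly when the best correct transition outscores every incorrect one by at least the margin of 1.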
| { |
| "text": "We follow Goldberg and Nivre (2013) ; Goldberg and Nivre (2012) in using error-exploration training with a dynamic oracle, which we briefly describe below.",
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 35, |
| "text": "Goldberg and Nivre (2013)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 38, |
| "end": 63, |
| "text": "Goldberg and Nivre (2012)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error-Exploration and Dynamic Oracle Training", |
| "sec_num": null |
| }, |
| { |
| "text": "At each stage in the training process, the parser assigns scores to all the possible transitions t ∈ A. It then selects a transition, applies it, and moves to the next step. Which transition should be followed? A common approach follows the highest scoring transition that can lead to the gold tree. However, when training in this way the parser sees only configurations that result from following correct actions, and as a result tends to suffer from error propagation at test time. Instead, in error-exploration training the parser follows the highest scoring action in A during training even if this action is incorrect, exposing it to configurations that result from erroneous decisions. This strategy requires defining the set G such that the correct actions to take are well-defined even for states that cannot lead to the gold tree. Such a set G is called a dynamic oracle. We perform error-exploration training using the dynamic-oracle defined by Goldberg and Nivre (2013) .",
| "cite_spans": [ |
| { |
| "start": 955, |
| "end": 980, |
| "text": "Goldberg and Nivre (2013)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error-Exploration and Dynamic Oracle Training", |
| "sec_num": null |
| }, |
| { |
| "text": "We found that even when using error-exploration, after one iteration the model remembers the training set quite well, and does not make enough errors for error-exploration to be effective. In order to expose the parser to more errors, we follow an aggressive-exploration scheme: we sometimes follow incorrect transitions even when they score below correct transitions. Specifically, when the score of the correct transition is greater than that of the wrong transition but the difference is smaller than a margin constant, we follow the incorrect action with probability p_agg (we use p_agg = 0.1 in our experiments).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Aggressive Exploration", |
| "sec_num": null |
| }, |
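The aggressive-exploration decision reduces to a small predicate; the margin constant and the `rng` source below are stand-ins for the trainer's internals:

```python
import random

def choose_aggressively(correct_score, wrong_score, margin=1.0,
                        p_agg=0.1, rng=random.Random(0)):
    """Return True when training should follow the *incorrect* action:
    the correct action wins, but by less than the margin, and a coin
    with bias p_agg comes up heads."""
    close_call = (correct_score > wrong_score and
                  correct_score - wrong_score < margin)
    return close_call and rng.random() < p_agg
```

When the correct action wins comfortably (or loses outright, which error-exploration already handles), the predicate is deterministic.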
| { |
| "text": "Summary The greedy transition-based parser follows standard techniques from the literature (margin-based objective, dynamic oracle training, error exploration, MLP-based non-linear scoring function). We depart from the literature by replacing the hand-crafted feature function over carefully selected components of the configuration with a concatenation of BiLSTM representations of a few prominent items on the stack and the buffer, and training the BiLSTM encoder jointly with the rest of the network.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Aggressive Exploration", |
| "sec_num": null |
| }, |
| { |
| "text": "Graph-based parsing follows the common structured prediction paradigm (Taskar et al., 2005; McDonald et al., 2005) :", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 91, |
| "text": "(Taskar et al., 2005;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 92, |
| "end": 114, |
| "text": "McDonald et al., 2005)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "predict(s) = arg max_{y∈Y(s)} score_global(s, y), where score_global(s, y) = Σ_{part∈y} score_local(s, part)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Given an input sentence s (and the corresponding sequence of vectors x 1:n ) we look for the highest- Figure 2 : Illustration of the neural model scheme of the graph-based parser when calculating the score of a given parse tree. The parse tree is depicted below the sentence. Each dependency arc in the sentence is scored using an MLP that is fed the BiLSTM encoding of the words at the arc's end points (the colors of the arcs correspond to the colors of the MLP inputs above), and the individual arc scores are summed to produce the final score. All the MLPs share the same parameters. The figure depicts a single-layer BiLSTM, while in practice we use two layers. When parsing a sentence, we compute scores for all possible n^2 arcs, and find the best scoring tree using a dynamic-programming algorithm.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 102, |
| "end": 110, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "[Figure 2: forward (LSTM_f) and backward (LSTM_b) LSTM states over the words the, brown, fox, jumped and the root symbol * are concatenated into vectors v_the, v_brown, v_fox, v_jumped, v_*; the vectors at each arc's end points are fed to an MLP, and the arc scores are summed (+).]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "scoring parse tree y in the space Y(s) of valid dependency trees over s. In order to make the search tractable, the scoring function is decomposed into a sum of local scores, one for each part independently. In this work, we focus on the arc-factored graph-based approach presented in McDonald et al. (2005) . Arc-factored parsing decomposes the score of a tree into the sum of the scores of its head-modifier arcs (h, m):",
| "cite_spans": [ |
| { |
| "start": 276, |
| "end": 298, |
| "text": "McDonald et al. (2005)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "parse(s) = arg max_{y∈Y(s)} Σ_{(h,m)∈y} score(φ(s, h, m))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
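Arc-factored scoring produces an n × n matrix of head-modifier scores that the projective decoder (e.g. Eisner's algorithm) then searches over. A minimal sketch, assuming a single hidden layer with tanh in place of the paper's MLP, and illustrative dimensions:

```python
import numpy as np

def arc_score_matrix(v, W, U):
    """scores[h, m] for every head/modifier pair: a one-hidden-layer
    scorer applied to the concatenation of the two encoded vectors.
    Self-arcs (h == m) are left at -inf so the decoder ignores them."""
    n = len(v)
    scores = np.full((n, n), -np.inf)
    for h in range(n):
        for m in range(n):
            if h != m:
                hidden = np.tanh(W @ np.concatenate([v[h], v[m]]))
                scores[h, m] = float(U @ hidden)
    return scores

rng = np.random.default_rng(0)
v = rng.standard_normal((4, 6))   # stand-in BiLSTM vectors, 4 words
W = rng.standard_normal((8, 12))  # hidden layer: 12 = 2 * 6 inputs
U = rng.standard_normal(8)        # scalar output layer
scores = arc_score_matrix(v, W, U)
```

The decoder only ever reads this matrix, so the network cost is paid once per sentence.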
| { |
| "text": "Given the scores of the arcs the highest scoring projective tree can be efficiently found using Eisner's decoding algorithm (1996) . McDonald et al. and most subsequent work estimate the local score of an arc by a linear model parameterized by a weight vector w, and a feature function \u03c6(s, h, m) assigning a sparse feature vector for an arc linking modifier m to head h. We follow Pei et al. (2015) and replace the linear scoring function with an MLP.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 130, |
| "text": "Eisner's decoding algorithm (1996)", |
| "ref_id": null |
| }, |
| { |
| "start": 382, |
| "end": 399, |
| "text": "Pei et al. (2015)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The feature extractor φ(s, h, m) is usually complex, involving many elements (see Section 2.1). In contrast, our feature extractor uses merely the BiLSTM encoding of the head word and the modifier word:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "φ(s, h, m) = BiRNN(x_1:n, h) • BiRNN(x_1:n, m)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The final model is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "parse(s) = arg max_{y∈Y(s)} score_global(s, y) = arg max_{y∈Y(s)} Σ_{(h,m)∈y} score(φ(s, h, m)) = arg max_{y∈Y(s)} Σ_{(h,m)∈y} MLP(v_h • v_m), where v_i = BiRNN(x_1:n, i)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The architecture is illustrated in Figure 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 35, |
| "end": 43, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Training The training objective is to set the score function such that the correct tree y is scored above incorrect ones. We use a margin-based objective (McDonald et al., 2005; LeCun et al., 2006) , aiming to maximize the margin between the score of the gold tree y and the highest scoring incorrect tree y′. We define a hinge loss with respect to a gold tree y as:",
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 174, |
| "text": "(McDonald et al., 2005;",
| "ref_id": null |
| }, |
| { |
| "start": 175, |
| "end": 194, |
| "text": "LeCun et al., 2006)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "max(0, 1 − Σ_{(h,m)∈y} MLP(v_h • v_m) + max_{y′≠y} Σ_{(h,m)∈y′} MLP(v_h • v_m))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Each of the tree scores is calculated by applying the MLP to the arc representations. The entire loss can be viewed as a sum of multiple neural networks, which is sub-differentiable. We calculate the gradients of the entire network (including the BiLSTM encoder and the word embeddings).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Labeled Parsing Up to now, we described unlabeled parsing. A possible approach for adding labels is to score the combination of an unlabeled arc (h, m) and its label by considering the label as part of the arc (h, m, ). This results in |Labels|\u00d7|Arcs| parts that need to be scored, leading to slow parsing speeds and arguably a harder learning problem. Instead, we chose to first predict the unlabeled structure using the model given above, and then predict the label of each resulting arc. Using this approach, the number of parts stays small, enabling fast parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The labeling of an arc (h, m) is performed using the same feature representation \u03c6(s, h, m) fed into a different MLP predictor:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "label(h, m) = arg max_{ℓ∈labels} MLP_LBL(v_h • v_m)[ℓ]",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "As before, we use a margin-based hinge loss. The labeler is trained on the gold trees. 9 The BiLSTM encoder responsible for producing v_h and v_m is shared with the arc-factored parser: the same BiLSTM encoder is used in the parser and the labeler. This sharing of parameters can be seen as an instance of multi-task learning (Caruana, 1997). As we show in Section 6, the sharing is effective: training the BiLSTM feature encoder to be good at predicting arc labels significantly improves the parser's unlabeled accuracy.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Loss-augmented inference In initial experiments, the network learned quickly and overfit the data. In order to remedy this, we found it useful to use loss-augmented inference (Taskar et al., 2005) . The intuition behind loss-augmented inference is to update against trees which have high model scores and are also very wrong. This is done by augmenting the score of each part not belonging to the gold tree by adding a constant to its score. Formally, the loss is transformed as follows:",
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 196, |
| "text": "(Taskar et al., 2005)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "max(0, 1 − score(x, y) + max_{y′≠y} Σ_{part∈y′}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
| { |
| "text": "(score_local(x, part) + 1[part ∉ y]))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Parser", |
| "sec_num": "5" |
| }, |
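Loss augmentation itself is a one-line change to the arc scores before decoding: every arc outside the gold tree gets +1, so decoding prefers trees that score high and are also very wrong. A sketch over a score matrix (`gold_arcs` is a set of (head, modifier) pairs; names are illustrative):

```python
import numpy as np

def augment_scores(scores, gold_arcs):
    """Add 1 to the score of every arc not in the gold tree, leaving
    gold arcs and (skipped) self-arcs untouched."""
    augmented = scores.copy()
    n = scores.shape[0]
    for h in range(n):
        for m in range(n):
            if h != m and (h, m) not in gold_arcs:
                augmented[h, m] += 1.0
    return augmented

base = np.zeros((3, 3))
aug = augment_scores(base, gold_arcs={(0, 1), (1, 2)})
```

Decoding over `aug` instead of `base` implements the max in the transformed loss; the original scores are still used for the gold-tree term.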
| { |
| "text": "The arc-factored model requires the scoring of n^2 arcs. Scoring is performed using an MLP with one hidden layer, resulting in n^2 matrix-vector multiplications from the input to the hidden layer, and n^2 multiplications from the hidden to the output layer. The first n^2 multiplications involve larger dimensional input and output vectors, and are the most time consuming. Fortunately, these can be reduced to 2n multiplications and n^2 vector additions, by observing that the multiplication",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speed improvements", |
| "sec_num": null |
| }, |
| { |
| "text": "W · (v_h • v_m) can be written as W_1 · v_h + W_2 · v_m",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speed improvements", |
| "sec_num": null |
| }, |
| { |
| "text": "where W_1 and W_2 are the first and second halves of the matrix W, and reusing the products across different pairs. Summary The graph-based parser is a straightforward first-order parser, trained with a margin-based hinge loss and loss-augmented inference. We depart from the literature by replacing the hand-crafted feature function with a concatenation of BiLSTM representations of the head and modifier words, and training the BiLSTM encoder jointly with the structured objective. We also introduce a novel multi-task learning approach to labeled parsing, training a second-stage arc-labeler that shares the BiLSTM encoder with the unlabeled parser.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Speed improvements", |
| "sec_num": null |
| }, |
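The factorization behind this speed-up can be checked numerically: splitting W into halves W_1 and W_2 lets each half be multiplied once per word, after which every head-modifier pair needs only a vector addition. Dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, hidden, n = 4, 6, 5
W = rng.standard_normal((hidden, 2 * d))
W1, W2 = W[:, :d], W[:, d:]       # first and second halves of W
v = rng.standard_normal((n, d))   # stand-in BiLSTM vectors

# 2n "large" products, one per word per half:
left = v @ W1.T    # row h holds W1 . v_h
right = v @ W2.T   # row m holds W2 . v_m

# For any pair, a cheap addition replaces the big mat-vec product:
h, m = 2, 3
direct = W @ np.concatenate([v[h], v[m]])
factored = left[h] + right[m]
```

The same trick applies to any linear layer whose input is a concatenation of reused vectors.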
| { |
| "text": "We evaluated our parsing model on English and Chinese data. For comparison purposes we follow the setup of Dyer et al. (2015) .", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 125, |
| "text": "Dyer et al. (2015)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Data For English, we used the Stanford Dependency (SD) (de Marneffe and Manning, 2008) conversion of the Penn Treebank (Marcus et al., 1993) , using the standard train/dev/test splits with the [Table 1 legend: (Martins et al., 2013) ; Weiss15 (Weiss et al., 2015) ; Pei15: (Pei et al., 2015) ; Dyer15 (Dyer et al., 2015) ; Ballesteros16 ; LeZuidema14 (Le and Zuidema, 2014) ; Zhu15: (Zhu et al., 2015) .]",
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 140, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 193, |
| "end": 215, |
| "text": "(Martins et al., 2013)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 226, |
| "end": 246, |
| "text": "(Weiss et al., 2015)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 256, |
| "end": 274, |
| "text": "(Pei et al., 2015)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 284, |
| "end": 303, |
| "text": "(Dyer et al., 2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 334, |
| "end": 356, |
| "text": "(Le and Zuidema, 2014)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 366, |
| "end": 384, |
| "text": "(Zhu et al., 2015)", |
| "ref_id": "BIBREF46" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "same predicted POS-tags as used in Dyer et al. (2015) ; Chen and Manning (2014) . This dataset contains a few non-projective trees. Punctuation symbols are excluded from the evaluation. For Chinese, we use the Penn Chinese Treebank 5.1 (CTB5), using the train/test/dev splits of (Zhang and Clark, 2008; Dyer et al., 2015) with gold part-of-speech tags, also following (Dyer et al., 2015; Chen and Manning, 2014) .",
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 53, |
| "text": "Dyer et al. (2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 56, |
| "end": 79, |
| "text": "Chen and Manning (2014)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 279, |
| "end": 302, |
| "text": "(Zhang and Clark, 2008;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 303, |
| "end": 321, |
| "text": "Dyer et al., 2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 367, |
| "end": 386, |
| "text": "(Dyer et al., 2015;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 387, |
| "end": 410, |
| "text": "Chen and Manning, 2014)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "When using external word embeddings, we also use the same data as Dyer et al. (2015). 10 Implementation Details The parsers are implemented in Python, using the PyCNN toolkit 11 for neural network training. The code is available at the github repository https://github.com/elikip/bist-parser. We use the LSTM variant implemented in PyCNN, and optimize using the Adam optimizer (Kingma and Ba, 2015). Unless otherwise noted, we use the default values provided by PyCNN (e.g. for random initialization, learning rates, etc.). 10 We thank Dyer et al. for sharing their data with us. 11 https://github.com/clab/cnn/tree/master/pycnn The word and POS embeddings e(w_i) and e(p_i) are initialized to random values and trained together with the rest of the parsers' networks. In some experiments, we also introduce pre-trained word embeddings. In those cases, the vector representation of a word is a concatenation of its randomly-initialized vector embedding with its pre-trained word vector. Both are tuned during training. We use the same word vectors as in Dyer et al. (2015). During training, we employ a variant of word dropout (Iyyer et al., 2015) , and replace a word with the unknown-word symbol with a probability that is inversely proportional to the frequency of the word. A word w appearing #(w) times in the training corpus is replaced with the unknown symbol with probability p_unk(w) = α / (#(w) + α). If a word was dropped, the external embedding of the word is also dropped with probability 0.5.",
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 88, |
| "text": "Dyer et al. (2015). 10", |
| "ref_id": null |
| }, |
| { |
| "start": 523, |
| "end": 525, |
| "text": "10", |
| "ref_id": null |
| }, |
| { |
| "start": 1055, |
| "end": 1073, |
| "text": "Dyer et al. (2015)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1127, |
| "end": 1147, |
| "text": "(Iyyer et al., 2015)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "6" |
| }, |
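The word-dropout probability is a simple function of training-corpus frequency; α = 0.25 below is only an illustrative value, not necessarily the one used in the paper:

```python
from collections import Counter

def unk_probability(word, counts, alpha=0.25):
    """p_unk(w) = alpha / (#(w) + alpha): the rarer the word, the more
    often it is replaced by the unknown-word symbol during training."""
    return alpha / (counts[word] + alpha)

counts = Counter(["the", "the", "the", "fox"])
p_rare = unk_probability("fox", counts)  # 0.25 / 1.25
p_freq = unk_probability("the", counts)  # 0.25 / 3.25
```

An unseen word (count 0) is always replaced, which also gives the unknown-word embedding training signal.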
| { |
| "text": "We train the parsers for up to 30 iterations, and choose the best model according to the UAS accuracy on the development set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Hyperparameter Tuning We performed a very minimal hyper-parameter search with the graph-based parser, and use the same hyper-parameters for both parsers. The hyper-parameters of the final networks used for all the reported experiments are detailed in Table 2. Main Results Table 1 lists the test-set accuracies of our best parsing models, compared to other state-of-the-art parsers from the literature. It is clear that our parsers are very competitive, despite using very simple parsing architectures and minimal feature extractors. When not using external embeddings, the first-order graph-based parser with 2 features outperforms all other systems that are not using external resources, including the third-order TurboParser. The greedy transition-based parser with 4 features also matches or outperforms most other parsers, including the beam-based transition parser with heavily engineered features of Zhang and Nivre (2011) and the Stack-LSTM parser of Dyer et al. (2015), as well as the same parser when trained using a dynamic oracle. Moving from the simple (4 features) to the extended (11 features) feature set leads to some gains in accuracy for both English and Chinese.", |
| "cite_spans": [ |
| { |
| "start": 401, |
| "end": 403, |
| "text": "12", |
| "ref_id": null |
| }, |
| { |
| "start": 908, |
| "end": 930, |
| "text": "Zhang and Nivre (2011)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 960, |
| "end": 978, |
| "text": "Dyer et al. (2015)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 251, |
| "end": 258, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 272, |
| "end": 279, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Interestingly, when adding external word embeddings, the accuracy of the graph-based parser degrades. We are not sure why this happens, and leave the exploration of effective semi-supervised parsing with the graph-based model for future work. The greedy parser does manage to benefit from the external embeddings, and using them we also see gains from moving from the simple to the extended feature set. Both feature sets result in very competitive results, with the extended feature set yielding the best reported results for Chinese, and ranked second for English, after the heavily-tuned beam-based parser of Weiss et al. (2015).", |
| "cite_spans": [ |
| { |
| "start": 612, |
| "end": 631, |
| "text": "Weiss et al. (2015)", |
| "ref_id": "BIBREF43" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We perform some ablation experiments in order to quantify the effect of the different components on our best models (Table 3). Table 3: Ablation experiment results (dev set) for the graph-based parser without external embeddings and the greedy parser with external embeddings and the extended feature set.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 116, |
| "end": 125, |
| "text": "(Table 3)", |
| "ref_id": null |
| }, |
| { |
| "start": 126, |
| "end": 133, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Additional Results", |
| "sec_num": null |
| }, |
| { |
| "text": "Loss augmented inference is crucial for the success of the graph-based parser, and the multi-task learning scheme for the arc-labeler contributes nicely to the unlabeled scores. Dynamic oracle training yields nice gains for both English and Chinese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Additional Results", |
| "sec_num": null |
| }, |
| { |
| "text": "We presented a pleasingly effective approach for feature extraction for dependency parsing based on a BiLSTM encoder that is trained jointly with the parser, and demonstrated its effectiveness by integrating it into two simple parsing models: a greedy transition-based parser and a globally optimized first-order graph-based parser, yielding very competitive parsing accuracies in both cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Transactions of the Association for Computational Linguistics, vol. 4, pp. 313-327, 2016. Action Editor: Marco Kuhlmann. Submission batch: 2/2016; Published 7/2016. \u00a9 2016 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Structured training of sequence tagging models over RNN-based representations was explored by Chiu and Nichols (2016) and Lample et al. (2016).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.seas.upenn.edu/~strctlrn/MSTParser/MSTParser.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In all of these neural-network based approaches, the vector representations of words were initialized using pre-trained word-embeddings derived from a large corpus external to the training data. This puts the approaches in the semi-supervised category, making it hard to tease apart the contribution of the automatic feature-combination component from that of the semi-supervised component.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this work the tag sequence is assumed to be given, and in practice is predicted by an external model. Future work will address relaxing this assumption.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "While the BiLSTM computation is quite efficient as it is, as demonstrated by Lewis et al. (2016), if using a GPU implementation the BiLSTM encoding can be efficiently performed over many sentences in parallel, making its computation cost almost negligible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "An additional buffer context is not needed, as b1 is by definition adjacent to b0, a fact that we expect the BiLSTM encoding of b0 to capture. In contrast, b0, s0, s1 and s2 are not necessarily adjacent to each other in the original sentence. We did not experiment with other feature configurations. It is quite possible that not all of the additional 7 child encodings are needed for the observed accuracy gains, and that a smaller feature set would yield similar or even better improvements.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "To increase gradient stability and training speed, we simulate mini-batch updates by only updating the parameters when the sum of local losses contains at least 50 non-zero elements. Sums of fewer elements are carried across sentences. This assures us a sufficient number of gradient samples for every update, thus minimizing the effect of gradient instability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "When training the labeled parser, we calculate the structure loss and the labeling loss for each training sentence, and sum the losses prior to computing the gradients.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Unfortunately, many papers still report English parsing results on the deficient Yamada and Matsumoto head rules (PTB-YM) rather than the more modern Stanford-dependencies (PTB-SD). We note that the PTB-YM and PTB-SD results are not strictly comparable, and in our experience the PTB-YM results are usually about half a UAS point higher.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Acknowledgements This research is supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI) and the Israeli Science Foundation (grant number 1555/15). We thank Lillian Lee for her important feedback and efforts invested in editing this paper. We also thank the reviewers for their valuable comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Training with exploration improves a greedy stack-LSTM parser. CoRR, abs/1603.03793. Rich Caruana. 1997. Multitask learning", |
| "authors": [ |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Machine Learning", |
| "volume": "28", |
| "issue": "", |
| "pages": "41--75", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with exploration improves a greedy stack-LSTM parser. CoRR, abs/1603.03793. Rich Caruana. 1997. Multitask learning. Machine Learning, 28:41-75, July.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A fast and accurate dependency parser using neural networks", |
| "authors": [ |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "740--750", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar, October. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Named entity recognition with bidirectional LSTM-CNNs", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "C" |
| ], |
| "last": "Jason", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Chiu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Nichols", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4. To appear.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Natural language understanding with distributed representation", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho. 2015. Natural language understanding with distributed representation. CoRR, abs/1511.07916.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Incremental parsing with minimal features using bi-directional LSTM", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Cross", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Cross and Liang Huang. 2016. Incremental parsing with minimal features using bi-directional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, August. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Stanford dependencies manual", |
| "authors": [ |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marie-Catherine de Marneffe and Christopher D. Manning. 2008. Stanford dependencies manual. Technical report, Stanford University.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Transitionbased dependency parsing with stack long short-term memory", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Wang", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Austin", |
| "middle": [], |
| "last": "Matthews", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "334--343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334-343, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Three new probabilistic models for dependency parsing: An exploration", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "16th International Conference on Computational Linguistics, Proceedings of the Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "340--345", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In 16th International Conference on Computational Linguistics, Proceedings of the Conference, COLING 1996, Center for Sprogteknologi, Copenhagen, Denmark, August 5-9, 1996, pages 340-345.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Finding structure in time", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [ |
| "L" |
| ], |
| "last": "Elman", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Cognitive Science", |
| "volume": "14", |
| "issue": "2", |
| "pages": "179--211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A dynamic oracle for arc-eager dependency parsing", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "The COLING 2012 Organizing Committee", |
| "volume": "", |
| "issue": "", |
| "pages": "959--976", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In Proceedings of COLING 2012, pages 959-976, Mumbai, India, December. The COLING 2012 Organizing Committee.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Training deterministic parsers with non-deterministic oracles", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "403--414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the Association for Computational Linguistics, 1:403-414.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A primer on neural network models for natural language processing", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Goldberg. 2015. A primer on neural network models for natural language processing. CoRR, abs/1510.00726.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Supervised sequence labelling with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves. 2008. Supervised sequence labelling with recurrent neural networks. Ph.D. thesis, Technical University Munich.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Dynamic programming for linear-time incremental parsing", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1077--1086", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental parsing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1077-1086, Uppsala, Sweden, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Opinion mining with deep recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Ozan", |
| "middle": [], |
| "last": "Irsoy", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "720--728", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ozan Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 720-728, Doha, Qatar, October. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Deep unordered composition rivals syntactic methods for text classification", |
| "authors": [ |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Varun", |
| "middle": [], |
| "last": "Manjunatha", |
| "suffix": "" |
| }, |
| { |
| "first": "Jordan", |
| "middle": [], |
| "last": "Boyd-Graber", |
| "suffix": "" |
| }, |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1681--1691", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681-1691, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Visualizing and understanding recurrent networks", |
| "authors": [ |
| { |
| "first": "Andrej", |
| "middle": [], |
| "last": "Karpathy", |
| "suffix": "" |
| }, |
| { |
| "first": "Justin", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei-Fei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. CoRR, abs/1506.02078.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Diederik", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 3rd International Conference for Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations, San Diego, California.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Easy-first dependency parsing with hierarchical tree LSTMs. Transactions of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Easy-first dependency parsing with hierarchical tree LSTMs. Transactions of the Association for Computational Linguistics, 4. To appear.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Efficient thirdorder dependency parsers", |
| "authors": [ |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1-11, Uppsala, Sweden, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Simple semi-supervised dependency parsing", |
| "authors": [ |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "595--603", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 595-603, Columbus, Ohio, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Dependency Parsing. Synthesis Lectures on Human Language Technologies", |
| "authors": [ |
| { |
| "first": "Sandra", |
| "middle": [], |
| "last": "K\u00fcbler", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [ |
| "T" |
| ], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sandra K\u00fcbler, Ryan T. McDonald, and Joakim Nivre. 2009. Dependency Parsing. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Dynamic programming algorithms for transition-based dependency parsers", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Kuhlmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "G\u00f3mez-Rodr\u00edguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Giorgio", |
| "middle": [], |
| "last": "Satta", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "673--682", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Kuhlmann, Carlos G\u00f3mez-Rodr\u00edguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 673-682, Portland, Oregon, USA, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Neural architectures for named entity recognition", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandeep", |
| "middle": [], |
| "last": "Subramanian", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuya", |
| "middle": [], |
| "last": "Kawakami", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "260--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "The insideoutside recursive neural network model for dependency parsing", |
| "authors": [ |
| { |
| "first": "Phong", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Willem", |
| "middle": [], |
| "last": "Zuidema", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "729--739", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Phong Le and Willem Zuidema. 2014. The inside-outside recursive neural network model for dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 729-739, Doha, Qatar, October. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A tutorial on energy-based learning. Predicting structured data", |
| "authors": [ |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "" |
| }, |
| { |
| "first": "Sumit", |
| "middle": [], |
| "last": "Chopra", |
| "suffix": "" |
| }, |
| { |
| "first": "Raia", |
| "middle": [], |
| "last": "Hadsell", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc'aurelio", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "Fu Jie", |
| "middle": [], |
| "last": "Huang ; Tao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Lei", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Xin", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1381--1391", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato, and Fu Jie Huang. 2006. A tutorial on energy-based learning. Predicting structured data, 1. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1381-1391, Baltimore, Maryland, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "LSTM CCG parsing", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "221--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis, Kenton Lee, and Luke Zettlemoyer. 2016. LSTM CCG parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 221-231, San Diego, California, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Building a large annotated corpus of English: The Penn Treebank", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Concise integer linear programming formulations for dependency parsing", |
| "authors": [ |
| { |
| "first": "Andre", |
| "middle": [], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Xing", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "342--350", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andre Martins, Noah A. Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 342-350, Suntec, Singapore, August. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Turning on the turbo: Fast third-order nonprojective turbo parsers", |
| "authors": [ |
| { |
| "first": "Andre", |
| "middle": [], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Almeida", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "617--622", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 617-622, Sofia, Bulgaria, August. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Online large-margin training of dependency parsers", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "91--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 91-98, Ann Arbor, Michigan, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Discriminative Training and Spanning Tree Algorithms for Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald. 2006. Discriminative Training and Spanning Tree Algorithms for Dependency Parsing. Ph.D. thesis, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Incrementality in deterministic dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the ACL Workshop Incremental Parsing: Bringing Engineering and Cognition Together", |
| "volume": "", |
| "issue": "", |
| "pages": "50--57", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Frank Keller, Stephen Clark, Matthew Crocker, and Mark Steedman, editors, Proceedings of the ACL Workshop Incremental Parsing: Bringing Engineering and Cognition Together, pages 50-57, Barcelona, Spain, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Algorithms for deterministic incremental dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "4", |
| "pages": "513--553", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513-553.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "An effective neural network model for graph-based dependency parsing", |
| "authors": [ |
| { |
| "first": "Wenzhe", |
| "middle": [], |
| "last": "Pei", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Ge", |
| "suffix": "" |
| }, |
| { |
| "first": "Baobao", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "313--322", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenzhe Pei, Tao Ge, and Baobao Chang. 2015. An effective neural network model for graph-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 313-322, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Bidirectional recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuldip", |
| "middle": [ |
| "K" |
| ], |
| "last": "Paliwal", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "IEEE Trans. Signal Processing", |
| "volume": "45", |
| "issue": "11", |
| "pages": "2673--2681", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Trans. Signal Processing, 45(11):2673-2681.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Learning structured prediction models: A large margin approach", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Vassil", |
| "middle": [], |
| "last": "Chatalbashev", |
| "suffix": "" |
| }, |
| { |
| "first": "Daphne", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Guestrin", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Machine Learning, Proceedings of the Twenty-Second International Conference (ICML 2005)", |
| "volume": "", |
| "issue": "", |
| "pages": "896--903", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benjamin Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Machine Learning, Proceedings of the Twenty-Second International Conference (ICML 2005), Bonn, Germany, August 7-11, 2005, pages 896-903.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Template kernels for dependency parsing", |
| "authors": [ |
| { |
| "first": "Hillel", |
| "middle": [], |
| "last": "Taub-Tabib", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Globerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1422--1427", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hillel Taub-Tabib, Yoav Goldberg, and Amir Globerson. 2015. Template kernels for dependency parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1422-1427, Denver, Colorado, May-June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "A latent variable model for generative dependency parsing", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Tenth International Conference on Parsing Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "144--155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ivan Titov and James Henderson. 2007. A latent variable model for generative dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies, pages 144-155, Prague, Czech Republic, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Supertagging with LSTMs", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Bisk", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Musa", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics (Short Papers)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Yonatan Bisk, Kenji Sagae, and Ryan Musa. 2016. Supertagging with LSTMs. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics (Short Papers), San Diego, California, June.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Grammar as a foreign language", |
| "authors": [ |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "2773--2781", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2773-2781.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Structured training for neural network transition-based parsing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Alberti", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "323--333", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 323-333, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "562--571", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Zhang and Stephen Clark. 2008. A tale of two parsers: Investigating and combining graph-based and transition-based dependency parsing. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 562-571, Honolulu, Hawaii, October. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Transition-based dependency parsing with rich non-local features", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "188--193", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188-193, Portland, Oregon, USA, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "A re-ranking model for dependency parser with recursive convolutional neural network", |
| "authors": [ |
| { |
| "first": "Chenxi", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xipeng", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xinchi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuanjing", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1159--1168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chenxi Zhu, Xipeng Qiu, Xinchi Chen, and Xuanjing Huang. 2015. A re-ranking model for dependency parser with recursive convolutional neural network. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1159-1168, Beijing, China, July. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "html": null, |
| "num": null, |
| "text": "We use a variant of deep bidirectional RNN (or k-layer BiRNN) which is composed of k BiRNN functions BIRNN_1, \u2022 \u2022 \u2022, BIRNN_k that feed into each other: the output BIRNN_\u2113(x_1:n, 1), ..., BIRNN_\u2113(x_1:n, n) of BIRNN_\u2113 becomes the input of BIRNN_\u2113+1. Stacking", |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "html": null, |
| "num": null, |
| "text": "Test-set parsing results of various state-of-the-art parsing systems on the English (PTB) and Chinese (CTB) datasets. The systems that use embeddings may use different pre-trained embeddings. English results use predicted POS tags (different systems use different taggers), while Chinese results use gold POS tags. PTB-YM: English PTB, Yamada and Matsumoto head rules. PTB-SD: English PTB, Stanford Dependencies (different systems may use different versions of the Stanford converter). CTB: Chinese Treebank. reranking/blend in Method column indicates a reranking system where the reranker score is interpolated with the base-parser's score. The different systems and the numbers reported from them are taken from: ZhangNivre11: (Zhang and Nivre, 2011); Martins13:", |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "html": null, |
| "num": null, |
| "text": ".", |
| "content": "<table><tr><td>Word embedding dimension</td><td>100</td></tr><tr><td>POS tag embedding dimension</td><td>25</td></tr><tr><td>Hidden units in MLP</td><td>100</td></tr><tr><td>Hidden units in MLP_LBL</td><td>100</td></tr><tr><td>BI-LSTM Layers</td><td>2</td></tr><tr><td>BI-LSTM Dimensions (hidden/output)</td><td>125 / 125</td></tr><tr><td>\u03b1 (for word dropout)</td><td>0.25</td></tr><tr><td>p_agg (for exploration training)</td><td>0.1</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "html": null, |
| "num": null, |
| "text": "Hyper-parameter values used in experiments", |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "html": null, |
| "num": null, |
| "text": ".", |
| "content": "<table><tr><td/><td colspan=\"2\">PTB</td><td colspan=\"2\">CTB</td></tr><tr><td/><td>UAS</td><td>LAS</td><td>UAS</td><td>LAS</td></tr><tr><td>Graph (no ext. emb)</td><td>93.3</td><td>91.0</td><td>87.0</td><td>85.4</td></tr><tr><td>-POS</td><td>92.9</td><td>89.8</td><td>80.6</td><td>76.8</td></tr><tr><td>-ArcLabeler</td><td>92.7</td><td>-</td><td>86.2</td><td>-</td></tr><tr><td>-Loss Aug.</td><td>81.3</td><td>79.4</td><td>52.6</td><td>51.7</td></tr><tr><td>Greedy (ext. emb)</td><td>93.8</td><td>91.5</td><td>87.8</td><td>86.0</td></tr><tr><td>-POS</td><td>93.4</td><td>91.2</td><td>83.4</td><td>81.6</td></tr><tr><td>-DynOracle</td><td>93.5</td><td>91.4</td><td>87.5</td><td>85.9</td></tr></table>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |