{
"paper_id": "C18-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:09:38.908851Z"
},
"title": "Two Local Models for Neural Constituent Parsing",
"authors": [
{
"first": "Zhiyang",
"middle": [],
"last": "Teng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore University of Technology",
"location": {}
},
"email": "teng@mymail.sutd.edu.sg"
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore University of Technology",
"location": {}
},
"email": "zhang@sutd.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Non-local features have been exploited by syntactic parsers for capturing dependencies between output sub-structures. Such features have been a key to the success of state-of-the-art statistical parsers. With the rise of deep learning, however, it has been shown that local output decisions can give highly competitive accuracies, thanks to the power of dense neural input representations that embody global syntactic information. We investigate two conceptually simple local neural models for constituent parsing, which make local decisions on constituent spans and CFG rules, respectively. Consistent with previous findings along this line, our best model gives highly competitive results, achieving labeled bracketing F1 scores of 92.4% on PTB and 87.3% on CTB 5.1.",
"pdf_parse": {
"paper_id": "C18-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "Non-local features have been exploited by syntactic parsers for capturing dependencies between output sub-structures. Such features have been a key to the success of state-of-the-art statistical parsers. With the rise of deep learning, however, it has been shown that local output decisions can give highly competitive accuracies, thanks to the power of dense neural input representations that embody global syntactic information. We investigate two conceptually simple local neural models for constituent parsing, which make local decisions on constituent spans and CFG rules, respectively. Consistent with previous findings along this line, our best model gives highly competitive results, achieving labeled bracketing F1 scores of 92.4% on PTB and 87.3% on CTB 5.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Non-local features have been shown crucial for statistical parsing (Huang, 2008a; Zhang and Nivre, 2011). For dependency parsing, high-order dynamic programs, integer linear programming (Martins et al., 2010) and dual decomposition techniques have been exploited by graph-based parsers to integrate non-local features. Transition-based parsers (Nivre, 2003; Nivre, 2008; Zhang and Nivre, 2011; Bohnet, 2010; Huang et al., 2012) are also known for leveraging non-local features to achieve high accuracies. For most state-of-the-art statistical parsers, a global training objective over the entire parse tree is defined to avoid label bias (Lafferty et al., 2001).",
"cite_spans": [
{
"start": 67,
"end": 81,
"text": "(Huang, 2008a;",
"ref_id": "BIBREF25"
},
{
"start": 82,
"end": 104,
"text": "Zhang and Nivre, 2011)",
"ref_id": "BIBREF63"
},
{
"start": 188,
"end": 210,
"text": "(Martins et al., 2010)",
"ref_id": "BIBREF37"
},
{
"start": 345,
"end": 358,
"text": "(Nivre, 2003;",
"ref_id": "BIBREF40"
},
{
"start": 359,
"end": 371,
"text": "Nivre, 2008;",
"ref_id": "BIBREF41"
},
{
"start": 372,
"end": 394,
"text": "Zhang and Nivre, 2011;",
"ref_id": "BIBREF63"
},
{
"start": 395,
"end": 408,
"text": "Bohnet, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 409,
"end": 428,
"text": "Huang et al., 2012)",
"ref_id": "BIBREF24"
},
{
"start": 648,
"end": 671,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For neural parsing, on the other hand, local models have been shown to give highly competitive accuracies (Cross and Huang, 2016b) as compared to those that employ long-range features (Watanabe and Sumita, 2015; Zhou et al., 2015; Andor et al., 2016; Durrett and Klein, 2015). Highly local features have been used in recent state-of-the-art models (Dozat and Manning, 2016; Shi et al., 2017). In particular, Dozat and Manning (2016) show that a locally trained arc-factored model can give the best reported accuracies on dependency parsing. This surprising result has been largely attributed to the representation power of long short-term memory (LSTM) encoders (Kiperwasser and Goldberg, 2016).",
"cite_spans": [
{
"start": 106,
"end": 130,
"text": "(Cross and Huang, 2016b;",
"ref_id": "BIBREF13"
},
{
"start": 184,
"end": 211,
"text": "(Watanabe and Sumita, 2015;",
"ref_id": "BIBREF59"
},
{
"start": 212,
"end": 230,
"text": "Zhou et al., 2015;",
"ref_id": "BIBREF64"
},
{
"start": 231,
"end": 250,
"text": "Andor et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 251,
"end": 275,
"text": "Durrett and Klein, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 349,
"end": 373,
"text": "Dozat and Manning, 2016;",
"ref_id": "BIBREF14"
},
{
"start": 374,
"end": 391,
"text": "Shi et al., 2017)",
"ref_id": "BIBREF46"
},
{
"start": 409,
"end": 433,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF14"
},
{
"start": 662,
"end": 694,
"text": "(Kiperwasser and Goldberg, 2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An interesting research question is to what extent this encoding power can be leveraged for constituent parsing. We investigate the problem by building a chart-based model that is local to unlabeled constituent spans (Abney, 1991) and CFG rules, which have been explored by early PCFG models (Collins, 2003; Klein and Manning, 2003). In particular, our models first predict unlabeled CFG trees leveraging biaffine modelling (Dozat and Manning, 2016). Then, constituent labels are assigned on the unlabeled trees by using a tree-LSTM to encode the syntactic structure, and an LSTM decoder to yield a label sequence, which can include unary rules, on each node.",
"cite_spans": [
{
"start": 216,
"end": 229,
"text": "(Abney, 1991)",
"ref_id": "BIBREF0"
},
{
"start": 291,
"end": 306,
"text": "(Collins, 2003;",
"ref_id": "BIBREF11"
},
{
"start": 307,
"end": 331,
"text": "Klein and Manning, 2003)",
"ref_id": "BIBREF28"
},
{
"start": 424,
"end": 449,
"text": "(Dozat and Manning, 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments show that our conceptually simple models give highly competitive performance compared with the state of the art. Our best models give labeled bracketing F1 scores of 92.4% on PTB and 87.3% on CTB 5.1 test sets, without reranking, ensembling or external parses. We release our code at 1 . Figure 1: An example workflow of our parsers for the sentence \"The stock price keeps falling\". We annotate every non-terminal span with its covered span range. Figure 1a shows constituent span classifiers making 0/1 decisions for all possible spans. Based on the local classification probabilities, we obtain an unlabeled binarized parse tree (Figure 1b) using the binary CKY parsing algorithm. We then hierarchically generate labels for each span (Figure 1c) using encoder-decoder models. Figure 1d shows the final output parse tree after debinarization.",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 460,
"text": "Figure 1a",
"ref_id": null
},
{
"start": 636,
"end": 645,
"text": "Figure 1b",
"ref_id": null
},
{
"start": 738,
"end": 748,
"text": "(Figure 1c",
"ref_id": null
},
{
"start": 781,
"end": 790,
"text": "Figure 1d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "https://github.com/zeeeyang/two-local-neural-conparsers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our models consist of an unlabeled binarized tree parser and a label generator. Figure 1 shows a running example of our parsing model. The unlabeled parser (Figure 1a , 1b) learns an unlabeled parse tree using simple BiLSTM encoders (Hochreiter and Schmidhuber, 1997) . The label generator (Figure 1c, 1d ) predicts constituent labels for each span in the unlabeled tree using tree-LSTM models.",
"cite_spans": [
{
"start": 233,
"end": 267,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 1",
"ref_id": null
},
{
"start": 156,
"end": 166,
"text": "(Figure 1a",
"ref_id": null
},
{
"start": 290,
"end": 304,
"text": "(Figure 1c, 1d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "In particular, we design two different classification models for unlabeled parsing: the span model (Section 2.1) and the rule model (Section 2.2). The span model identifies the probability of an arbitrary span being a constituent span. For example, the span [1, 2] in Figure 1a belongs to the correct parse tree (Figure 1d ). Ideally, our model assigns a high probability to this span. In contrast, the span [0, 3] is not a valid constituent span and our model labels it with 0. Different from the span model, the rule model considers the probability P (",
"cite_spans": [
{
"start": 258,
"end": 261,
"text": "[1,",
"ref_id": null
},
{
"start": 262,
"end": 264,
"text": "2]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 268,
"end": 277,
"text": "Figure 1a",
"ref_id": null
},
{
"start": 312,
"end": 322,
"text": "(Figure 1d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "[i, j] \u2192 [i, k][k + 1, j]|S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "for the production rule that the span [i, j] is composed by two children spans [i, k] and [k + 1, j], where i \u2264 k < j. For example, in Figure 1a , the rule model assigns high probability to the rule",
"cite_spans": [
{
"start": 38,
"end": 44,
"text": "[i, j]",
"ref_id": null
},
{
"start": 79,
"end": 85,
"text": "[i, k]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 135,
"end": 144,
"text": "Figure 1a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "[0, 2] \u2192 [0, 0][1, 2] instead of the rule [0, 2] \u2192 [0, 1][2, 2].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "Given the local probabilities, we use the CKY algorithm to find the unlabeled binarized parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "The label generator encodes a binarized unlabeled tree and predicts constituent labels for every span. The encoder is a binary tree-LSTM (Tai et al., 2015; Zhu et al., 2015), which recursively composes the representation vectors for tree nodes bottom-up. Based on the representation vector of a constituent span, an LSTM decoder (Cho et al., 2014) generates a chain of constituent labels, which can represent unary rules. For example, the decoder outputs \"VP \u2192S\u2192 </L>\" for the span [4, 4] and \"NP\u2192 </L>\" for the span [0, 2] in Figure 1c, where </L> is a stopping symbol.",
"cite_spans": [
{
"start": 139,
"end": 157,
"text": "(Tai et al., 2015;",
"ref_id": "BIBREF52"
},
{
"start": 158,
"end": 175,
"text": "Zhu et al., 2015)",
"ref_id": "BIBREF66"
},
{
"start": 331,
"end": 349,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF7"
},
{
"start": 483,
"end": 486,
"text": "[4,",
"ref_id": null
},
{
"start": 487,
"end": 489,
"text": "4]",
"ref_id": null
},
{
"start": 518,
"end": 521,
"text": "[0,",
"ref_id": null
},
{
"start": 522,
"end": 524,
"text": "2]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 528,
"end": 537,
"text": "Figure 1c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2"
},
{
"text": "Given an unlabeled binarized tree T ub for the sentence S, S = w 0 , w 1 . . . w n\u22121 , the span model trains a neural network model P (Y [i,j] |S, \u0398) to distinguish constituent spans from non-constituent spans, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "0 \u2264 i \u2264 n \u2212 2, 1 \u2264 j < n, i < j. Y [i,j] = 1 indicates the span [i, j] is a constituent span ([i, j] \u2208 T ub )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": ", and Y [i,j] = 0 otherwise; \u0398 denotes the model parameters. We do not model spans of length 1, since the span [i, i] always belongs to T ub .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "Network Structure. Figure 2a shows the neural network structure for the binary classification model. At the bottom, a bidirectional LSTM layer encodes the input sentence to extract non-local features. In particular, we append a starting symbol <s> and an ending symbol </s> to the left-to-right LSTM and the right-to-left LSTM, respectively. We denote the output hidden vectors of the left-to-right LSTM and the right-to-left LSTM for w 0 , w 1 , . . . , w n\u22121 as f 1 , f 2 , . . . , f n and r 0 , r 1 , . . . , r n\u22121 , respectively. We obtain the representation vector v[i, j] of the span [i, j] by simply concatenating the bidirectional output vectors at the input word i and the input word",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 28,
"text": "Figure 2a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "j, v[i, j] = [f i+1 ; r i ; f j+1 ; r j ]. (1) v[i, j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "is then passed through a nonlinear transformation layer and the probability distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (Y [i,j] |S, \u0398) is given by o[i, j] = tanh(W o v[i, j] + b o ), u[i, j] = W u o[i, j] + b u , P (Y [i,j] |S, \u0398) = softmax(u[i, j]),",
"eq_num": "(2)"
}
],
"section": "Span Model",
"sec_num": "2.1"
},
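As a concrete illustration of Eqs. (1)-(2), here is a minimal numpy sketch (not the authors' released implementation): random toy vectors stand in for the BiLSTM states f and r, and all weight names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 5          # toy BiLSTM hidden size and sentence length

# Stand-ins for BiLSTM outputs: f_1..f_n (left-to-right, <s> prepended)
# and r_0..r_{n-1} (right-to-left, </s> appended); f[0] is unused padding
# so that f[i + 1] matches the paper's f_{i+1}.
f = rng.standard_normal((n + 1, d))
r = rng.standard_normal((n, d))

def span_vector(i, j):
    # Eq (1): v[i, j] = [f_{i+1}; r_i; f_{j+1}; r_j]
    return np.concatenate([f[i + 1], r[i], f[j + 1], r[j]])

W_o = rng.standard_normal((8, 4 * d))
b_o = np.zeros(8)
W_u = rng.standard_normal((2, 8))
b_u = np.zeros(2)

def span_probs(i, j):
    # Eq (2): tanh hidden layer, linear output layer, softmax over {0, 1}
    o = np.tanh(W_o @ span_vector(i, j) + b_o)
    u = W_u @ o + b_u
    e = np.exp(u - u.max())
    return e / e.sum()

p = span_probs(1, 2)   # P(Y_[1,2] | S) for the span "stock price"
```

With trained weights, `p[1]` would be read off as the probability that [1, 2] is a constituent span.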
{
"text": "where W o , b o , W u and b u are model parameters. Input Representation. Words and part-of-speech (POS) tags are integrated to obtain the input representation vectors. Given a word w, its corresponding characters c 0 , . . . , c |w|\u22121 and POS tag t, we first obtain the word embedding E w word , character embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "E c 0 char , . . . , E c |w|\u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "char , and POS tag embedding E t pos using lookup operations. Then a bidirectional LSTM is used to extract character-level features. Suppose that the last output vectors of the left-to-right and right-to-left LSTMs are h l char and h r char , respectively. The final input vector x input is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x char = tanh(W l char h l char + W r char h r char + b char ), x input = [E w word + x char ; E t pos ],",
"eq_num": "(3)"
}
],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "where W l char , W r char and b char are model parameters. Training objective. The training objective is to maximize P (Y [i,j] = 1|S, \u0398) for spans [i, j] \u2208 T ub and, at the same time, minimize P (Y [i,j] = 1|S, \u0398) for spans [i, j] / \u2208 T ub . Formally, the training loss for binary span classification L binary is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L binary = \u2212 [i,j]\u2208T ub log P (Y [i,j] = 1|S, \u0398) \u2212 [i,j] / \u2208T ub log P (Y [i,j] = 0|S, \u0398), (0 \u2264 i \u2264 n \u2212 2, 1 \u2264 j < n, i < j)",
"eq_num": "(4)"
}
],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "For a sentence of length n, there are n(n\u22121)/2 terms in total in Eq 4. Neural CKY algorithm. The unlabeled production probability for the rule r :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "[i, j] \u2192 [i, k][k + 1, j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "given by the binary classification model is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "P (r|S, \u0398) = P (Y [i,k] = 1|S, \u0398)P (Y [k+1,j] = 1|S, \u0398).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "During decoding, we find the optimal parse tree T * ub using the CKY algorithm. Note that our CKY algorithm differs from the standard CKY algorithm mainly in that there are no explicit phrase rule probabilities involved. Hence our model can be regarded as a zero-order constituent tree model, which is the most local. All structural relations in a constituent tree must be implicitly captured by the BiLSTM encoder over the sentence alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
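The zero-order CKY decode described above can be sketched as follows (a simplified Python version, not the released code): a binary tree is scored by the sum of the log-probabilities of its constituent spans, length-1 spans are free, and spans missing from the toy score table get a low default log-probability.

```python
import math

LOW = -5.0  # default log-prob for spans the toy table does not list

def cky(span_logp, n):
    """Max-probability unlabeled binary tree over words 0..n-1.

    span_logp: dict mapping (i, j) to log P(Y_[i,j] = 1 | S).
    Returns (score, tree); tree is (i, j) for leaves and
    (i, j, left, right) for internal nodes.
    """
    def lp(i, j):
        return 0.0 if i == j else span_logp.get((i, j), LOW)

    best, back = {}, {}
    for i in range(n):
        best[(i, i)] = 0.0          # length-1 spans are always constituents
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            # P(r) = P(Y_[i,k] = 1) * P(Y_[k+1,j] = 1), summed in log space
            k, s = max(
                ((k, best[(i, k)] + best[(k + 1, j)]
                  + lp(i, k) + lp(k + 1, j)) for k in range(i, j)),
                key=lambda t: t[1])
            best[(i, j)], back[(i, j)] = s, k

    def build(i, j):
        if i == j:
            return (i, j)
        k = back[(i, j)]
        return (i, j, build(i, k), build(k + 1, j))

    return best[(0, n - 1)], build(0, n - 1)

# Toy probabilities favouring the Figure 1 tree for
# "The stock price keeps falling" (spans [1,2], [0,2], [3,4]).
logp = {(1, 2): -0.1, (0, 2): -0.1, (3, 4): -0.1}
score, tree = cky(logp, 5)
```

On this toy input the decoder recovers the root rule [0, 4] → [0, 2][3, 4], mirroring Figure 1b.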
{
"text": "Figure 2: (a) Span model. (b) Rule model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "Multi-class Span Classification Model. The previous model performs binary classification to identify constituent spans. In this way, the classification model only captures the existence of constituent labels but does not leverage constituent label type information. In order to incorporate the syntactic label information into the span model, we use a multi-class classification model P (Y [i,j] = c|S, \u0398) to describe the probability that c is a constituent label for span [i, j] . The network structure is the same as the binary span classification model except for the last layer. For the last layer, given",
"cite_spans": [
{
"start": 490,
"end": 496,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o [i,j] in Eq 2, P (Y [i,j] = c|S, \u0398) is calculated by, m[i, j] = W m o[i, j] + b m , P (Y [i,j] = c|S, \u0398) = softmax(m[i, j]) [c] .",
"eq_num": "(5)"
}
],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "Here W m and b m are model parameters. The subscript [c] picks out the probability of the label c. The training loss is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L multi = \u2212 [i,j]\u2208T ub c\u2208[i,j],c =</L> log P (Y [i,j] = c|S, \u0398) \u2212 [i,j] / \u2208T ub log P (Y [i,j] = </L>|S, \u0398), (0 \u2264 i \u2264 n \u2212 2, 1 \u2264 j < n, i < j)",
"eq_num": "(6)"
}
],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "Note that there is an additional sum inside the first term in Eq 6, which is different from the first term in Eq 4. This means that we treat all constituent labels of a unary chain equally. For example, suppose there is a unary chain S\u2192VP in span [4, 4] . For this span, we hypothesize that both labels are plausible answers and pay equal attention to VP and S during training. For the second term in Eq 6, we maximize the probability of the ending label for non-constituent spans. For decoding, we transform the multi-class probability distribution into a binary probability distribution by using,",
"cite_spans": [
{
"start": 251,
"end": 254,
"text": "[4,",
"ref_id": null
},
{
"start": 255,
"end": 257,
"text": "4]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "P (Y [i,j] = 1|S, \u0398) = c,c =</L> P (Y [i,j] = c|S, \u0398), P (Y [i,j] = 0|S, \u0398) = P (Y [i,j] = </L>|S, \u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "In this way, the probability of a span being a constituent span takes all possible syntactic labels into consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Span Model",
"sec_num": "2.1"
},
{
"text": "The rule model directly calculates the probabilities of all possible splitting points k (i \u2264 k < j) for the span [i, j] . Suppose the partition score of splitting point k is ps k . The unlabeled production probability for the rule r :",
"cite_spans": [
{
"start": 113,
"end": 119,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "[i, j] \u2192 [i, k][k + 1, j] is given by a softmax distribution, P ([i, j] \u2192 [i, k][k + 1, j]|S, \u0398) = exp(ps k ) j\u22121 k =i exp(ps k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
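This softmax over split points can be rendered directly (toy scores; the function name is ours, not the authors'):

```python
import numpy as np

def split_distribution(partition_scores):
    # P([i,j] -> [i,k][k+1,j] | S) = exp(ps_k) / sum_{k'} exp(ps_{k'}),
    # with k ranging over the j - i candidate split points of [i, j].
    ps = np.asarray(partition_scores, dtype=float)
    e = np.exp(ps - ps.max())   # subtract the max for numerical stability
    return e / e.sum()

# E.g. for span [0, 2] with candidate splits k = 0 and k = 1:
probs = split_distribution([2.0, -1.0])
```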
{
"text": ".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "The training objective is to minimize the negative log-probability of all unlabeled production rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "L rule = \u2212 r\u2208T ub log P (r : [i, j] \u2192 [i, k][k + 1, j]|S, \u0398)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "The decoding algorithm is the standard CKY algorithm, which we omit here. The rule model can be regarded as a first-order constituent model, with the probability of each phrase rule being modeled. However, unlike structured learning algorithms (Finkel et al., 2008; Carreras et al., 2008) , which use a global score for each tree, our model learns each production rule probability individually. Such local learning has traditionally been found susceptible to label bias (Lafferty et al., 2001) . Our model relies solely on input representations for resolving this issue.",
"cite_spans": [
{
"start": 244,
"end": 265,
"text": "(Finkel et al., 2008;",
"ref_id": "BIBREF18"
},
{
"start": 266,
"end": 288,
"text": "Carreras et al., 2008)",
"ref_id": "BIBREF4"
},
{
"start": 469,
"end": 492,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "Span Representation. Figure 2b shows one possible network architecture for the rule model by taking the partition point k = 1 for the span [1, 3] as an example. The BiLSTM encoder layer in the bottom is the same as that of the previous span classification model. We obtain the span representation vectors using difference vectors (Wang and Chang, 2016; Cross and Huang, 2016b) . Formally, the span representation vector sr[i, j] is given by,",
"cite_spans": [
{
"start": 330,
"end": 352,
"text": "(Wang and Chang, 2016;",
"ref_id": "BIBREF56"
},
{
"start": 353,
"end": 376,
"text": "Cross and Huang, 2016b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 21,
"end": 30,
"text": "Figure 2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s[i, j] = [f j+1 \u2212 f i ; r i \u2212 r j+1 ], sr[i, j] = [s[0, i \u2212 1]; s[i, j]; s[j + 1, n \u2212 1]].",
"eq_num": "(7)"
}
],
"section": "Rule Model",
"sec_num": "2.2"
},
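Eq (7) can be sketched directly in numpy. The padding convention below is an assumption, chosen so that the boundary pieces s[0, -1] and s[n, n-1] reduce to zero vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 5
# Toy BiLSTM states: f_0..f_n (left-to-right, <s> at index 0) and
# r_0..r_n (right-to-left, </s> at index n), so boundary differences
# cancel to zero below.
f = rng.standard_normal((n + 1, d))
r = rng.standard_normal((n + 1, d))

def s(i, j):
    # s[i, j] = [f_{j+1} - f_i ; r_i - r_{j+1}]
    return np.concatenate([f[j + 1] - f[i], r[i] - r[j + 1]])

def sr(i, j):
    # sr[i, j] = [s[0, i-1]; s[i, j]; s[j+1, n-1]]   (Eq 7)
    return np.concatenate([s(0, i - 1), s(i, j), s(j + 1, n - 1)])

v = sr(1, 3)   # representation of the span [1, 3] with its left/right context
```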
{
"text": "We first combine the difference vectors (f j+1 \u2212 f i ) and (r i \u2212 r j+1 ) to obtain a simple span representation vector s [i, j] . In order to take more contextual information such as f p where p > j + 1 and r q where",
"cite_spans": [
{
"start": 122,
"end": 128,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "q < i, we concatenate s[0, i \u2212 1], s[i, j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": ", and s[j + 1, n \u2212 1] to produce the final span representation vector sr [i, j] . We then transform sr[i, j] to an output vector r[i, j] using an activation function \u03c6,",
"cite_spans": [
{
"start": 73,
"end": 79,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r[i, j] = \u03c6(W M r sr[i, j] + b M r ),",
"eq_num": "(8)"
}
],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "where W M r and b M r are model parameters, and M is a parameter set index. We use separate parameters for the nonlinear transformation layer: M \u2208 {P, L, R} are for the parent span [i, j], the left child span [i, k] and the right child span [k + 1, j], respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "After obtaining the span representation vectors, we use these vectors to calculate the partition score ps k . In particular, we investigate two scoring methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "Linear Model. In the linear model, the partition score is calculated by a linear affine transformation. For the splitting point k,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "ps k = w T ll,k r[i, k] + w T lr,k r[k + 1, j] + b ll,k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "where w T ll,k and w T lr,k are two parameter vectors, and b ll,k is a scalar. Biaffine model. Since the number of possible splitting points varies with the span length, we also try a biaffine scoring model (as shown in Figure 2b ), which is good at handling variable-sized classification problems (Dozat and Manning, 2016; Ma and Hovy, 2017 ). The biaffine model produces the score lps k between the parent span [i, j] and the left child span [i, k] using a biaffine scorer",
"cite_spans": [
{
"start": 305,
"end": 330,
"text": "(Dozat and Manning, 2016;",
"ref_id": "BIBREF14"
},
{
"start": 331,
"end": 348,
"text": "Ma and Hovy, 2017",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 227,
"end": 236,
"text": "Figure 2b",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "lps k = (r[i, j] \u2295 1) T W pl (r[i, k] \u2295 1)",
"eq_num": "(9)"
}
],
"section": "Rule Model",
"sec_num": "2.2"
},
{
"text": "where W pl is a parameter matrix. Similarly, we calculate the score rps k between the parent span [i, j] and the right child span [k + 1, j] using W pr and b pr as parameters. The overall partition score ps k is therefore given by ps k = lps k + rps k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule Model",
"sec_num": "2.2"
},
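A minimal sketch of the biaffine scorer in Eq (9), assuming \u2295 1 denotes appending a constant bias dimension to each span vector (toy sizes, hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(2)
h = 4                                  # toy size of r[i, j]
W_pl = rng.standard_normal((h + 1, h + 1))

def biaffine(parent, child, W):
    # lps_k = (r[i,j] (+) 1)^T  W  (r[i,k] (+) 1)   -- Eq (9)
    p1 = np.append(parent, 1.0)        # append the bias dimension
    c1 = np.append(child, 1.0)
    return float(p1 @ W @ c1)

lps = biaffine(rng.standard_normal(h), rng.standard_normal(h), W_pl)
```

The appended 1 makes the score affine rather than purely bilinear, so span-specific bias terms are absorbed into W.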
{
"text": "Lexicalized Tree-LSTM Encoder. As shown in Figure 1c , we use a lexicalized tree-LSTM (Teng and Zhang, 2016) for encoding, which shows good representation abilities for unlabeled trees. The encoder first propagates lexical information from two children spans to their parent using a lexical gate, then produces the representation vector of the parent span by composing the vectors of the children spans using a binarized tree-LSTM (Tai et al., 2015; Zhu et al., 2015) . Formally, the lexical vector tx[i, j] for the span [i, j] with the partition point at k is defined by:",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Teng and Zhang, 2016)",
"ref_id": "BIBREF54"
},
{
"start": 426,
"end": 444,
"text": "(Tai et al., 2015;",
"ref_id": "BIBREF52"
},
{
"start": 445,
"end": 462,
"text": "Zhu et al., 2015)",
"ref_id": "BIBREF66"
},
{
"start": 516,
"end": 522,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 40,
"end": 49,
"text": "Figure 1c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "i lex [i,j] = \u03c3(W lex l tx[i, k] + W lex r tx[k + 1, j] + W lex lh h [i,k] + W lex rh h [k+1,j] + b lex ) tx[i, j] = i lex [i,j] tx[i, k] + (1.0 \u2212 i lex [i,j] ) tx[k + 1, j],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
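The lexical gate above amounts to a learned, element-wise convex combination of the two children's lexical vectors. A toy numpy sketch (hypothetical parameter names and sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
# Toy stand-ins for W_lex_l, W_lex_r, W_lex_lh, W_lex_rh and b_lex.
W_l, W_r = rng.standard_normal((d, d)), rng.standard_normal((d, d))
W_lh, W_rh = rng.standard_normal((d, d)), rng.standard_normal((d, d))
b = np.zeros(d)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lexical_gate(tx_l, tx_r, h_l, h_r):
    # i_lex = sigma(W_l tx_l + W_r tx_r + W_lh h_l + W_rh h_r + b)
    g = sigmoid(W_l @ tx_l + W_r @ tx_r + W_lh @ h_l + W_rh @ h_r + b)
    # tx_p = i_lex * tx_l + (1 - i_lex) * tx_r   (element-wise)
    return g * tx_l + (1.0 - g) * tx_r

tx_p = lexical_gate(np.ones(d), np.zeros(d), np.zeros(d), np.zeros(d))
```

Because the gate interpolates rather than adds, each coordinate of tx_p stays between the corresponding coordinates of the two children's lexical vectors.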
{
"text": "where W lex l , W lex r and b lex are model parameters, \u2299 is element-wise multiplication and \u03c3 is the logistic function. The lexical vector tx[i, i] for the leaf node i is the concatenation of the output vectors of the BiLSTM encoder and the input representation x input [i] (Eq 3), as shown in Figure 1c .",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 300,
"text": "Figure 1c",
"ref_id": null
}
],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "The output state vector h[i, j] of the span [i, j] given by a binary tree LSTM encoder is,",
"cite_spans": [
{
"start": 44,
"end": 50,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "i p = \u03c3(W 1 tx p + W 2 h l + W 3 c l + W 4 h r + W 5 c r + b 1 ), f l p = \u03c3(W 6 tx p + W 7 h l + W 8 c l + W 9 h r + W 10 c r + b 2 ), f r p = \u03c3(W 11 tx p + W 12 h l + W 13 c l + W 14 h r + W 15 c r + b 3 ), g p = tanh(W 16 tx p + W 17 h l + W 18 h r + b 4 ), c p = f l p c l + f r p c r + i p g p , o p = \u03c3(W 19 tx p + W 20 h l + W 21 h r + W 22 c p + b 5 ), h p = o p tanh(c p ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "Here the subscripts p, l and r denote",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "[i, j], [i, k] and [k + 1, j], respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "Label Decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "Suppose that the constituent label chain for the span",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "[i, j] is (YL 0 [i,j] , YL 1 [i,j] , . . . , YL m [i,j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "). The decoder for the span [i, j] learns a conditional language model depending on the output vector h[i, j] from the tree LSTM encoder. Formally, the probability distribution of generating the label at time step z is given by,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "P (YL z [i,j] |T ub , YL z<m [i,j] ) = softmax g(h[i, j], E label (YL z\u22121 [i,j] ), d z\u22121 ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "where YL z<m [i,j] is the decoding prefix, d z\u22121 is the state vector of the decoder LSTM and E label (YL z\u22121 [i,j] ) is the embedding of the previous output label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "The training objective is to minimize the negative log-likelihood of the label generation distribution,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
{
"text": "L label [i, j] = \u2212 m z=0 log P (YL z [i,j] |T ub , YL z<m [i,j] ), L label = [i,j]\u2208T ub L label [i, j].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Generator",
"sec_num": "2.3"
},
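The two loss equations above amount to summing, over every span of the unlabeled tree, the negative log-probability of each gold label in the span's chain. A minimal sketch, assuming teacher forcing and representing the decoder's per-step softmax outputs as hypothetical probability dictionaries:

```python
import math

def span_label_loss(step_probs, gold_labels):
    """Negative log-likelihood of one span's label chain (L_label[i, j]).

    step_probs[z] stands in for the decoder's softmax distribution at step z
    (conditioned on the tree encoding and the gold prefix, i.e. teacher
    forcing); gold_labels[z] is the gold label at step z. Both are
    illustrative placeholders for real model outputs.
    """
    return -sum(math.log(step_probs[z][y]) for z, y in enumerate(gold_labels))

def total_label_loss(spans):
    """L_label: sum of per-span losses over all spans [i,j] of the unlabeled tree."""
    return sum(span_label_loss(probs, gold) for probs, gold in spans)
```

In training, minimizing this quantity drives the conditional label language model toward the gold label chains.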
{
"text": "In conclusion, each model contains an unlabeled structure predictor and a label generator. The latter is the same for all models. All the span models perform binary classification. The difference is that BinarySpan doesn't consider label information for unlabeled tree prediction. While MultiSpan guides unlabeled tree prediction with such information, simulating binary classifications. The unlabeled parser and the label generator share parts of the network components, such as word embeddings, char embeddings, POS embeddings and the BiLSTM encoding layer. We jointly train the unlabeled parser and the label generator for each model by minimizing the overall loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint training",
"sec_num": "2.4"
},
{
"text": "L total = L parser + L label + \u03bb 2 ||\u0398|| 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint training",
"sec_num": "2.4"
},
{
"text": "where \u03bb is a regularization hyper-parameter. We set L parser = L binary or L parser = L multi and L parser = L rule when using the binary span classification model, the multi-class model and the rule model, respectively. (Xue et al., 2005) . The training set consists of articles 001-270 and 440-1151, the development set contains articles 301-325 and the test set includes articles 271-300. We use automatically reassigned POS tags in the same way as Cross and Huang (2016b) for English and Dyer et al. (2016) for Chinese. We use ZPar (Zhang and Clark, 2011) 1 to binarize both English and Chinese data with the head rules of Collins (2003) . The head directions of the binarization results are ignored during training. The types of English and Chinese constituent span labels after binarization are 52 and 56, respectively. The maximum number of greedy decoding steps for generating consecutive constituent labels is limited to 4 for both English and Chinese. We evaluate parsing performance in terms of both unlabeled bracketing metrics and labeled bracketing metrics including unlabeled F1 (UF) 2 , labeled precision (LP), labeled recall (LR) and labeled bracketing F1 (LF) after debinarization using EVALB 3 .",
"cite_spans": [
{
"start": 221,
"end": 239,
"text": "(Xue et al., 2005)",
"ref_id": "BIBREF60"
},
{
"start": 452,
"end": 475,
"text": "Cross and Huang (2016b)",
"ref_id": "BIBREF13"
},
{
"start": 480,
"end": 510,
"text": "English and Dyer et al. (2016)",
"ref_id": null
},
{
"start": 627,
"end": 641,
"text": "Collins (2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint training",
"sec_num": "2.4"
},
{
"text": "Unknown words. For English, we combine the methods of Dyer et al. (2016) and Cross and Huang (2016b) to handle unknown words. In particular, we first map all words (not just singleton words) in the training corpus into unknown word classes using the same rule as Dyer et al. (2016) . During each training epoch, every word w in the training corpus is stochastically mapped into its corresponding unknown word class unk w with probability P (w \u2192 unk w ) = \u03b3 \u03b3+#w , where #w is the frequency count and \u03b3 is a control parameter. Intuitively, the more times a word appears, the less opportunity it will be mapped into its unknown word type. There are 54 unknown word types for English. Following Cross and Huang (2016b) , \u03b3 = 0.8375. For Chinese, we simply use one unknown word type to dynamically replace singletons words with a probability of 0.5.",
"cite_spans": [
{
"start": 54,
"end": 72,
"text": "Dyer et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 77,
"end": 100,
"text": "Cross and Huang (2016b)",
"ref_id": "BIBREF13"
},
{
"start": 263,
"end": 281,
"text": "Dyer et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 692,
"end": 715,
"text": "Cross and Huang (2016b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint training",
"sec_num": "2.4"
},
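The stochastic replacement rule P(w \u2192 unk_w) = \u03b3/(\u03b3 + #w) can be sketched as follows. Collapsing the 54 rule-based English unknown classes to a single '<unk>' symbol is a simplification for illustration (it matches the Chinese setup more than the English one), and the corpus counts are hypothetical.

```python
import random
from collections import Counter

GAMMA = 0.8375  # value from Cross and Huang (2016b), as used in the paper

def unk_prob(count, gamma=GAMMA):
    """Probability of replacing a training token whose word occurs `count` times."""
    return gamma / (gamma + count)

def maybe_unk(word, counts, rng, gamma=GAMMA):
    """Stochastically map a training token to a (single, simplified) unknown class.

    Frequent words are almost never replaced; rare words are replaced often,
    so the model sees unknown-word contexts during training.
    """
    if rng.random() < unk_prob(counts[word], gamma):
        return '<unk>'
    return word
```

For a singleton word the replacement probability is 0.8375 / 1.8375 ≈ 0.46, while for a word seen 1000 times it is below 0.1%.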
{
"text": "Hyper-parameters. Table 1 shows all hyper-parameters. These values are tuned using the corresponding development sets. We optimize our models with stochastic gradient descent (SGD). The initial learning rate is 0.1. Our model are initialized with pretrained word embeddings both for English and Chinese. The pretrained word embeddings are the same as those used in Dyer et al. (2016) . The other parameters are initialized according to the default settings of DyNet . We apply dropout (Srivastava et al., 2014) to the inputs of every LSTM layer, including the word LSTM layers, the character LSTM layers, the tree-structured LSTM layers and the constituent label LSTM layers. For Chinese, we find that 0.3 is a good choice for the dropout probability. The number of training epochs is decided by the evaluation performances on development set. In particular, we perform evaluations on development set for every 10,000 examples. The training procedure stops when the results of next 20 evaluations do not become better than the previous best record. ",
"cite_spans": [
{
"start": 365,
"end": 383,
"text": "Dyer et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 485,
"end": 510,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Joint training",
"sec_num": "2.4"
},
{
"text": "We study the two span representation methods, namely the simple concatenating representation v[i, j] (Eq 1) and the combining of three difference vectors sr[i, j] (Eq 7), and the two representative models, i.e, the binary span classification model (BinarySpan) and the biaffine rule model (BiaffineRule). We investigate appropriate representations for different models on the English dev dataset. Table 2 shows the effects of different span representation methods, where v[i, j] is better for BinarySpan and sr[i, j] is better for BiaffineRule. When using sr [i, j] for BinarySpan, the performance drops greatly (92.17 \u2192 91.80). Similar observations can be found when replacing sr [i, j] with v[i, j] for BiaffineRule. Therefore, we use v[i, j] for the span models and sr [i, j] for the rule models in latter experiments. Table 3 shows the main results on the English and Chinese dev sets. For English, BinarySpan acheives 92.17 LF score. The multi-class span classifier (MultiSpan) is much better than BinarySpan due to the awareness of label information. Similar phenomenon can be observed on the Chinese dataset. We also test the linear rule (LinearRule) methods. For English, LinearRule obtains 92.03 LF score, which is much worse than BiaffineRule. In general, the performances of BiaffineRule and MultiSpan are quite close both for English and Chinese.",
"cite_spans": [
{
"start": 559,
"end": 565,
"text": "[i, j]",
"ref_id": null
},
{
"start": 681,
"end": 687,
"text": "[i, j]",
"ref_id": null
},
{
"start": 772,
"end": 778,
"text": "[i, j]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 822,
"end": 829,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Development Results",
"sec_num": "3.2"
},
{
"text": "For MultiSpan, both the first stage (unlabeled tree prediction) and the second stage (label generation) exploit constituent types. We design three development experiments to answer what the accuracy would be like of the predicted labels of the first stage were directly used in the second stage. The first one doesn't include the label probabilities of the first stage for the second stage. For the second experiment, we directly use the model output from the first setting for decoding, summing up the label classification probabilities of the first stage and the label generation probabilities of the second stage in order to make label decisions. For the third setting, we do the sum-up of label probabilities for the second stage both during training and decoding. These settings give LF scores of 92.44, 92.49 and 92.44, respectively, which are very similar. We choose the first one due to its simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development Results",
"sec_num": "3.2"
},
{
"text": "English. Table 4 summarizes the performances of various constituent parsers on PTB test set. BinarySpan achieves 92.1 LF score, outperforming the neural CKY parsing models (Durrett and Klein, 2015) and the top-down neural parser . MultiSpan and BiaffineRule obtain similar performances. Both are better than BianrySpan. MultiSpan obtains 92.4 LF score, which is very close to the state-of-the-art result when no external parses are included. An interesting observation is that the model of show higher LP score than our models (93.2 v.s 92.6), while our model gives better LR scores (90.4 v.s. 93.2) . This potentially suggests that the global constraints such as structured label loss used in helps make careful decisions. Our local models are likely to gain a better balance between bold guesses and accurate scoring of constituent spans. Table 7 shows the unlabeled parsing accuracies on PTB test set. MultiSpan performs the best, showing 92.50 UF score. When the unlabeled parser is 100% correct, BiaffineRule are better than the other two, producing an oracle LF score of 97.12%, which shows the robustness of our label generator. The decoding speeds of BinarySpan and MutliSpan are similar, reaching about 21 sentences per second. BiaffineRule is much slower than the span models.",
"cite_spans": [
{
"start": 172,
"end": 197,
"text": "(Durrett and Klein, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 583,
"end": 599,
"text": "(90.4 v.s. 93.2)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 841,
"end": 848,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "3.3"
},
{
"text": "Chinese. Table 5 shows the parsing performance on CTB 5.1 test set. Under the same settings, all the three models outperform the state-of-the-art neural model (Dyer et al., 2016; Liu and Zhang, 2017a) . Charniak (2016) (S,R,E) 93.8 Sagae and Lavie (2006) 87.8 88.1 87.9 Durrett and Klein (2015) (S) 91.1 Petrov and Klein (2007) 90.1 90.2 90.1 Vinyals et al. (2015) (S, E) 92.8 Carreras et al. (2008) 90.7 91.4 91.1 Charniak and Johnson (2005) (S, R) 91.2 91.8 91.5 Zhu et al. (2013) 90.2 90.7 90.4 Huang (2008b) (R) 91.7 Watanabe and Sumita (2015) 90.7 Huang and Harper (2009) Compared with the in-order transition-based parser, our best model improves the labeled F1 score by 1.2 (86.1 \u2192 87.3). In addition, MultiSpan and BiaffineRule achieve better performance than the reranking system using recurrent neural network grammars (Dyer et al., 2016) and methods that do joint POS tagging and parsing (Wang and Xue, 2014; Wang et al., 2015) .",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(Dyer et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 179,
"end": 200,
"text": "Liu and Zhang, 2017a)",
"ref_id": "BIBREF33"
},
{
"start": 203,
"end": 218,
"text": "Charniak (2016)",
"ref_id": "BIBREF8"
},
{
"start": 219,
"end": 226,
"text": "(S,R,E)",
"ref_id": null
},
{
"start": 232,
"end": 254,
"text": "Sagae and Lavie (2006)",
"ref_id": "BIBREF45"
},
{
"start": 270,
"end": 298,
"text": "Durrett and Klein (2015) (S)",
"ref_id": null
},
{
"start": 304,
"end": 327,
"text": "Petrov and Klein (2007)",
"ref_id": "BIBREF42"
},
{
"start": 343,
"end": 371,
"text": "Vinyals et al. (2015) (S, E)",
"ref_id": null
},
{
"start": 377,
"end": 399,
"text": "Carreras et al. (2008)",
"ref_id": "BIBREF4"
},
{
"start": 415,
"end": 442,
"text": "Charniak and Johnson (2005)",
"ref_id": "BIBREF5"
},
{
"start": 465,
"end": 482,
"text": "Zhu et al. (2013)",
"ref_id": "BIBREF65"
},
{
"start": 498,
"end": 515,
"text": "Huang (2008b) (R)",
"ref_id": null
},
{
"start": 521,
"end": 547,
"text": "Watanabe and Sumita (2015)",
"ref_id": "BIBREF59"
},
{
"start": 553,
"end": 576,
"text": "Huang and Harper (2009)",
"ref_id": "BIBREF22"
},
{
"start": 829,
"end": 848,
"text": "(Dyer et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 899,
"end": 919,
"text": "(Wang and Xue, 2014;",
"ref_id": "BIBREF57"
},
{
"start": 920,
"end": 938,
"text": "Wang et al., 2015)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 5",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "3.3"
},
{
"text": "Constituent label. Table 6 shows the LF scores for eight major constituent labels on PTB test set. BinarySpan consistently underperforms to the other two models. The error distribution of MultiSpan and BiaffineRule are different. For constituent labels including SBAR, WHNP and QP, BiaffineRule is the winner. This is likely because the partition point distribution of these labels are less trivial than other labels. For NP, PP, ADVP and ADJP, MultiSpan obtains better scores than BiaffineRule, showing the importance of the explicit type information for correctly identifying these labels. In addition, the three models give similar performances of VP and S, indicating that simple local classifiers might be sufficient enough for these two labels. LF v.s. Length. Figure 3 and Figure 4 show the LF score distributions against sentence length and span length on the PTB test set, respectively. We also include the output of the previous state-of-the-art top-down neural parser and the reranking results of transition-based neural generative parser (RNNG) (Dyer et al., 2016) , which represents models that can access more global information. For sentence length, the overall trends of the five models are similar. The LF score decreases as the length increases, but there is no salient difference in the downing rate (also true for span length \u22646), demonstrating our local models can alleviate the label bias problem. BiaffineRule outperforms the other three models (except RNNG) when the sentence length less than 30 or the span length less than 4. This suggests that when the length is short, the rule model can easily recognize the partition point. When the sentence length greater than 30 or the span length greater than 10, MultiSpan becomes the best option (except RNNG), showing that for long spans, the constituent label information are useful. Table 7 : UF, oralce LF and speed.",
"cite_spans": [
{
"start": 1057,
"end": 1076,
"text": "(Dyer et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 19,
"end": 26,
"text": "Table 6",
"ref_id": "TABREF11"
},
{
"start": 767,
"end": 775,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 780,
"end": 788,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1855,
"end": 1862,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4"
},
{
"text": "Globally trained discriminative models have given highly competitive accuracies on graph-based constituent parsing. The key is to explicitly consider connections between output substructures in order to avoid label bias. State-of-the-art statistical methods use a single model to score a feature representation for all phrase-structure rules in a parse tree (Taskar et al., 2004; Finkel et al., 2008; Carreras et al., 2008) . More sophisticated features that span over more than one rule have been used for reranking (Huang, 2008b) . Durrett and Klein (2015) used neural networks to augment manual indicator features for CRF parsing. Structured learning has been used for transition-based constituent parsing also (Sagae and Lavie, 2005; Zhang and Clark, 2009; Zhang and Clark, 2011; Zhu et al., 2013) , and neural network models have been used to substitute indicator features for transition-based parsing (Watanabe and Sumita, 2015; Dyer et al., 2016; Goldberg et al., 2014; Kiperwasser and Goldberg, 2016; Cross and Huang, 2016a; Coavoux and Crabb\u00e9, 2016; Shi et al., 2017) . Compared to the above methods on constituent parsing, our method does not use global structured learning, but instead learns local constituent patterns, relying on a bi-directional LSTM encoder for capturing non-local structural relations in the input. Our work is inspired by the biaffine dependency parser of Dozat and Manning (2016) . Similar to our work, show that a model that bi-partitions spans locally can give high accuracies under a highly-supervised setting. Compared to their model, we build direct local span classification and CFG rule classification models instead of using span labeling and splitting features to learn a margin-based objective. Our results are better although our models are simple. In addition, they collapse unary chains as fixed patterns while we handle them with an encoder-decoder model.",
"cite_spans": [
{
"start": 358,
"end": 379,
"text": "(Taskar et al., 2004;",
"ref_id": "BIBREF53"
},
{
"start": 380,
"end": 400,
"text": "Finkel et al., 2008;",
"ref_id": "BIBREF18"
},
{
"start": 401,
"end": 423,
"text": "Carreras et al., 2008)",
"ref_id": "BIBREF4"
},
{
"start": 517,
"end": 531,
"text": "(Huang, 2008b)",
"ref_id": "BIBREF26"
},
{
"start": 534,
"end": 558,
"text": "Durrett and Klein (2015)",
"ref_id": "BIBREF15"
},
{
"start": 714,
"end": 737,
"text": "(Sagae and Lavie, 2005;",
"ref_id": "BIBREF44"
},
{
"start": 738,
"end": 760,
"text": "Zhang and Clark, 2009;",
"ref_id": "BIBREF61"
},
{
"start": 761,
"end": 783,
"text": "Zhang and Clark, 2011;",
"ref_id": "BIBREF62"
},
{
"start": 784,
"end": 801,
"text": "Zhu et al., 2013)",
"ref_id": "BIBREF65"
},
{
"start": 907,
"end": 934,
"text": "(Watanabe and Sumita, 2015;",
"ref_id": "BIBREF59"
},
{
"start": 935,
"end": 953,
"text": "Dyer et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 954,
"end": 976,
"text": "Goldberg et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 977,
"end": 1008,
"text": "Kiperwasser and Goldberg, 2016;",
"ref_id": "BIBREF27"
},
{
"start": 1009,
"end": 1032,
"text": "Cross and Huang, 2016a;",
"ref_id": "BIBREF12"
},
{
"start": 1033,
"end": 1058,
"text": "Coavoux and Crabb\u00e9, 2016;",
"ref_id": "BIBREF10"
},
{
"start": 1059,
"end": 1076,
"text": "Shi et al., 2017)",
"ref_id": "BIBREF46"
},
{
"start": 1390,
"end": 1414,
"text": "Dozat and Manning (2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We investigated two locally trained span-level constituent parsers using BiLSTM encoders, demonstrating empirically the strength of the local models on learning syntactic structures. On standard evaluation, our models give the best results among existing neural constituent parsers without external parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/SUTDNLP/ZPar 2 For UF, we exclude the sentence span [0,n-1] and all spans with length 1. 3 http://nlp.cs.nyu.edu/evalb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Yue Zhang is the corresponding author. We thank all the anonymous reviews for their thoughtful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parsing by chunks",
"authors": [
{
"first": "Steven",
"middle": [
"P."
],
"last": "Abney",
"suffix": ""
}
],
"year": 1991,
"venue": "Principle-based parsing",
"volume": "",
"issue": "",
"pages": "257--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven P. Abney. 1991. Parsing by chunks. In Principle-based parsing, pages 257-278. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Globally normalized transition-based neural networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Presta",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "2442--2452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2442-2452, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Top accuracy and fast dependency parsing is not a contradiction",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 89-97, Beijing, China, August. Coling 2010 Organizing Committee.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Tag, dynamic programming, and the perceptron for efficient, feature-rich parsing",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Twelfth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras, Michael Collins, and Terry Koo. 2008. Tag, dynamic programming, and the perceptron for efficient, feature-rich parsing. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 9-16. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Coarse-to-fine n-best parsing and maxent discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 173-180, Ann Arbor, Michigan, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 132-139. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On the properties of neural machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.1259"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Parsing as language modeling",
"authors": [
{
"first": "Do Kook",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2331--2336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing, pages 2331-2336, Austin, Texas, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fast and accurate deep network learning by exponential linear units (elus)",
"authors": [
{
"first": "Djork-Arn\u00e9",
"middle": [],
"last": "Clevert",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Unterthiner",
"suffix": ""
},
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Djork-Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (elus). CoRR, abs/1511.07289.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural greedy constituent parsing with dynamic oracles",
"authors": [
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Crabb\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "172--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximin Coavoux and Benoit Crabb\u00e9. 2016. Neural greedy constituent parsing with dynamic oracles. In Pro- ceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 172-182, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Head-driven statistical models for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational linguistics",
"volume": "29",
"issue": "4",
"pages": "589--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational linguistics, 29(4):589-637.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Incremental parsing with minimal features using bi-directional lstm",
"authors": [
{
"first": "James",
"middle": [],
"last": "Cross",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "32--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Cross and Liang Huang. 2016a. Incremental parsing with minimal features using bi-directional lstm. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 32-37, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles",
"authors": [
{
"first": "James",
"middle": [],
"last": "Cross",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Cross and Liang Huang. 2016b. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1-11, Austin, Texas, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR, abs/1611.01734.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural crf parsing",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "302--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Durrett and Dan Klein. 2015. Neural crf parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 302-312, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recurrent neural network grammars",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "199--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Parsing as reduction",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1523--1533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Andr\u00e9 F. T. Martins. 2015. Parsing as reduction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1523-1533, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Efficient, feature-based, conditional random field parsing",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kleeman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "959--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL-08: HLT, pages 959-967, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving neural parsing by disentangling model combination and reranking effects",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fried",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "161--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 161-166, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A tabular method for dynamic oracles in transition-based parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Sartorio",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "119--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg, Francesco Sartorio, and Giorgio Satta. 2014. A tabular method for dynamic oracles in transition-based parsing. Transactions of the Association for Computational Linguistics, 2:119-130.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Self-training PCFG grammars with latent annotations across languages",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "832--841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqiang Huang and Mary Harper. 2009. Self-training PCFG grammars with latent annotations across languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 832-841, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Self-training with products of latent variable grammars",
"authors": [
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Harper",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "12--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongqiang Huang, Mary Harper, and Slav Petrov. 2010. Self-training with products of latent variable grammars. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 12-22, Cambridge, MA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Structured perceptron with inexact search",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Suphan",
"middle": [],
"last": "Fayong",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142-151, Montr\u00e9al, Canada, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Forest reranking: Discriminative parsing with non-local features",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "586--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang. 2008a. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586-594, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Forest reranking: Discriminative parsing with non-local features",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "586--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang. 2008b. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-08: HLT, pages 586-594, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. CoRR, abs/1603.04351.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423-430, Sapporo, Japan, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Efficient third-order dependency parsers",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1-11, Uppsala, Sweden, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Dual decomposition for parsing with non-projective head automata",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1288--1298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1288-1298, Cambridge, MA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "What do recurrent neural network grammars learn about syntax?",
"authors": [
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers",
"volume": "1",
"issue": "",
"pages": "1249--1258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1249-1258, Valencia, Spain, April. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the eighteenth international conference on machine learning, ICML",
"volume": "1",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, Fernando Pereira, et al. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the eighteenth international conference on machine learning, ICML, volume 1, pages 282-289.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "In-order transition-based constituent parsing",
"authors": [
{
"first": "Jiangming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "413--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiangming Liu and Yue Zhang. 2017a. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics, 5:413-424.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Shift-reduce constituent parsing with neural lookahead features",
"authors": [
{
"first": "Jiangming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "45--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiangming Liu and Yue Zhang. 2017b. Shift-reduce constituent parsing with neural lookahead features. Transactions of the Association for Computational Linguistics, 5:45-58.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural probabilistic model for non-projective MST parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.00874"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2017. Neural probabilistic model for non-projective MST parsing. arXiv preprint arXiv:1701.00874.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational linguistics, 19(2):313-330.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Turbo parsers: Dependency parsing by approximate variational inference",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Aguiar",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Figueiredo",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "34--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Martins, Noah Smith, Eric Xing, Pedro Aguiar, and Mario Figueiredo. 2010. Turbo parsers: Dependency parsing by approximate variational inference. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 34-44, Cambridge, MA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Reranking and self-training for parser adaptation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "337--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 337-344, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Dynet: The dynamic neural network toolkit",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Clothiaux",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.03980"
]
},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, et al. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "An efficient algorithm for projective dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 8th International Workshop on Parsing Technologies (IWPT)",
"volume": "",
"issue": "",
"pages": "149--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pages 149-160.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Algorithms for deterministic incremental dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Comput. Linguist",
"volume": "34",
"issue": "4",
"pages": "513--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Comput. Linguist., 34(4):513-553, December.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Improved inference for unlexicalized parsing",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics;",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "Proceedings of the Main Conference",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Proceedings of the Main Conference, pages 404-411, Rochester, New York, April. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A classifier-based parser with linear run-time complexity",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125-132. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Parser combination by reparsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "129--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 129-132, New York City, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Fast(er) exact decoding and global training for transition-based dependency parsing via a minimal feature set",
"authors": [
{
"first": "Tianze",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "12--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianze Shi, Liang Huang, and Lillian Lee. 2017. Fast(er) exact decoding and global training for transition-based dependency parsing via a minimal feature set. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 12-23, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Bayesian symbol-refined tree substitution grammars for syntactic parsing",
"authors": [
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Akinori",
"middle": [],
"last": "Fujino",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "440--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyuki Shindo, Yusuke Miyao, Akinori Fujino, and Masaaki Nagata. 2012. Bayesian symbol-refined tree substitution grammars for syntactic parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440-448, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Parsing with compositional vector grammars",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "455--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 455-465, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "A minimal span-based neural constituency parser",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "818--827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818-827, Vancouver, Canada, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104-3112.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1556--1566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1556-1566, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Max-margin parsing",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Taskar, Dan Klein, Mike Collins, Daphne Koller, and Christopher Manning. 2004. Max-margin parsing. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 1-8, Barcelona, Spain, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Bidirectional tree-structured LSTM with head lexicalization",
"authors": [
{
"first": "Zhiyang",
"middle": [],
"last": "Teng",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyang Teng and Yue Zhang. 2016. Bidirectional tree-structured LSTM with head lexicalization. CoRR, abs/1611.06788.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Grammar as a foreign language",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2773--2781",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, \u0141ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773-2781.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Graph-based dependency parsing with bidirectional LSTM",
"authors": [
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2306--2315",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2306-2315, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Joint POS tagging and transition-based constituent parsing in Chinese with non-local features",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "733--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang and Nianwen Xue. 2014. Joint POS tagging and transition-based constituent parsing in Chinese with non-local features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 733-742, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Feature optimization for constituent parsing via neural networks",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1138--1147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Haitao Mi, and Nianwen Xue. 2015. Feature optimization for constituent parsing via neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1138-1147, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Transition-based neural constituent parsing",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1169--1179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe and Eiichiro Sumita. 2015. Transition-based neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1169-1179, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "The Penn Chinese Treebank: Phrase structure annotation of a large corpus",
"authors": [
{
"first": "Naiwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Fu-Dong",
"middle": [],
"last": "Chiou",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural Language Engineering",
"volume": "11",
"issue": "02",
"pages": "207--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(02):207-238.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Transition-based parsing of the Chinese Treebank using a global discriminative model",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09)",
"volume": "",
"issue": "",
"pages": "162--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese Treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09), pages 162-171, Paris, France, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Syntactic processing using the generalized perceptron and beam search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "1",
"pages": "105--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105-151.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Transition-based dependency parsing with rich non-local features",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "188--193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188-193, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "A neural probabilistic structured-prediction model for transition-based dependency parsing",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1213--1222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Yue Zhang, Shujian Huang, and Jiajun Chen. 2015. A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1213-1222, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Fast and accurate shift-reduce constituent parsing",
"authors": [
{
"first": "Muhua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "434--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 434-443, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Long short-term memory over recursive structures",
"authors": [
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1604--1612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of the 32nd International Conference on Machine Learning, pages 1604-1612, Lille, France, July.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 1: An example workflow of our parsers for the sentence \"The stock price keeps falling\". We annotate every non-terminal span with its covered span range. Figure 1a shows constituent span classifiers making 0/1 decisions for all possible spans. Based on the local classification probabilities, we obtain an unlabeled binarized parse tree (Figure 1b) using binary CKY parsing algorithms. We then hierarchically generate labels for each span (Figure 1c) using encoder-decoder models. Figure 1d shows the final output parse tree after debinarization."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Neural network structures for span and rule models using BiLSTM encoders."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Sentence length vs. LF scores. Figure 4: Span length vs. LF scores."
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "Hyper-parameters for training.",
"content": "<table><tr><td>3 Experiments</td></tr><tr><td>3.1 Experimental Settings</td></tr><tr><td>Data. We perform experiments for both English and Chinese. Following standard conventions, our English data are obtained from the Wall Street Journal (WSJ) of the Penn Treebank (PTB) (Marcus et al.</td></tr></table>",
"num": null
},
"TABREF3": {
"type_str": "table",
"html": null,
"text": "Span representation methods.",
"content": "<table><tr><td>Model</td><td>SpanVec</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>BinarySpan</td><td>v[i, j]</td><td>92.16</td><td>92.19</td><td>92.17</td></tr><tr><td>BinarySpan</td><td>sr[i, j]</td><td>91.90</td><td>91.70</td><td>91.80</td></tr><tr><td>BiaffineRule</td><td>v[i, j]</td><td>91.79</td><td>91.67</td><td>91.73</td></tr><tr><td>BiaffineRule</td><td>sr[i, j]</td><td>92.49</td><td>92.23</td><td>92.36</td></tr></table>",
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "Main development results.",
"content": "<table><tr><td>Model</td><td colspan=\"3\">English</td><td colspan=\"3\">Chinese</td></tr><tr><td></td><td>LP</td><td>LR</td><td>LF</td><td>LP</td><td>LR</td><td>LF</td></tr><tr><td>BinarySpan</td><td>92.16</td><td>92.19</td><td>92.17</td><td>91.31</td><td>90.48</td><td>90.89</td></tr><tr><td>MultiSpan</td><td>92.47</td><td>92.41</td><td>92.44</td><td>91.69</td><td>90.91</td><td>91.30</td></tr><tr><td>LinearRule</td><td>92.03</td><td>92.03</td><td>92.03</td><td>91.03</td><td>89.19</td><td>90.10</td></tr><tr><td>BiaffineRule</td><td>92.49</td><td>92.23</td><td>92.36</td><td>91.31</td><td>91.28</td><td>91.29</td></tr></table>",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table/>",
"num": null
},
"TABREF8": {
"type_str": "table",
"html": null,
"text": "Results on the Chinese Treebank 5.1 test set. S denotes parsers using auto parsed trees. E, R and ST denote ensembling, reranking and self-training systems, respectively.",
"content": "<table><tr><td>Parser</td><td>LR</td><td>LP</td><td>LF</td><td>Parser</td><td>LR</td><td>LP</td><td>LF</td></tr><tr><td>Charniak and Johnson (2005) (R)</td><td>80.8</td><td>83.8</td><td>82.3</td><td>Petrov and Klein (2007)</td><td>81.9</td><td>84.8</td><td>83.3</td></tr><tr><td>Zhu et al. (2013) (S)</td><td>84.4</td><td>86.8</td><td>85.6</td><td>Zhang and Clark (2009)</td><td>78.6</td><td>78.0</td><td>78.3</td></tr><tr><td>Wang et al. (2015) (S)</td><td></td><td></td><td>86.6</td><td>Watanabe and Sumita (2015)</td><td></td><td></td><td>84.3</td></tr><tr><td>Huang and Harper (2009) (ST)</td><td></td><td></td><td>85.2</td><td>Dyer et al. (2016)</td><td></td><td></td><td>84.6</td></tr><tr><td>Dyer et al. (2016) (R)</td><td></td><td></td><td>86.9</td><td>BinarySpan</td><td>85.9</td><td>87.1</td><td>86.5</td></tr><tr><td>Liu and Zhang (2017b)</td><td>85.2</td><td>85.9</td><td>85.5</td><td>MultiSpan</td><td>86.6</td><td>88.0</td><td>87.3</td></tr><tr><td>Liu and Zhang (2017a)</td><td></td><td></td><td>86.1</td><td>BiaffineRule</td><td>87.1</td><td>87.5</td><td>87.3</td></tr></table>",
"num": null
},
"TABREF9": {
"type_str": "table",
"html": null,
"text": "Results on the PTB test set.",
"content": "<table/>",
"num": null
},
"TABREF10": {
"type_str": "table",
"html": null,
"text": "LF scores for major constituent labels.",
"content": "<table><tr><td>Model</td><td>NP</td><td>VP</td><td>S</td><td>PP</td><td>SBAR</td><td>ADVP</td><td>ADJP</td><td>WHNP</td><td>QP</td></tr><tr><td>BinarySpan</td><td>93.35</td><td>93.26</td><td>92.55</td><td>89.58</td><td>88.59</td><td>85.85</td><td>76.86</td><td>95.87</td><td>89.57</td></tr><tr><td>MultiSpan</td><td>93.61</td><td>93.41</td><td>92.76</td><td>89.96</td><td>89.16</td><td>86.39</td><td>78.21</td><td>95.98</td><td>89.51</td></tr><tr><td>BiaffineRule</td><td>93.53</td><td>93.46</td><td>92.78</td><td>89.30</td><td>89.56</td><td>85.89</td><td>77.47</td><td>96.66</td><td>90.31</td></tr></table>",
"num": null
},
"TABREF11": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>Model</td><td>UF</td><td colspan=\"2\">LF Speed(sents/s)</td></tr><tr><td colspan=\"3\">BinarySpan 92.16 96.79</td><td>22.12</td></tr><tr><td colspan=\"3\">MultiSpan 92.50 97.03</td><td>21.55</td></tr><tr><td colspan=\"3\">BiaffineRule 92.22 97.12</td><td>6.00</td></tr></table>",
"num": null
}
}
}
}