| { |
| "paper_id": "P13-1043", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:32:35.392871Z" |
| }, |
| "title": "Fast and Accurate Shift-Reduce Constituent Parsing", |
| "authors": [ |
| { |
| "first": "Muhua", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Natural Language Processing Lab", |
| "institution": "Northeastern University", |
| "location": { |
| "country": "China" |
| } |
| }, |
| "email": "zhumuhua@gmail.com" |
| }, |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Singapore University of Technology and Design", |
| "location": { |
| "country": "Singapore" |
| } |
| }, |
| "email": "zhang@sutd.edu.sg" |
| }, |
| { |
| "first": "Wenliang", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Soochow University", |
| "location": { |
| "country": "China" |
| } |
| }, |
| "email": "chenwenliang@gmail.com" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Soochow University", |
| "location": { |
| "country": "China" |
| } |
| }, |
| "email": "mzhang@i2r.a-star.edu.sg" |
| }, |
| { |
| "first": "Jingbo", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Natural Language Processing Lab", |
| "institution": "Northeastern University", |
| "location": { |
| "country": "China" |
| } |
| }, |
| "email": "zhujingbo@mail.neu.edu.cn" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Shift-reduce dependency parsers give comparable accuracies to their chartbased counterparts, yet the best shiftreduce constituent parsers still lag behind the state-of-the-art. One important reason is the existence of unary nodes in phrase structure trees, which leads to different numbers of shift-reduce actions between different outputs for the same input. This turns out to have a large empirical impact on the framework of global training and beam search. We propose a simple yet effective extension to the shift-reduce process, which eliminates size differences between action sequences in beam-search. Our parser gives comparable accuracies to the state-of-the-art chart parsers. With linear run-time complexity, our parser is over an order of magnitude faster than the fastest chart parser.", |
| "pdf_parse": { |
| "paper_id": "P13-1043", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Shift-reduce dependency parsers give comparable accuracies to their chartbased counterparts, yet the best shiftreduce constituent parsers still lag behind the state-of-the-art. One important reason is the existence of unary nodes in phrase structure trees, which leads to different numbers of shift-reduce actions between different outputs for the same input. This turns out to have a large empirical impact on the framework of global training and beam search. We propose a simple yet effective extension to the shift-reduce process, which eliminates size differences between action sequences in beam-search. Our parser gives comparable accuracies to the state-of-the-art chart parsers. With linear run-time complexity, our parser is over an order of magnitude faster than the fastest chart parser.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Transition-based parsers employ a set of shiftreduce actions and perform parsing using a sequence of state transitions. The pioneering models rely on a classifier to make local decisions, and search greedily for a transition sequence to build a parse tree. Greedy, classifier-based parsers have been developed for both dependency grammars (Yamada and Matsumoto, 2003; Nivre et al., 2006) and phrase-structure grammars (Sagae and Lavie, 2005) . With linear run-time complexity, they were commonly regarded as a faster but less accurate alternative to graph-based chart parsers (Collins, 1997; Charniak, 2000; McDonald et al., 2005) .", |
| "cite_spans": [ |
| { |
| "start": 339, |
| "end": 367, |
| "text": "(Yamada and Matsumoto, 2003;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 368, |
| "end": 387, |
| "text": "Nivre et al., 2006)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 418, |
| "end": 441, |
| "text": "(Sagae and Lavie, 2005)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 576, |
| "end": 591, |
| "text": "(Collins, 1997;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 592, |
| "end": 607, |
| "text": "Charniak, 2000;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 608, |
| "end": 630, |
| "text": "McDonald et al., 2005)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Various methods have been proposed to address the disadvantages of greedy local parsing, among which a framework of beam-search and global discriminative training have been shown effective for dependency parsing (Zhang and Clark, 2008; Huang and Sagae, 2010) . While beam-search reduces error propagation compared with greedy search, a discriminative model that is globally optimized for whole sequences of transition actions can avoid local score biases (Lafferty et al., 2001) . This framework preserves the most important advantage of greedy local parsers, including linear run-time complexity and the freedom to define arbitrary features. With the use of rich non-local features, transition-based dependency parsers achieve state-of-the-art accuracies that are comparable to the best-graph-based parsers (Zhang and Nivre, 2011; Bohnet and Nivre, 2012) . In addition, processing tens of sentences per second (Zhang and Nivre, 2011) , these transition-based parsers can be a favorable choice for dependency parsing.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 235, |
| "text": "(Zhang and Clark, 2008;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 236, |
| "end": 258, |
| "text": "Huang and Sagae, 2010)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 455, |
| "end": 478, |
| "text": "(Lafferty et al., 2001)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 808, |
| "end": 831, |
| "text": "(Zhang and Nivre, 2011;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 832, |
| "end": 855, |
| "text": "Bohnet and Nivre, 2012)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 911, |
| "end": 934, |
| "text": "(Zhang and Nivre, 2011)", |
| "ref_id": "BIBREF43" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The above global-learning and beam-search framework can be applied to transition-based phrase-structure (constituent) parsing also (Zhang and Clark, 2009) , maintaining all the aforementioned benefits. However, the effects were not as significant as for transition-based dependency parsing. The best reported accuracies of transition-based constituent parsers still lag behind the state-of-the-art (Sagae and Lavie, 2006; Zhang and Clark, 2009) . One difference between phrasestructure parsing and dependency parsing is that for the former, parse trees with different numbers of unary rules require different numbers of actions to build. Hence the scoring model needs to disambiguate between transitions sequences with different sizes. For the same sentence, the largest output can take twice as many as actions to build as the smallest one. This turns out to have a significant empirical impact on parsing with beam-search.", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 154, |
| "text": "(Zhang and Clark, 2009)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 398, |
| "end": 421, |
| "text": "(Sagae and Lavie, 2006;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 422, |
| "end": 444, |
| "text": "Zhang and Clark, 2009)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We propose an extension to the shift-reduce process to address this problem, which gives significant improvements to the parsing accuracies. Our method is conceptually simple, requiring only one additional transition action to eliminate size differences between different candidate outputs. On standard evaluations using both the Penn Treebank and the Penn Chinese Treebank, our parser gave higher accuracies than the Berkeley parser (Petrov and Klein, 2007) , a state-of-the-art chart parser. In addition, our parser runs with over 89 sentences per second, which is 14 times faster than the Berkeley parser, and is the fastest that we are aware of for phrase-structure parsing. An open source release of our parser (version 0.6) is freely available on the Web. 1 In addition to the above contributions, we apply a variety of semi-supervised learning techniques to our transition-based parser. These techniques have been shown useful to improve chart-based parsing Chen et al., 2012) , but little work has been done for transition-based parsers. We therefore fill a gap in the literature by reporting empirical results using these methods. Experimental results show that semi-supervised methods give a further improvement of 0.9% in F-score on the English data and 2.4% on the Chinese data. Our Chinese results are the best that we are aware of on the standard CTB data.", |
| "cite_spans": [ |
| { |
| "start": 434, |
| "end": 458, |
| "text": "(Petrov and Klein, 2007)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 762, |
| "end": 763, |
| "text": "1", |
| "ref_id": null |
| }, |
| { |
| "start": 965, |
| "end": 983, |
| "text": "Chen et al., 2012)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We adopt the parser of Zhang and Clark (2009) for our baseline, which is based on the shift-reduce process of Sagae and Lavie (2005) , and employs global perceptron training and beam search.", |
| "cite_spans": [ |
| { |
| "start": 23, |
| "end": 45, |
| "text": "Zhang and Clark (2009)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 110, |
| "end": 132, |
| "text": "Sagae and Lavie (2005)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline parser", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Shift-reduce parsing is based on a left-to-right scan of the input sentence. At each step, a transition action is applied to consume an input word or construct a new phrase-structure. A stack is used to maintain partially constructed phrasestructures, while the input words are stored in a buffer. The set of transition actions are", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 SHIFT: pop the front word from the buffer, and push it onto the stack.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "1 http://sourceforge.net/projects/zpar/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Axioms [\u03c6, 0, false,0] Goal [S, n, true, C]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Inference Rules:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "[S, i, false, c] SHIFT [S|w, i + 1, false, c + cs] [S|s1s0, i, false, c] REDUCE-L/R-X [S|X, i, false, c + cr] [S|s0, i, false, c] UNARY-X [S|X, i, false, c + cu] [S, n, false, c] FINISH [S, n, true, c + c f ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Figure 1: Deduction system of the baseline shiftreduce parsing process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 REDUCE-L/R-X: pop the top two constituents off the stack, combine them into a new constituent with label X, and push the new constituent onto the stack.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 UNARY-X: pop the top constituent off the stack, raise it to a new constituent with label X, and push the new constituent onto the stack.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 FINISH: pop the root node off the stack and ends parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The deduction system for the process is shown in Figure 1 , where the item is formed as stack, buffer front index, completion mark, score , and c s , c r , and c u represent the incremental score of the SHIFT, REDUCE, and UNARY parsing steps, respectively; these scores are calculated according to the context features of the parser state item. n is the number of words in the input.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 49, |
| "end": 57, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Vanilla Shift-Reduce", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "For a given input sentence, the initial state has an empty stack and a buffer that contains all the input words. An agenda is used to keep the k best state items at each step. At initialization, the agenda contains only the initial state. At each step, every state item in the agenda is popped and expanded by applying a valid transition action, and the top k from the newly constructed state items are put back onto the agenda. The process repeats until the agenda is empty, and the best completed state item (recorded as candidate output) is taken for Description Templates unigrams s0tc, s0wc, s1tc, s1wc, s2tc s2wc, s3tc, s3wc, q0wt, q1wt q2wt, q3wt, s0lwc, s0rwc s0uwc, s1lwc, s1rwc, s1uwc bigrams s0ws1w, s0ws1c, s0cs1w, s0cs1c, s0wq0w, s0wq0t, s0cq0w, s0cq0t, q0wq1w, q0wq1t, q0tq1w, q0tq1t, s1wq0w, s1wq0t, s1cq0w, s1cq0t trigrams s0cs1cs2c, s0ws1cs2c, s0cs1wq0t s0cs1cs2w, s0cs1cq0t, s0ws1cq0t s0cs1wq0t, s0cs1cq0w Table 1 : A summary of baseline feature templates, where s i represents the i th item on the stack S and q i denotes the i th item in the queue Q. w refers to the head lexicon, t refers to the head POS, and c refers to the constituent label.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 924, |
| "end": 931, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Global Discriminative Training and Beam-Search", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "the output. The score of a state item is the total score of the transition actions that have been applied to build the item:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Global Discriminative Training and Beam-Search", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "C(\u03b1) = N i=1 \u03a6(a i ) \u2022 \u03b8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Global Discriminative Training and Beam-Search", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Here \u03a6(a i ) represents the feature vector for the i th action a i in state item \u03b1. It is computed by applying the feature templates in Table 1 to the context of \u03b1. N is the total number of actions in \u03b1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 136, |
| "end": 143, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Global Discriminative Training and Beam-Search", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The model parameter \u03b8 is trained with the averaged perceptron algorithm, applied to state items (sequence of actions) globally. We apply the early update strategy (Collins and Roark, 2004) , stopping parsing for parameter updates when the goldstandard state item falls off the agenda.", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 188, |
| "text": "(Collins and Roark, 2004)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Global Discriminative Training and Beam-Search", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Our baseline features are adopted from Zhang and Clark (2009) , and are shown in Table 1 Here s i represents the i th item on the top of the stack S and q i denotes the i th item in the front end of the queue Q. The symbol w denotes the lexical head of an item; the symbol c denotes the constituent label of an item; the symbol t is the POS of a lexical head. These features are adapted from Zhang and Clark (2009) . We remove Chinese specific features and make the baseline parser languageindependent.", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 61, |
| "text": "Zhang and Clark (2009)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 392, |
| "end": 414, |
| "text": "Zhang and Clark (2009)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 81, |
| "end": 88, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Baseline Features", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Unlike dependency parsing, constituent parse trees for the same sentence can have different numbers of nodes, mainly due to the existence of unary nodes. As a result, completed state items for the same sentence can have different numbers of unary actions. Take the phrase \"address issues\" for example, two possible parses are shown in Figure 2 (a) and (b), respectively. The first parse corresponds to the action sequence [SHIFT, SHIFT, REDUCE-R-NP, FINISH] , while the second parse corresponds to the action sequence [SHIFT, SHIFT, UNARY-NP, REDUCE-L-VP, FINISH] , which consists of one more action than the first case. In practice, variances between state items can be much larger than the chosen example. In the extreme case where a state item does not contain any unary action, the number of actions is 2n, where n is the number of words in the sentence. On the other hand, if the maximum number of consequent unary actions is 2 (Sagae and Lavie, 2005; Zhang and Clark, 2009) , then the maximum number of actions a state item can have is 4n.", |
| "cite_spans": [ |
| { |
| "start": 422, |
| "end": 457, |
| "text": "[SHIFT, SHIFT, REDUCE-R-NP, FINISH]", |
| "ref_id": null |
| }, |
| { |
| "start": 518, |
| "end": 563, |
| "text": "[SHIFT, SHIFT, UNARY-NP, REDUCE-L-VP, FINISH]", |
| "ref_id": null |
| }, |
| { |
| "start": 933, |
| "end": 956, |
| "text": "(Sagae and Lavie, 2005;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 957, |
| "end": 979, |
| "text": "Zhang and Clark, 2009)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 335, |
| "end": 343, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Improved hypotheses comparison", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The significant variance in the number of actions N can have an impact on the linear separability of state items, for which the feature vectors are N i=1 \u03a6 (a i ). This turns out to have a significant empirical influence on perceptron training with early-update, where the training of the model interacts with search (Daume III, 2006) .", |
| "cite_spans": [ |
| { |
| "start": 317, |
| "end": 334, |
| "text": "(Daume III, 2006)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improved hypotheses comparison", |
| "sec_num": "3" |
| }, |
| { |
| "text": "One way of improving the comparability of state items is to reduce the differences in their sizes, and we use a padding method to achieve this. The idea is to extend the set of actions by adding an IDLE action, so that completed state items can be further expanded using the IDLE action. The action does not change the state itself, but simply adds to the number of actions in the sequence. A feature vector is extracted for the IDLE action according to the final state context, in the same way as other actions. Using the IDLE action, the transition sequence for the two parses in Figure 2 Inference Rules:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 582, |
| "end": 590, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Improved hypotheses comparison", |
| "sec_num": "3" |
| }, |
| { |
| "text": "[S, i, false, k,c] SHIFT [S|w, i + 1, false, k + 1, c + cs] [S|s1s0, i, false, k, c] REDUCE-L/R-X [S|X, i, false, k + 1, c + cr] [S|s0, i, false, k, c] UNARY-X [S|X, i, false, k + 1, c + cu] [S, n, false, k, c] FINISH [S, n, true, k + 1, c + c f ] [S, n, true, k, c] IDLE [S, n, true, k + 1, c + ci]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improved hypotheses comparison", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Figure 3: Deductive system of the extended transition system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Improved hypotheses comparison", |
| "sec_num": "3" |
| }, |
| { |
| "text": "corresponding feature vectors have about the same sizes, and are more linearly separable. Note that there can be more than one action that are padded to a sequence of actions, and the number of IDLE actions depends on the size difference between the current action sequence and the largest action sequence without IDLE actions. Given this extension, the deduction system is shown in Figure 3 . We add the number of actions k to an item. The initial item (Axioms) has k = 0, while the goal item has 2n \u2264 k \u2264 4n. Given this process, beam-search decoding can be made simpler than that of Zhang and Clark (2009) . While they used a candidate output to record the best completed state item, and finish decoding when the agenda contains no more items, we can simply finish decoding when all items in the agenda are completed, and output the best state item in the agenda. With this new transition process, we experimented with several extended features,and found that the templates in Table 2 are useful to improve the accuracies further. Here s i ll denotes the left child of s i 's left child. Other notations can be explained in a similar way.", |
| "cite_spans": [ |
| { |
| "start": 585, |
| "end": 607, |
| "text": "Zhang and Clark (2009)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 383, |
| "end": 391, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 979, |
| "end": 986, |
| "text": "Table 2", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Improved hypotheses comparison", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This section discusses how to extract information from unlabeled data or auto-parsed data to further improve shift-reduce parsing accuracies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semi-supervised Parsing with Large Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We consider three types of information, including paradigmatic relations, dependency relations, and structural relations. These relations are captured by word clustering, lexical dependencies, and a dependency language model, respectively. Based on the information, we propose a set of novel features specifically designed for shift-reduce constituent parsing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semi-supervised Parsing with Large Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "s 0 llwc, s 0 lrwc, s 0 luwc s 0 rlwc, s 0 rrwc, s 0 ruwc s 0 ulwc, s 0 urwc, s 0 uuwc s 1 llwc, s 1 lrwc, s 1 luwc s 1 rlwc, s 1 rrwc, s 1 ruwc", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semi-supervised Parsing with Large Data", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Word clusters are regarded as lexical intermediaries for dependency parsing and POS tagging (Sun and Uszkoreit, 2012) . We employ the Brown clustering algorithm (Liang, 2005) on unannotated data (word segmentation is performed if necessary). In the initial state of clustering, each word in the input corpus is regarded as a cluster, then the algorithm repeatedly merges pairs of clusters that cause the least decrease in the likelihood of the input corpus. The clustering results are a binary tree with words appearing as leaves. Each cluster is represented as a bit-string from the root to the tree node that represents the cluster. We define a function CLU(w) to return the cluster ID (a bit string) of an input word w.", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 117, |
| "text": "(Sun and Uszkoreit, 2012)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 161, |
| "end": 174, |
| "text": "(Liang, 2005)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Paradigmatic Relations: Word Clustering", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Lexical dependencies represent linguistic relations between words: whether a word modifies another word. The idea of exploiting lexical dependency information from auto-parsed data has been explored before for dependency parsing (Chen et al., 2009) and constituent parsing (Zhu et al., 2012) . To extract lexical dependencies, we first run the baseline parser on unlabeled data. To simplify the extraction process, we can convert auto-parsed constituency trees into dependency trees by using Penn2Malt. 2 From the dependency trees, we extract bigram lexical dependencies w 1 , w 2 , L/R where the symbol L (R) means that w 1 (w 2 ) is the head of w 2 (w 1 ). We also extract trigram lexical dependencies w 1 , w 2 , w 3 , L/R , where L means that w 1 is the head of w 2 and w 3 , meanwhile w 2 and w 3 are required to be siblings.", |
| "cite_spans": [ |
| { |
| "start": 229, |
| "end": 248, |
| "text": "(Chen et al., 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 273, |
| "end": 291, |
| "text": "(Zhu et al., 2012)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Relations: Lexical Dependencies", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Following the strategy of Chen et al. (2009) , we assign categories to bigram and trigram items separately according to their frequency counts. Specifically, top-10% most frequent items are assigned to the category of High Frequency (HF); otherwise if an item is among top 20%, we assign it to the category of Middle Frequency (MF); otherwise the category of Low Frequency (LF). Hereafter, we refer to the bigram and trigram lexical dependency lists as BLD and TLD, respectively.", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 44, |
| "text": "Chen et al. (2009)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency Relations: Lexical Dependencies", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The dependency language model is proposed by Shen et al. (2008) and is used as additional information for graph-based dependency parsing in Chen et al. (2012) . Formally, given a dependency tree y of an input sentence x, we can denote by H(y) the set of words that have at least one dependent. For each x h \u2208 H(y), we have a corresponding dependency structure", |
| "cite_spans": [ |
| { |
| "start": 45, |
| "end": 63, |
| "text": "Shen et al. (2008)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 140, |
| "end": 158, |
| "text": "Chen et al. (2012)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "D h = (x Lk , . . . x L1 , x h , x R1 , . . . , x Rm ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The probability P (D h ) is defined to be", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "P (D h ) = P L (D h ) \u00d7 P R (D h )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where P L (D h ) can be in turn defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "P L (D h ) \u2248 P (x L1 |x h ) \u00d7P (x L2 |x L1 , x h ) \u00d7 . . . \u00d7P (x Lk |x Lk\u22121 , . . . , x Lk\u2212N +1 , x h ) P R (D h )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "can be defined in a similar way. We build dependency language models on autoparsed data. Again, we convert constituency trees into dependency trees for the purpose of simplicity. From the dependency trees, we build a bigram and a trigram language model, which are denoted by BLM and TLM, respectively. The following are the templates of the records of the dependency language models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "(1) x Li , x h , P (x Li |x h ) (2) x Ri , x h , P (x Ri |x h ) (3) x Li , x Li\u22121 , x h , P (x Li |x Li\u22121 , x h ) (4) x Ri , x Ri\u22121 , x h , P (x Ri |x Ri\u22121 , x h )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Here the templates (1) and (2) belong to BLM and the templates (3) and 4 use the dependency language models, we employ a map function \u03a6(r) to assign a category to each record r according to its probability, as in Chen et al. (2012) . The following is the map function.", |
| "cite_spans": [ |
| { |
| "start": 213, |
| "end": 231, |
| "text": "Chen et al. (2012)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u03a6(r) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "HP if P (r) \u2208 top\u221210% M P else if P (r) \u2208 top\u221230% LP otherwise", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structural Relations: Dependency Language Model", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We design a set of features based on the information extracted from auto-parsed data or unannotated data. The features are summarized in Table 3 . Here CLU returns a cluster ID for a word.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 137, |
| "end": 144, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Semi-supervised Features", |
| "sec_num": "4.4" |
| }, |
| {
| "text": "The functions BLD_{l/r}(\u2022), TLD_{l/r}(\u2022), BLM_{l/r}(\u2022), and TLM_{l/r}(\u2022) check whether a given word combination can be found in the corresponding lists. For example, BLD_l(s1w, s0w) returns a category tag (HF, MF, or LF) if (s1w, s0w, L) exists in the list BLD; otherwise it returns NONE.",
| "cite_spans": [],
| "ref_spans": [],
| "eq_spans": [],
| "section": "Semi-supervised Features",
| "sec_num": "4.4"
| },
| { |
| "text": "Labeled English data employed in this paper were derived from the Wall Street Journal (WSJ) corpus of the Penn Treebank (Marcus et al., 1993) . We used sections 2-21 as labeled training data, section 24 for system development, and section 23 for final performance evaluation. For labeled Chinese data, we used the version 5.1 of the Penn Chinese Treebank (CTB) (Xue et al., 2005) . Articles 001-270 and 440-1151 were used for training, articles 301-325 were used as development data, and articles 271-300 were used for evaluation. For both English and Chinese data, we used tenfold jackknifing (Collins, 2000) to automatically assign POS tags to the training data. We found that this simple technique could achieve an improvement of 0.4% on English and an improvement of 2.0% on Chinese. For English POS tagging, we adopted SVMTool, 3 and for Chinese POS tagging we employed the Stanford POS tagger. 4 We took the WSJ articles from the TIPSTER corpus (LDC93T3A) as unlabeled English data. In addition, we removed from the unlabeled English data the sentences that appear in the WSJ corpus of the Penn Treebank. For unlabeled Chinese data, we used Chinese Gigaword (LDC2003T09), on which we conducted Chinese word segmentation by using a CRF-based segmenter. Table 4 summarizes data statistics on sentence and word numbers of the data sets listed above.", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 141, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 361, |
| "end": 379, |
| "text": "(Xue et al., 2005)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 594, |
| "end": 609, |
| "text": "(Collins, 2000)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 900, |
| "end": 901, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1258, |
| "end": 1265, |
| "text": "Table 4", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Set-up", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We used EVALB to evaluate parser performance, including labeled precision (LP), labeled recall (LR), and bracketing F1. 5 For significance tests, we employed the randomized permutation-based tool provided by Daniel Bikel. 6 In both training and decoding, we set the beam size to 16, which achieves a good tradeoff between efficiency and accuracy. The optimal iteration number of perceptron learning is determined on the development sets. For word clustering, we set the cluster number to 50 for both the English and Chinese experiments. Table 5 reports the results of the extended parser (baseline + padding + supervised features) on the English and Chinese development sets. We integrated the padding method into the baseline parser, based on which we further incorporated the supervised features in Table 2 . From the results we find that the padding method improves the parser accuracies by 0.5% and 0.4% on English and Chinese, respectively. Incorporating the supervised features in Table 2 gives further improvements of 0.2% on English and 0.1% on Chinese. Based on the extended parser, we experimented with different types of semi-supervised features by adding the features incrementally. The results are shown in Table 6 . By comparing the results in Table 5 and the results in Table 6 we can see that the semi-supervised features achieve an overall improvement of 1.0% on the English data and an improvement of 1.5% on the Chinese data.",
| "cite_spans": [
| {
| "start": 120,
| "end": 121,
| "text": "5",
| "ref_id": null
| },
| {
| "start": 208,
| "end": 223,
| "text": "Daniel Bikel. 6",
| "ref_id": null
| }
| ],
| "ref_spans": [
| {
| "start": 537,
| "end": 544,
| "text": "Table 5",
| "ref_id": "TABREF4"
| },
| {
| "start": 801,
| "end": 808,
| "text": "Table 2",
| "ref_id": "TABREF0"
| },
| {
| "start": 987,
| "end": 994,
| "text": "Table 2",
| "ref_id": "TABREF0"
| },
| {
| "start": 1220,
| "end": 1227,
| "text": "Table 6",
| "ref_id": "TABREF6"
| },
| {
| "start": 1285,
| "end": 1292,
| "text": "Table 6",
| "ref_id": "TABREF6"
| }
| ],
| "eq_spans": [],
| "section": "Set-up",
| "sec_num": "5.1"
| },
| {
| "text": "Table 8 : Comparison of our parsers and related work on the test set of CTB5.1. * Huang (2009) adapted the parsers to Chinese parsing on CTB5.1. \u2020 We ran the parser on CTB5.1 to get the results.",
| "cite_spans": [
| {
| "start": 80,
| "end": 94,
| "text": "* Huang (2009)",
| "ref_id": "BIBREF25"
| }
| ],
| "ref_spans": [
| {
| "start": 0,
| "end": 7,
| "text": "Table 8",
| "ref_id": null
| }
| ],
| "eq_spans": [],
| "section": "Results on Development Sets",
| "sec_num": "5.2"
| },
| { |
| "text": "Here we report the final results on the English and Chinese test sets. We compared the final results with a large body of related work. We grouped the parsers into three categories: single parsers (SI), discriminative reranking parsers (RE), and semi-supervised parsers (SE). Table 7 shows the comparative results on the English test set and Table 8 reports the comparison on the Chinese test set. From the results we can see that our extended parser (baseline + padding + supervised features) outperforms the Berkeley parser by 0.3% on English, and is comparable with the Berkeley parser on Chinese (0.1% lower). Here +padding means the padding technique and the features in Table 2 . After integrating semi-supervised features, the parsing accuracy on English is improved to 91.3%. We note that the performance is on the same level as the performance of self-trained parsers, except for McClosky et al. (2006) , which is based on the combination of reranking and self-training. On Chinese, the final parsing accuracy is 85.6%. To our knowledge, this is by far the best reported performance on this data set. The padding technique, supervised features, and semi-supervised features achieve an overall improvement of 1.4% over the baseline on English, which is significant at the level of p < 10^{-5}. The overall improvement on Chinese is 3.0%, which is also significant at the level of p < 10^{-5}. Parser #Sent/Second Ratnaparkhi (1997) Unk Collins (1999) 3.5 Charniak (2000) 5.7 Sagae & Lavie (2005) * 3.7 \u2021 Sagae & Lavie (2006) \u2020 2.2 \u2021 Petrov & Klein (2007) 6.2 Unk",
| "cite_spans": [
| {
| "start": 889,
| "end": 911,
| "text": "McClosky et al. (2006)",
| "ref_id": "BIBREF30"
| },
| {
| "start": 1421,
| "end": 1439,
| "text": "Ratnaparkhi (1997)",
| "ref_id": "BIBREF34"
| },
| {
| "start": 1444,
| "end": 1458,
| "text": "Collins (1999)",
| "ref_id": "BIBREF18"
| },
| {
| "start": 1463,
| "end": 1478,
| "text": "Charniak (2000)",
| "ref_id": "BIBREF13"
| },
| {
| "start": 1483,
| "end": 1505,
| "text": "Sagae & Lavie (2005) *",
| "ref_id": "BIBREF35"
| },
| {
| "start": 1539,
| "end": 1562,
| "text": "\u2021 Petrov & Klein (2007)",
| "ref_id": "BIBREF33"
| }
| ],
| "ref_spans": [
| {
| "start": 276,
| "end": 283,
| "text": "Table 7",
| "ref_id": "TABREF8"
| },
| {
| "start": 342,
| "end": 349,
| "text": "Table 8",
| "ref_id": null
| },
| {
| "start": 676,
| "end": 683,
| "text": "Table 2",
| "ref_id": "TABREF0"
| }
| ],
| "eq_spans": [],
| "section": "Final Results",
| "sec_num": "5.3"
| }, |
| { |
| "text": "We also compared the running times of our parsers with the related single parsers. We ran timing tests on an Intel 2.3GHz processor with 8GB memory. The comparison is shown in Table 9 . From the table, we can see that incorporating semi-supervised features decreases parsing speed, but the semi-supervised parser still has the advantage of efficiency over other parsers. Specifically, the semi-supervised parser is 7 times faster than the Berkeley parser. Note that Sagae & Lavie (2005) and Sagae & Lavie (2006) are also shift-reduce parsers, and their running times were evaluated on different hardware. In practice, the running times of the shift-reduce parsers should be much shorter than the reported times in the table.",
| "cite_spans": [
| {
| "start": 466,
| "end": 486,
| "text": "Sagae & Lavie (2005)",
| "ref_id": "BIBREF35"
| },
| {
| "start": 491,
| "end": 511,
| "text": "Sagae & Lavie (2006)",
| "ref_id": "BIBREF36"
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 176, |
| "end": 183, |
| "text": "Table 9", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison of Running Time", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We conducted error analysis for the three systems: the baseline parser, the extended parser with the padding technique, and the semi-supervised parser, focusing on the English test set. The analysis was performed in four dimensions: parsing accuracies on different phrase types, on constituents of different span lengths, on different sentence lengths, and on sentences with different numbers of unknown words. Table 10 shows the parsing accuracies of the baseline, extended parser, and semi-supervised parser on different phrase types. Here we only consider the nine most frequent phrase types in the English test set. In the table, the phrase types are ordered from left to right in the descending order of their frequencies. We also show the improvements of the semi-supervised parser over the baseline parser (the last row in the table). As the results show, the extended parser achieves improvements on most of the phrase types with two exceptions: Prepositional Phrase (PP) and Quantifier Phrase (QP). Semi-supervised features further improve parsing accuracies over the extended parser (QP is an exception). From the last row, we can see that improvements of the semi-supervised parser over the baseline on VP, S, SBAR, ADVP, and ADJP are above the average improvement (1.4%). Figure 5 shows a comparison of the three parsers on spans of different lengths. Here we consider span lengths up to 8. As the results show, both the padding extension and semi-supervised features are more helpful on relatively large spans: the performance gaps between the three parsers are enlarged with increasing span lengths. Figure 6 shows a comparison of parsing accuracies of the three parsers on sentences of different lengths. Each number on the horizontal axis represents the sentences whose lengths are between the number and its previous number. For example, the number 30 refers to the sentences whose lengths are between 20 and 30. From the results we can see that semi-supervised features improve parsing accuracy on both short and long sentences. The points at 70 are exceptions. In fact, sentences with lengths between 60 and 70 have only 8 instances, and the statistics on such a small number of sentences are not reliable. Figure 4 shows a comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parser on sentences with different numbers of unknown words. As the results show, the padding method is not very helpful on sentences with large numbers of unknown words, while semi-supervised features help significantly on this aspect. This conforms to the intuition that semi-supervised methods reduce data sparseness and improve the performance on unknown words.",
| "cite_spans": [],
| "ref_spans": [
| {
| "start": 411,
| "end": 419,
| "text": "Table 10",
| "ref_id": null
| },
| {
| "start": 1284,
| "end": 1292,
| "text": "Figure 5",
| "ref_id": "FIGREF3"
| },
| {
| "start": 1614,
| "end": 1622,
| "text": "Figure 6",
| "ref_id": "FIGREF5"
| },
| {
| "start": 2226,
| "end": 2234,
| "text": "Figure 4",
| "ref_id": null
| }
| ], |
| "eq_spans": [], |
| "section": "Error Analysis", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "In this paper, we addressed the problem of different action-sequence lengths for shift-reduce phrase-structure parsing, and designed a set of novel non-local features to further improve parsing. The resulting supervised parser outperforms the Berkeley parser, a state-of-the-art chart parser, in both accuracy and speed. In addition, we incorporated a set of semi-supervised features. Table 10 : Comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parsers on different phrase types. Figure 4 : Comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parser on sentences with different numbers of unknown words.",
| "cite_spans": [],
| "ref_spans": [
| {
| "start": 385,
| "end": 393,
| "text": "Table 10",
| "ref_id": null
| },
| {
| "start": 518,
| "end": 526,
| "text": "Figure 4",
| "ref_id": null
| }
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The final parser reaches an accuracy of 91.3% on English and 85.6% on Chinese; the latter is by far the best reported accuracy on the CTB data.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "http://w3.msi.vxu.se/\u223cnivre/research/Penn2Malt.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.lsi.upc.edu/\u223cnlp/SVMTool/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the anonymous reviewers for their valuable comments. Yue Zhang and Muhua Zhu were supported partially by SRG-ISTD-2012-038 from Singapore University of Technology and Design. Muhua Zhu and Jingbo Zhu were funded in part by the National Science Foundation of China (61073140; 61272376), Specialized Research Fund for the Doctoral Program of Higher Education (20100042110031), and the Fundamental Research Funds for the Central Universities (N100204002). Wenliang Chen was funded partially by the National Science Foundation of China (61203314).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Word Cluster Features CLU(s1w) CLU(s0w) CLU(q0w) CLU(s1w)s1t CLU(s0w)s0t CLU(q0w)", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Word Cluster Features CLU(s1w) CLU(s0w) CLU(q0w) CLU(s1w)s1t CLU(s0w)s0t CLU(q0w)q0w", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Lexical Dependency Features BLD l (s1w, s0w) BLD l (s1w, s0w)\u2022s1t\u2022s0t BLDr(s1w, s0w)", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lexical Dependency Features BLD l (s1w, s0w) BLD l (s1w, s0w)\u2022s1t\u2022s0t BLDr(s1w, s0w)", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "BLDr(s1w, s0w)\u2022s1t\u2022s0t BLD l (s1w, q0w)\u2022s1t\u2022q0t BLD l (s1w, q0w) BLDr(s1w, q0w) BLDr(s1w, q0w)\u2022s1t\u2022q0t BLD l (s0w, q0w) BLD l (s0w, q0w)\u2022s0t\u2022q0t BLDr(s0w, q0w)\u2022s0t\u2022q0t", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "BLDr(s1w, s0w)\u2022s1t\u2022s0t BLD l (s1w, q0w)\u2022s1t\u2022q0t BLD l (s1w, q0w) BLDr(s1w, q0w) BLDr(s1w, q0w)\u2022s1t\u2022q0t BLD l (s0w, q0w) BLD l (s0w, q0w)\u2022s0t\u2022q0t BLDr(s0w, q0w)\u2022s0t\u2022q0t", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "BLDr(s0w, q0w) TLD l (s1w, s1rdw, s0w) TLD l (s1w, s1rdw, s0w)\u2022s1t\u2022s0t", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "BLDr(s0w, q0w) TLD l (s1w, s1rdw, s0w) TLD l (s1w, s1rdw, s0w)\u2022s1t\u2022s0t", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "TLDr(s1w, s0ldw, s0w)", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "TLDr(s1w, s0ldw, s0w)", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "TLDr(s0w, N ON E, q0w) TLDr(s0w, N ON E, q0w)\u2022s0t\u2022q0t Dependency Language Model Features BLM l (s1w, s0w) BLM l (s1w, s0w)\u2022s1t\u2022s0t BLMr(s1w", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "TLDr(s0w, N ON E, q0w) TLDr(s0w, N ON E, q0w)\u2022s0t\u2022q0t Dependency Language Model Features BLM l (s1w, s0w) BLM l (s1w, s0w)\u2022s1t\u2022s0t BLMr(s1w, s0w)", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "BLMr(s1w, s0w)\u2022s1t\u2022s0t BLM l (s0w, q0w) BLM l (s0w, q0w)\u2022s0t\u2022q0t", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "BLMr(s1w, s0w)\u2022s1t\u2022s0t BLM l (s0w, q0w) BLM l (s0w, q0w)\u2022s0t\u2022q0t", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "BLMr(s0w, q0w)\u2022s0t\u2022q0t BLMr(s0w, q0w) TLM l (s1w, s1rdw, s0w) TLM l (s1w, s1rdw, s0w)\u2022s1t\u2022s0t TLMr(s1w, s0ldw, s0w) TLMr(s1w, s0ldw, s0w)\u2022s1t\u2022s0t", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "BLMr(s0w, q0w)\u2022s0t\u2022q0t BLMr(s0w, q0w) TLM l (s1w, s1rdw, s0w) TLM l (s1w, s1rdw, s0w)\u2022s1t\u2022s0t TLMr(s1w, s0ldw, s0w) TLMr(s1w, s0ldw, s0w)\u2022s1t\u2022s0t", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "On the parameter space of generative lexicalized statistical parsing models", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Daniel", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bikel", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel M. Bikel. 2004. On the parameter space of generative lexicalized statistical parsing models. Ph.D. thesis, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "12--14", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernd Bohnet and Joakim Nivre. 2012. A transition- based system for joint part-of-speech tagging and la- beled non-projective dependency parsing. In Pro- ceedings of EMNLP, pages 12-14, Jeju Island, Ko- rea.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Tag, dynamic programming, and the perceptron for efficient, feature-rich parsing", |
| "authors": [ |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "9--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xavier Carreras, Michael Collins, and Terry Koo. 2008. Tag, dynamic programming, and the percep- tron for efficient, feature-rich parsing. In Proceed- ings of CoNLL, pages 9-16, Manchester, England.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Coarseto-fine n-best parsing and maxent discriminative reranking", |
| "authors": [ |
| { |
| "first": "Eugune", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "173--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugune Charniak and Mark Johnson. 2005. Coarse- to-fine n-best parsing and maxent discriminative reranking. In Proceedings of ACL, pages 173-180.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A maximum-entropyinspired parser", |
| "authors": [ |
| { |
| "first": "Eugune", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "132--139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugune Charniak. 2000. A maximum-entropy- inspired parser. In Proceedings of NAACL, pages 132-139, Seattle, Washington, USA.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Improving dependency parsing with subtrees from auto-parsed data", |
| "authors": [ |
| { |
| "first": "Wenliang", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Junichi", |
| "middle": [], |
| "last": "Kazama", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiyotaka", |
| "middle": [], |
| "last": "Uchimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Kentaro", |
| "middle": [], |
| "last": "Torisawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "570--579", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenliang Chen, Junichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Improving depen- dency parsing with subtrees from auto-parsed data. In Proceedings of EMNLP, pages 570-579, Singa- pore.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Utilizing dependency language models for graphbased dependency", |
| "authors": [ |
| { |
| "first": "Wenliang", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Haizhou", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "213--222", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wenliang Chen, Min Zhang, and Haizhou Li. 2012. Utilizing dependency language models for graph- based dependency. In Proceedings of ACL, pages 213-222, Jeju, Republic of Korea.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Incremental parsing with the perceptron algorithm", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Roark", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceed- ings of ACL, Stroudsburg, PA, USA.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Three generative, lexicalised models for statistical parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of ACL, Madrid, Spain.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Head-driven statistical models for natural language parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, Univer- sity of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Discriminative reranking for natural language processing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "175--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2000. Discriminative reranking for natural language processing. In Proceedings of ICML, pages 175-182, Stanford, CA, USA.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Practical Structured Learning for Natural Language Processing", |
| "authors": [ |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daume", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hal Daume III. 2006. Practical Structured Learn- ing for Natural Language Processing. Ph.D. thesis, USC.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Selftraining PCFG grammars with latent annotations across languages", |
| "authors": [ |
| { |
| "first": "Zhongqiang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Harper", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "832--841", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhongqiang Huang and Mary Harper. 2009. Self- training PCFG grammars with latent annotations across languages. In Proceedings of EMNLP, pages 832-841, Singapore.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Dynamic programming for linear-time incremental parsing", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1077--1086", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang and Kenji Sagae. 2010. Dynamic pro- gramming for linear-time incremental parsing. In Proceedings of ACL, pages 1077-1086, Uppsala, Sweden.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Self-training with products of latent variable grammars", |
| "authors": [ |
| { |
| "first": "Zhongqiang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [], |
| "last": "Harper", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "12--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhongqiang Huang, Mary Harper, and Slav Petrov. 2010. Self-training with products of latent variable grammars. In Proceedings of EMNLP, pages 12-22, Massachusetts, USA.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Forest reranking: discriminative parsing with non-local features", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "586--594", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang. 2008. Forest reranking: discriminative parsing with non-local features. In Proceedings of ACL, pages 586-594, Ohio, USA.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Improve Chinese parsing with Max-Ent reranking parser", |
| "authors": [ |
| { |
| "first": "Liang-Ya", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Master Project Report", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang-Ya Huang. 2009. Improve Chinese parsing with Max-Ent reranking parser. In Master Project Re- port, Brown University.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Simple semi-supervised dependency parsing", |
| "authors": [ |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282-289, Massachusetts, USA, June.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Semi-supervised learning for natural language", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Percy Liang. 2005. Semi-supervised learning for natural language. Master's thesis, Massachusetts Institute of Technology.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Building a large annotated corpus of English", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "A" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary A. Marcinkiewicz. 1993. Building a large annotated corpus of English. Computational Linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Effective self-training for parsing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "McClosky", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the HLT/NAACL, Main Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "152--159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the HLT/NAACL, Main Conference, pages 152-159, New York City, USA, June.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Online large-margin training of dependency parsers", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "McDonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "91--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91-98, Ann Arbor, Michigan, June.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Maltparser: a data-driven parser-generator for dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "2216--2219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: a data-driven parser-generator for dependency parsing. In Proceedings of LREC, pages 2216-2219.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Improved inference for unlexicalized parsing", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of HLT/NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "404--411", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLT/NAACL, pages 404-411, Rochester, New York, April.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "A linear observed time statistical parser based on maximum entropy models", |
| "authors": [ |
| { |
| "first": "Adwait", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Proceedings of EMNLP, Rhode Island, USA.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "A classifier-based parser with linear run-time complexity", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of IWPT", |
| "volume": "", |
| "issue": "", |
| "pages": "125--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of IWPT, pages 125-132, Vancouver, Canada.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Parser combination by reparsing", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of HLT/NAACL, Companion Volume: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "129--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of HLT/NAACL, Companion Volume: Short Papers, pages 129-132, New York, USA.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "A new string-to-dependency machine translation algorithm with a target dependency language model", |
| "authors": [ |
| { |
| "first": "Libin", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinxi", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "577--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL, pages 577-585, Ohio, USA.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Capturing paradigmatic and syntagmatic lexical relations: towards accurate Chinese part-of-speech tagging", |
| "authors": [ |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Weiwei Sun and Hans Uszkoreit. 2012. Capturing paradigmatic and syntagmatic lexical relations: towards accurate Chinese part-of-speech tagging. In Proceedings of ACL, Jeju, Republic of Korea.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "The Penn Chinese Treebank: phrase structure annotation of a large corpus", |
| "authors": [ |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Fu Dong Chiou", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Natural Language Engineering", |
| "volume": "11", |
| "issue": "2", |
| "pages": "207--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207-238.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Statistical dependency analysis with support vector machines", |
| "authors": [ |
| { |
| "first": "Hiroyasu", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of IWPT", |
| "volume": "", |
| "issue": "", |
| "pages": "195--206", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195-206, Nancy, France.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Joint word segmentation and POS tagging using a single perceptron", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL/HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "888--896", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In Proceedings of ACL/HLT, pages 888-896, Columbus, Ohio.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Transition-based parsing of the Chinese Treebank using a global discriminative model", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of IWPT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese Treebank using a global discriminative model. In Proceedings of IWPT, Paris, France, October.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Transition-based dependency parsing with rich non-local features", |
| "authors": [ |
| { |
| "first": "Yue", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "188--193", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of ACL, pages 188-193, Portland, Oregon, USA.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Exploiting lexical dependencies from large-scale data for better shift-reduce constituency parsing", |
| "authors": [ |
| { |
| "first": "Muhua", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingbo", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Huizhen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "3171--3186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Muhua Zhu, Jingbo Zhu, and Huizhen Wang. 2012. Exploiting lexical dependencies from large-scale data for better shift-reduce constituency parsing. In Proceedings of COLING, pages 3171-3186, Mumbai, India.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Example parse trees of the same sentence with different numbers of actions.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "can be [SHIFT, SHIFT, REDUCE-NP, FINISH, IDLE] and [SHIFT, SHIFT, UNARY-NP, REDUCE-L-VP, FINISH], respectively. Their Axioms [\u03c6, 0, false, 0, 0] Goal [S, n, true, m : 2n \u2264 m \u2264 4n, C]", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF3": { |
| "text": "Comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parsers on spans of different lengths.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF5": { |
| "text": "Comparison of parsing accuracies of the baseline, extended parser, and semi-supervised parser on sentences of different lengths.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "text": "New features for the extended parser.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF1": { |
| "html": null, |
| "text": "belong to TLM. To", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td/><td>Stat</td><td>Train</td><td>Dev</td><td>Test</td><td>Unlabeled</td></tr><tr><td rowspan=\"2\">EN</td><td># sent</td><td>39.8k</td><td>1.7k</td><td>2.4k</td><td>3,139.1k</td></tr><tr><td># word</td><td>950.0k</td><td>40.1k</td><td>56.7k</td><td>76,041.4k</td></tr><tr><td rowspan=\"2\">CH</td><td># sent</td><td>18.1k</td><td>350</td><td>348</td><td>11,810.7k</td></tr><tr><td># word</td><td>493.8k</td><td>8.0k</td><td>6.8k</td><td>269,057.2k</td></tr></table>" |
| }, |
| "TABREF2": { |
| "html": null, |
| "text": "Statistics on sentence and word numbers of the experimental data.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "html": null, |
| "text": "Semi-supervised features designed on the base of word clusters, lexical dependencies, and dependency language models. Here the symbol s i denotes a stack item, q i denotes a queue item, w represents a word, and t represents a POS tag.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Lan.</td><td>System</td><td>LR</td><td>LP</td><td>F1</td></tr><tr><td rowspan=\"3\">ENG</td><td>Baseline</td><td>88.4</td><td>88.7</td><td>88.6</td></tr><tr><td>+padding</td><td>88.8</td><td>89.5</td><td>89.1</td></tr><tr><td>+features</td><td>89.0</td><td>89.7</td><td>89.3</td></tr><tr><td rowspan=\"3\">CHN</td><td>Baseline</td><td>85.6</td><td>86.3</td><td>86.0</td></tr><tr><td>+padding</td><td>85.5</td><td>87.2</td><td>86.4</td></tr><tr><td>+features</td><td>85.5</td><td>87.6</td><td>86.5</td></tr></table>" |
| }, |
| "TABREF4": { |
| "html": null, |
| "text": "Experimental results on the English and Chinese development sets with the padding technique and new supervised features added.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF5": { |
| "html": null, |
| "text": "http://nlp.stanford.edu/software/tagger.shtml 5 http://nlp.cs.nyu.edu/evalb 6 http://www.cis.upenn.edu/~dbikel/software.html#comparator", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Lan.</td><td>Features</td><td>LR</td><td>LP</td><td>F1</td></tr><tr><td rowspan=\"3\">ENG</td><td>+word cluster</td><td>89.3</td><td>90.0</td><td>89.7</td></tr><tr><td>+lexical dependencies</td><td>89.7</td><td>90.3</td><td>90.0</td></tr><tr><td>+dependency LM</td><td>90.0</td><td>90.6</td><td>90.3</td></tr><tr><td rowspan=\"3\">CHN</td><td>+word cluster</td><td>85.7</td><td>87.5</td><td>86.6</td></tr><tr><td>+lexical dependencies</td><td>87.2</td><td>88.6</td><td>87.9</td></tr><tr><td>+dependency LM</td><td>87.2</td><td>88.7</td><td>88.0</td></tr></table>" |
| }, |
| "TABREF6": { |
| "html": null, |
| "text": "Experimental results on the English and Chinese development sets with different types of semi-supervised features added incrementally to the extended parser.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF8": { |
| "html": null, |
| "text": "Comparison of our parsers and related work on the English test set.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Type</td><td>Parser</td><td>LR</td><td>LP</td><td>F1</td></tr><tr><td rowspan=\"4\">SI</td><td>Charniak (2000) *</td><td>79.6</td><td>82.1</td><td>80.8</td></tr><tr><td>Bikel (2004) \u2020</td><td>79.3</td><td>82.0</td><td>80.6</td></tr><tr><td>Baseline</td><td>82.1</td><td>83.1</td><td>82.6</td></tr><tr><td>Baseline+Padding</td><td>82.1</td><td>84.3</td><td>83.2</td></tr><tr><td rowspan=\"2\">RE</td><td>Charniak & Johnson (2005) *</td><td>80.8</td><td>83.8</td><td>82.3</td></tr><tr><td>Petrov & Klein (2007)</td><td>81.9</td><td>84.8</td><td>83.3</td></tr><tr><td rowspan=\"2\">SE</td><td>Zhu et al. (2012)</td><td>80.6</td><td>81.9</td><td>81.2</td></tr><tr><td>Baseline+Padding+Semi</td><td>84.4</td><td>86.8</td><td>85.6</td></tr><tr><td colspan=\"5\">Shift-reduce parsers. \u2020 The results of self-training with a single latent annotation grammar.</td></tr></table>" |
| }, |
| "TABREF10": { |
| "html": null, |
| "text": "Comparison of running times on the English test set, where the time for loading models is excluded.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>The results of SVM-based shift-reduce parsing with greedy search. \u2020 The results of MaxEnt-based shift-reduce parser with best-first search. \u2021 Times reported by authors running on different hardware.</td></tr></table>" |
| } |
| } |
| } |
| } |