{
"paper_id": "Q17-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:12:15.806040Z"
},
"title": "Learning to Prune: Exploring the Frontier of Fast and Accurate Parsing",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Vieira",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": "timv@cs.jhu.edu"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pruning hypotheses during dynamic programming is commonly used to speed up inference in settings such as parsing. Unlike prior work, we train a pruning policy under an objective that measures end-to-end performance: we search for a fast and accurate policy. This poses a difficult machine learning problem, which we tackle with the LOLS algorithm. LOLS training must continually compute the effects of changing pruning decisions: we show how to make this efficient in the constituency parsing setting, via dynamic programming and change propagation algorithms. We find that optimizing end-to-end performance in this way leads to a better Pareto frontier, i.e., parsers which are more accurate for a given runtime.",
"pdf_parse": {
"paper_id": "Q17-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "Pruning hypotheses during dynamic programming is commonly used to speed up inference in settings such as parsing. Unlike prior work, we train a pruning policy under an objective that measures end-to-end performance: we search for a fast and accurate policy. This poses a difficult machine learning problem, which we tackle with the LOLS algorithm. LOLS training must continually compute the effects of changing pruning decisions: we show how to make this efficient in the constituency parsing setting, via dynamic programming and change propagation algorithms. We find that optimizing end-to-end performance in this way leads to a better Pareto frontier, i.e., parsers which are more accurate for a given runtime.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Decades of research have been dedicated to heuristics for speeding up inference in natural language processing tasks, such as constituency parsing (Pauls and Klein, 2009; Caraballo and Charniak, 1998) and machine translation (Petrov et al., 2008; Xu et al., 2013). Such research is necessary because of a trend toward richer models, which improve accuracy at the cost of slower inference. For example, state-of-the-art constituency parsers use grammars with millions of rules, while dependency parsers routinely use millions of features. Without heuristics, these parsers take minutes to process a single sentence.",
"cite_spans": [
{
"start": 147,
"end": 170,
"text": "(Pauls and Klein, 2009;",
"ref_id": "BIBREF35"
},
{
"start": 171,
"end": 200,
"text": "Caraballo and Charniak, 1998)",
"ref_id": "BIBREF5"
},
{
"start": 225,
"end": 246,
"text": "(Petrov et al., 2008;",
"ref_id": "BIBREF38"
},
{
"start": 247,
"end": 263,
"text": "Xu et al., 2013)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To speed up inference, we will learn a pruning policy. During inference, the pruning policy is invoked to decide whether to keep or prune various parts of the search space, based on features of the input and (potentially) the state of the inference process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach searches for a policy with maximum end-to-end performance (reward) on training data, where the reward is a linear combination of problem-specific measures of accuracy and runtime, namely reward = accuracy \u2212 \u03bb \u2022 runtime. The parameter \u03bb \u2265 0 specifies the relative importance of runtime and accuracy. By adjusting \u03bb, we obtain policies with different speed-accuracy tradeoffs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
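To make the tradeoff concrete, here is a small illustrative sketch (not from the paper) of how sweeping \u03bb in reward = accuracy \u2212 \u03bb \u2022 runtime selects different policies from measured (accuracy, runtime) pairs; the policy names and numbers are hypothetical.

```python
# Hypothetical sketch: larger lambda shifts the optimum toward faster,
# less accurate policies. All names and measurements are illustrative.

def reward(accuracy, runtime, lam):
    return accuracy - lam * runtime

# Hypothetical (accuracy, runtime) measurements for candidate policies.
policies = {
    "aggressive": (0.85, 1.0),  # prunes heavily: fast, less accurate
    "moderate":   (0.89, 2.5),
    "cautious":   (0.90, 6.0),  # prunes little: slow, most accurate
}

def best_policy(lam):
    # The policy maximizing empirical reward at this tradeoff setting.
    return max(policies, key=lambda name: reward(*policies[name], lam))

for lam in (0.0, 0.01, 0.05):
    print(lam, best_policy(lam))
```

Each \u03bb value picks out one operating point; the set of winners over all \u03bb traces the Pareto frontier.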
{
"text": "For learning, we use Locally Optimal Learning to Search (LOLS) (Chang et al., 2015b) , an algorithm for learning sequential decision-making policies, which accounts for the end-to-end performance of the entire decision sequence jointly. Unfortunately, executing LOLS naively in our setting is prohibitive because it would run inference from scratch millions of times under different policies, training examples, and variations of the decision sequence. Thus, this paper presents efficient algorithms for repeated inference, which are applicable to a wide variety of NLP tasks, including parsing, machine translation and sequence tagging. These algorithms, based on change propagation and dynamic programming, dramatically reduce time spent evaluating similar decision sequences by leveraging problem structure and sharing work among evaluations.",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "(Chang et al., 2015b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our approach by learning pruning heuristics for constituency parsing. In this setting, our approach is the first to account for end-to-end performance of the pruning policy, without making independence assumptions about the reward function, as in prior work (Bodenstab et al., 2011) . In the larger context of learning-to-search for structured prediction, our work is unusual in that it learns to control a dynamic programming algorithm (i.e., graphbased parsing) rather than a greedy algorithm (e.g., transition-based parsing). Our experiments show that accounting for end-to-end performance in training leads to better policies along the entire Pareto frontier of accuracy and runtime.",
"cite_spans": [
{
"start": 270,
"end": 294,
"text": "(Bodenstab et al., 2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A simple yet effective approach to speeding up parsing was proposed by Bodenstab et al. (2011) , who trained a pruning policy \u03c0 to classify whether or not spans of the input sentence w 1 \u2022 \u2022 \u2022 w n form plausible constituents based on features of the input sentence. These predictions enable a parsing algorithm, such as CKY, to skip expensive steps during its execution: unlikely constituents are pruned. Only plausible constituents are kept, and the parser assembles the highest-scoring parse from the available constituents.",
"cite_spans": [
{
"start": 71,
"end": 94,
"text": "Bodenstab et al. (2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CKY with pruning",
"sec_num": "2"
},
{
"text": "Alg. 1 provides pseudocode for weighted CKY with pruning. Weighted CKY aims to find the highest-scoring derivation (parse tree) of a given sentence, where a given grammar specifies a non-negative score for each derivation rule and a derivation's score is the product of the scores of the rules it uses. 1 CKY uses a dynamic programming strategy to fill in a three-dimensional array \u03b2, known as the chart. The entry \u03b2 ikx is the score of the highest-scoring subderivation with fringe w i+1 . . . w k and root x. This value is computed by looping over the possible ways to assemble such a subderivation from smaller subderivations with scores \u03b2 ijy and \u03b2 jkz (lines 17-22). Additionally, we track a witness (backpointer) for each \u03b2 ikx , so that we can easily reconstruct the corresponding subderivation at line 23. The chart is initialized with lexical grammar rules (lines 3-9), which derive words from grammar symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CKY with pruning",
"sec_num": "2"
},
{
"text": "The key difference between pruned and unpruned CKY is an additional \"if\" statement (line 14), which queries the pruning policy \u03c0 to decide whether to compute the several values \u03b2 ikx associated with a span (i, k). Note that width-1 and width-n spans are always kept because all valid parses require them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CKY with pruning",
"sec_num": "2"
},
{
"text": "Bodenstab et al. (2011) train their pruning policy as a supervised classifier of spans. They derive direct supervision as follows: try to keep a span if it appears in the gold-standard parse, and prune it otherwise. They found that using an asymmetric weighting scheme helped find the right balance between false positives and false negatives. Intuitively, failing to prune is only a slight slowdown, whereas pruning a good item can ruin the accuracy of the parse.",
"cite_spans": [
{
"start": 0,
"end": 23,
"text": "Bodenstab et al. (2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end training",
"sec_num": "3"
},
{
"text": "1 As is common practice, we assume the grammar has been binarized. We focus on pre-trained grammars, leaving coadaptation of the grammar and pruning policy to future work. As indicated at lines 6 and 19, a rule's score may be made to depend on the context in which that rule is applied (Finkel et al., 2008) , although the pre-trained grammars in our present experiments are ordinary PCFGs for which this is not the case.",
"cite_spans": [
{
"start": 286,
"end": 307,
"text": "(Finkel et al., 2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CKY with pruning",
"sec_num": "2"
},
{
"text": "Algorithm 1 PARSE: Weighted CKY with pruning\n1: Input: grammar G, sentence w, policy \u03c0; Output: completed chart \u03b2, derivation d\n2: (initialize chart)\n3: \u03b2 := 0\n4: for k := 1 to n :\n5:   for x such that (x \u2192 w k ) \u2208 rules(G) :\n6:     s := G(x \u2192 w k | w, k)\n7:     if s > \u03b2 k\u22121,k,x :\n8:       \u03b2 k\u22121,k,x := s\n9:       witness(k\u22121, k, x) := (k\u22121, k, w k )\n10: for width := 2 to n :\n11:   for i := 0 to n \u2212 width :\n12:     k := i + width (current span is (i, k))\n13:     (policy determines whether to fill in this span)\n14:     if \u03c0(w, i, k) = prune :\n15:       continue\n16:     (fill in span by considering each split point j)\n17:     for j := i + 1 to k \u2212 1 :\n18:       for (x \u2192 y z) \u2208 rules(G) :\n19:         s := \u03b2 ijy \u2022 \u03b2 jkz \u2022 G(x \u2192 y z | w, i, j, k)\n20:         if s > \u03b2 ikx :\n21:           \u03b2 ikx := s\n22:           witness(i, k, x) := (j, y, z)\n23: d := follow backpointers from (0, n, ROOT)\n24: return (\u03b2, d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted CKY with pruning",
"sec_num": "2"
},
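The pseudocode of Alg. 1 can be rendered as a short Python sketch; the dict-based grammar encoding, the chart representation, and the keep-all policy below are our own illustrative choices, not the paper's implementation.

```python
# A minimal Python rendering of Alg. 1 (pruned weighted CKY) for a toy
# binarized PCFG. Grammar encoding and the keep-all policy are ours.

def parse(lexical, binary, words, policy):
    """lexical: {(x, word): score}; binary: {(x, y, z): score};
    policy(words, i, k) -> 'keep' or 'prune'. Returns (chart, witness)."""
    n = len(words)
    beta, witness = {}, {}
    # Initialize width-1 spans from lexical rules (lines 3-9).
    for k in range(1, n + 1):
        for (x, w), s in lexical.items():
            if w == words[k - 1] and s > beta.get((k - 1, k, x), 0.0):
                beta[(k - 1, k, x)] = s
                witness[(k - 1, k, x)] = (k - 1, k, w)
    # Fill in wider spans, narrowest first (lines 10-22).
    for width in range(2, n + 1):
        for i in range(0, n - width + 1):
            k = i + width
            # The policy may skip this span (line 14); the width-n span
            # is always kept, since every complete parse needs it.
            if width < n and policy(words, i, k) == 'prune':
                continue
            for j in range(i + 1, k):
                for (x, y, z), s_rule in binary.items():
                    s = (beta.get((i, j, y), 0.0)
                         * beta.get((j, k, z), 0.0) * s_rule)
                    if s > beta.get((i, k, x), 0.0):
                        beta[(i, k, x)] = s
                        witness[(i, k, x)] = (j, y, z)
    return beta, witness

# Toy grammar: S -> NP VP; NP -> 'dogs'; VP -> 'bark'.
lex = {('NP', 'dogs'): 0.9, ('VP', 'bark'): 0.8}
bin_rules = {('S', 'NP', 'VP'): 1.0}
beta, _ = parse(lex, bin_rules, ['dogs', 'bark'], lambda w, i, k: 'keep')
print(beta[(0, 2, 'S')])  # 0.9 * 0.8
```

Backpointer recovery (line 23 of Alg. 1) would walk `witness` from (0, n, ROOT); it is omitted here for brevity.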
{
"text": "Our end-to-end training approach improves upon asymmetric weighting by jointly evaluating the sequence of pruning decisions, measuring its effect on the test-time evaluation metric by actually running pruned CKY (Alg. 1). To estimate the value of a pruning policy \u03c0, we call PARSE(G, w (i) , \u03c0) on each training sentence w (i) , and apply the reward function, r = accuracy \u2212 \u03bb \u2022 runtime. The empirical value of a policy is its average reward on the training set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end training",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R(\\pi) = \\frac{1}{m} \\sum_{i=1}^{m} \\mathbb{E}\\, r(\\mathrm{PARSE}(G, w^{(i)}, \\pi))",
"eq_num": "(1)"
}
],
"section": "End-to-end training",
"sec_num": "3"
},
{
"text": "The expectation in the definition may be dropped if PARSE, \u03c0, and r are all deterministic, as in our setting. 2 Our definition of r depends on the user parameter \u03bb \u2265 0, which specifies the amount of accuracy the user would sacrifice to save one unit of runtime. Training under a range of values for \u03bb gives rise to policies covering a number of operating points along the Pareto frontier of accuracy and runtime. End-to-end training gives us a principled way to decide what to prune. Rather than artificially labeling each pruning decision as inherently good or bad, we evaluate its effect in the context of the particular sentence and the other pruning decisions. Actions that prune a gold constituent are not equally bad: some cause cascading errors, while others are \"worked around\" in the sense that the grammar still selects a mostly-gold parse. Similarly, actions that prune a non-gold constituent are not equally good: some provide more overall speedup (e.g., pruning narrow constituents prevents wider ones from being built), and some even improve accuracy by suppressing an incorrect but high-scoring parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end training",
"sec_num": "3"
},
{
"text": "More generally, the gold vs. non-gold distinction is not even available in NLP tasks where one is pruning potential elements of a latent structure, such as an alignment (Xu et al., 2013) or a finer-grained parse (Matsuzaki et al., 2005 ). Yet our approach can still be used in such settings, by evaluating the reward on the downstream task that the latent structure serves.",
"cite_spans": [
{
"start": 169,
"end": 186,
"text": "(Xu et al., 2013)",
"ref_id": "BIBREF50"
},
{
"start": 212,
"end": 235,
"text": "(Matsuzaki et al., 2005",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end training",
"sec_num": "3"
},
{
"text": "Past work on optimizing end-to-end performance is discussed in \u00a78. One might try to scale these techniques to learning to prune, but in this work we take a different approach. Given a policy, we can easily find small ways to improve it on specific sentences by varying individual pruning actions (e.g., if \u03c0 currently prunes a span then try keeping it instead). Given a batch of improved action sequences (trajectories), the remaining step is to search for a policy which produces the improved trajectories. Conveniently, this can be reduced to a classification problem, much like the asymmetric weighting approach, except that the supervised labels and misclassification costs are not fixed across iterations, but rather are derived from interaction with the environment (i.e., PARSE and the reward function). This idea is formalized as a learning algorithm called Locally Optimal Learning to Search (Chang et al., 2015b) , described in \u00a74.",
"cite_spans": [
{
"start": 901,
"end": 922,
"text": "(Chang et al., 2015b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end training",
"sec_num": "3"
},
{
"text": "The counterfactual interventions we require (evaluating how the reward would change if we changed one action) can be computed more efficiently using our novel algorithms ( \u00a75) than by the default strategy of running the parser repeatedly from scratch. The key is to reuse work among evaluations, which is possible because LOLS only makes tiny changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end training",
"sec_num": "3"
},
{
"text": "Pruned inference is a sequential decision process. The process begins in an initial state s 0 . In pruned CKY, s 0 specifies the state of Alg. 1 at line 10, after the chart has been initialized from some selected sentence. Next, the policy is invoked to choose action a 0 = \u03c0(s 0 ) (in our case at line 14), which affects what the parser does next. Eventually the parser reaches some state s 1 from which it calls the policy to choose action a 1 = \u03c0(s 1 ), and so on. When the policy is invoked at state s t , it selects action a t based on features extracted from the current state s t : a snapshot of the input sentence, grammar and parse chart at time t. 3 We call the state-action",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "sequence s 0 a 0 s 1 a 1 \u2022 \u2022 \u2022 s T a trajectory, where T is the trajectory length. At the final state, the reward function is evaluated, r(s T ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "The LOLS algorithm for learning a policy is given in Alg. 2, 4 with a graphical illustration in Fig. 1 . At a high level, LOLS alternates between evaluating and improving the current policy \u03c0 i .",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 102,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "The evaluation phase first samples a trajectory from \u03c0 i , called a roll-in: s 0 a 0 s 1 a 1 \u2022 \u2022 \u2022 s T \u223c ROLL-IN(\u03c0 i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "In our setting, s 0 is derived from a randomly sampled training sentence, but the rest of the trajectory is then deterministically computed by \u03c0 i given s 0 . Then we revisit each state s in the roll-in (line 7), and try each available action \u0101 \u2208 A(s) (line 9), executing \u03c0 i thereafter (a rollout) to measure the resulting reward r[\u0101] (line 10). Our parser is deterministic, so a single rollout is an unbiased, 0-variance estimate of the expected reward. This process is repeated many times, yielding a large list Q i of pairs \u27e8s, r\u27e9, where s is a state that was encountered in some roll-in and r maps the possible actions A(s) in that state to their measured rewards.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
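The roll-in / intervention / rollout loop can be sketched on a toy deterministic "parser" whose state is just the prefix of keep/prune decisions; the reward function and all names here are invented for illustration.

```python
# Sketch of one LOLS evaluation phase (roll-in, interventions, rollouts)
# on a toy deterministic environment. Entirely illustrative.

T = 3  # number of pruning decisions per sentence

def policy(prefix):
    """Current policy pi_i: deterministically keeps everything."""
    return 'keep'

def reward(mask):
    # Toy terminal reward: keeping decision 0 is crucial; every other
    # kept span costs a little runtime.
    return (1.0 if mask[0] == 'keep' else 0.0) - 0.1 * mask[1:].count('keep')

def roll_in(pi):
    traj = []
    for _ in range(T):
        traj.append(pi(traj))
    return traj

def rollout(pi, prefix, action):
    """Intervene with `action` after `prefix`, then follow pi to the end."""
    traj = prefix + [action]
    while len(traj) < T:
        traj.append(pi(traj))
    return reward(traj)

traj = roll_in(policy)
Q = []  # pairs (state, {action: measured reward})
for t in range(T):
    r = {a: rollout(policy, traj[:t], a) for a in ('keep', 'prune')}
    Q.append((traj[:t], r))

# Pruning decision 0 loses the "parse"; pruning decision 1 saves runtime.
print(Q[0][1], Q[1][1])
```

Note how the same action (prune) gets different rewards at different states, which is exactly the context-dependence that fixed per-span labels cannot capture.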
{
"text": "The improvement phase now trains a new policy \u03c0 i+1 to try to choose high-reward actions, seeking a policy that will \"on average\" get high rewards r[\u03c0 i+1 (s)]. Good generalization is important: the policy must select high-reward actions even in states s that are not represented in Q i , in case they are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "Figure 1: Example LOLS iteration (lines 6-10). Roll-in with the current policy \u03c0 i (starting with a random sentence), s 0 a 0 s 1 a 1 \u2022 \u2022 \u2022 s 5 \u223c ROLL-IN(\u03c0 i ). Perform interventions at each state along the roll-in (only t = 2 is shown). The intervention tries alternative actions at each state (e.g., \u0101 2 = prune at s 2 ). We roll out after the intervention by following \u03c0 i until a terminal state, s 3 \u0101 3 s 4 \u0101 4 s 5 \u223c ROLLOUT(\u03c0 i , s 2 , \u0101 2 ), and evaluate the reward of the terminal state r(s 5 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "Algorithm 2 LOLS algorithm for learning to prune.\n4:   Q i := \u2205\n5:   for j := 1 to minibatch size :\n6:     s 0 a 0 s 1 a 1 \u2022 \u2022 \u2022 s T \u223c ROLL-IN(\u03c0 i ) (sample)\n7:     for t := 0 to T \u2212 1 :\n8:       (intervene: evaluate each action at s t )\n9:       for \u0101 t \u2208 A(s t ) : (possible actions)\n10:         r t [\u0101 t ] \u223c ROLLOUT(\u03c0 i , s t , \u0101 t )\n11:       Q i .append(\u27e8s t , r t \u27e9)\n12:   (improve: train with dataset aggregation)\n13:   \u03c0 i+1 \u2190 TRAIN(Q 1 \u222a \u2022 \u2022 \u2022 \u222a Q i )\n14: (finalize: pick the best policy over all iterations)\n15: return argmax i R(\u03c0 i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "encountered when running the new policy \u03c0 i+1 (or when parsing test sentences). Thus, beyond just regularizing the training objective, we apply dataset aggregation: we take the training set to include not just Q i but also the examples from previous iterations (line 13). This also ensures that the sequence of policies \u03c0 1 , \u03c0 2 , . . . will be \"stable\" and will eventually converge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "So line 13 seeks to find a good classifier \u03c0 i+1 using a training set: a possible classifier \u03c0 would receive from each training example \u27e8s, r\u27e9 a reward of r[\u03c0(s)]. In our case, where A(s) = {keep, prune}, this cost-sensitive classification problem is equivalent to training an ordinary binary classifier, after converting each training example \u27e8s, r\u27e9 to \u27e8s, argmax a r[a]\u27e9 and giving this example a weight of |r[keep] \u2212 r[prune]|. Our specific classifier is described in \u00a76.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
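The reduction from cost-sensitive to weighted binary classification described above can be sketched as follows; the example states and rewards are made up for illustration.

```python
# Converting LOLS cost-sensitive examples <s, r> into weighted binary
# training examples, for the two-action case A(s) = {keep, prune}.
# The example data below is illustrative.

def to_weighted_binary(examples):
    """examples: list of (state, {'keep': r, 'prune': r}).
    Returns (state, label, weight) triples: label = argmax_a r[a],
    weight = |r[keep] - r[prune]|."""
    out = []
    for s, r in examples:
        label = max(r, key=r.get)
        weight = abs(r['keep'] - r['prune'])
        out.append((s, label, weight))
    return out

data = [('span(0,3)', {'keep': 0.9, 'prune': 0.2}),
        ('span(1,2)', {'keep': 0.5, 'prune': 0.6})]
print(to_weighted_binary(data))
```

States where the two actions earn nearly equal reward get near-zero weight, so the classifier spends its capacity on the decisions that actually matter.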
{
"text": "In summary, the evaluation phase of LOLS collects training data for a cost-sensitive classifier, where the inputs (states), outputs (actions), and costs are obtained by interacting with the environment. LOLS concocts a training set and repeatedly revises it, similar to the well-known Expectation-Maximization algorithm. This enables end-to-end training of systems with discrete decisions and nondecomposable reward functions. LOLS gives us a principled framework for deriving (nonstationary) \"supervision\" even in tricky cases such as latent-variable inference (mentioned in \u00a73). LOLS has strong theoretical guarantees, though in pathological cases, it may take exponential time to converge (Chang et al., 2015b) .",
"cite_spans": [
{
"start": 692,
"end": 713,
"text": "(Chang et al., 2015b)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "The inner loop of the evaluation phase performs roll-ins, interventions and rollouts. Roll-ins ensure that the policy is (eventually) trained under the distribution of states it tends to encounter at test time. Interventions and rollouts force \u03c0 i to explore the effect of currently disfavored actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning algorithm",
"sec_num": "4"
},
{
"text": "Unlike most applications of LOLS and related algorithms, such as SEARN (Daum\u00e9 III, 2006) and DAGGER, executing the policy is a major bottleneck in training. Because our dynamic programming parser explores many possibilities (unlike a greedy, transition-based decoder), its trajectories are quite long. This not only slows down each rollout: it means we must do more rollouts.",
"cite_spans": [
{
"start": 71,
"end": 88,
"text": "(Daum\u00e9 III, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient rollouts",
"sec_num": "5"
},
{
"text": "In our case, the trajectory has length T = n(n+1)/2 \u2212 1 \u2212 n for a sentence of length n, where T is also the number of pruning decisions: one for each span other than the root and width-1 spans. LOLS must then perform T rollouts on this example. This means that to evaluate policy \u03c0 i , we must parse each sentence in the minibatch hundreds of times (e.g., 189 for n = 20, 434 for n = 30, and 779 for n = 40).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient rollouts",
"sec_num": "5"
},
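The formula and the quoted counts can be checked directly; `count_spans` is our own helper that enumerates the spans of width 2 through n \u2212 1, i.e., every span except the root span and the width-1 spans.

```python
# Checking T = n(n+1)/2 - 1 - n against a direct enumeration of the
# spans that receive a pruning decision.

def num_decisions(n):
    return n * (n + 1) // 2 - 1 - n

def count_spans(n):
    # Spans (i, k) with width k - i in [2, n - 1].
    return sum(1 for i in range(n) for k in range(i + 2, n + 1)
               if k - i < n)

for n in (20, 30, 40):
    print(n, num_decisions(n))  # matches the figures quoted in the text
```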
{
"text": "We can regard each policy \u03c0 as defining a pruning mask m, an array that maps each of the T spans (i, k) to a decision m ik (1 = keep, 0 = prune). Each rollout tries flipping a different bit in this mask. We could spend less time on each sentence by sampling only some of its T rollouts (see \u00a76). Regardless, the rollouts we do on a given sentence are related: in this section we show how to get further speedups by sharing work among them. In \u00a75.2, we leverage the fact that rollouts will be similar to one another (differing by a single pruning decision). In \u00a75.3, we show that the reward of all T rollouts can be computed simultaneously by dynamic programming under some assumptions about the structure of the reward function (described later). We found these algorithms to be crucial to training in a \"reasonable\" amount of time (see the empirical comparison in \u00a77.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Efficient rollouts",
"sec_num": "5"
},
{
"text": "It is convenient to present our efficient rollout algorithms in terms of the hypergraph structure of Alg. 1 (Klein and Manning, 2001; Huang, 2008; Li and Eisner, 2009; Eisner and Blatz, 2007) . A hypergraph describes the information flow among related quantities in a dynamic programming algorithm. Many computational tricks apply generically to hypergraphs.",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "(Klein and Manning, 2001;",
"ref_id": "BIBREF28"
},
{
"start": 134,
"end": 146,
"text": "Huang, 2008;",
"ref_id": "BIBREF25"
},
{
"start": 147,
"end": 167,
"text": "Li and Eisner, 2009;",
"ref_id": "BIBREF31"
},
{
"start": 168,
"end": 191,
"text": "Eisner and Blatz, 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Parsing as hypergraphs",
"sec_num": "5.1"
},
{
"text": "A hypergraph edge e (or hyperedge) is a \"generalized arrow\" e.head \u227a e.Tail with one output and a list of inputs. We regard each quantity \u03b2 ikx , m ik , or G(. . .) in Alg. 1 as the value of a corresponding hypergraph vertex \u03b2 ikx , \u1e41 ik , or \u0120(. . .). Thus, value(v) = v for any vertex v. Each \u1e41 ik 's value is computed by the policy \u03c0 or chosen by a rollout intervention. Each \u0120's value is given by the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Parsing as hypergraphs",
"sec_num": "5.1"
},
{
"text": "Values of \u03b2 ikx , by contrast, are computed at line 19 if k \u2212 i > 1. To record the dependence of \u03b2 ikx on other quantities, our hypergraph includes the hyperedge \u03b2 ikx \u227a (\u03b2 ijy , \u03b2 jkz , \u1e41 ik , \u0121) for each 0 \u2264 i < j < k \u2264 n and (x \u2192 y z) \u2208 rules(G), where \u0121 denotes the vertex \u0120(x \u2192 y z | w, i, j, k). If k \u2212 i = 1, then values of \u03b2 ikx are instead computed at line 6, which does not access any other \u03b2 values or the pruning mask. Thus our hypergraph includes the hyperedge \u03b2 ikx \u227a (\u0121) whenever i = k \u2212 1, 0 \u2264 i < k \u2264 n, and (x \u2192 w k ) \u2208 rules(G), with \u0121 = \u0120(x \u2192 w k | w, k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Parsing as hypergraphs",
"sec_num": "5.1"
},
{
"text": "With this setup, the value \u03b2 ikx is the maximum score of any derivation of vertex \u03b2 ikx (a tree rooted at \u03b2 ikx , representing a subderivation), where the score of a derivation is the product of its leaf values. Alg. 1 computes it by considering hyperedges \u03b2 ikx \u227a T and the previously computed values of the vertices in the tail T . For a vertex v, we write In(v) and Out(v) for its sets of incoming and outgoing hyperedges. Our algorithms follow these hyperedges implicitly, without the overhead of materializing or storing them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background: Parsing as hypergraphs",
"sec_num": "5.1"
},
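The max-product evaluation of the hypergraph described above can be sketched in a few lines. This is an illustrative sketch, not the paper's (Cython) code: the function name `compute_values` and the toy chart are ours, and we materialize hyperedges explicitly even though the paper follows them implicitly.

```python
import math

def compute_values(leaf_values, hyperedges):
    """Max-product evaluation of a parse hypergraph.

    leaf_values: dict vertex -> value (grammar weights g and mask bits m).
    hyperedges: list of (head, tail) pairs in bottom-up (topological)
    order, so every tail vertex is available before its head.
    An internal vertex's value is the max, over its incoming hyperedges,
    of the product of the tail vertices' values.
    """
    value = dict(leaf_values)
    for head, tail in hyperedges:
        score = math.prod(value[u] for u in tail)
        # running max over this vertex's incoming hyperedges
        value[head] = max(value.get(head, 0.0), score)
    return value

# Tiny chart: span (0,2,X) can be built two ways; both multiply in the
# same pruning-mask bit m_02, so setting m_02 = 0 prunes the constituent.
leaves = {"g:X->A B": 0.4, "g:X->C D": 0.6,
          "b(0,1,A)": 1.0, "b(1,2,B)": 1.0,
          "b(0,1,C)": 0.5, "b(1,2,D)": 0.5, "m_02": 1.0}
edges = [("b(0,2,X)", ("b(0,1,A)", "b(1,2,B)", "m_02", "g:X->A B")),
         ("b(0,2,X)", ("b(0,1,C)", "b(1,2,D)", "m_02", "g:X->C D"))]
vals = compute_values(leaves, edges)
```

Here `vals["b(0,2,X)"]` is max(1·1·1·0.4, 0.5·0.5·1·0.6) = 0.4, the best derivation score of that vertex.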
{
"text": "Change propagation is an efficient method for incrementally re-evaluating a computation under a change to its inputs (Acar and Ley-Wild, 2008; Filardo and Eisner, 2012) . In our setting, each roll-in at Alg. 2 line 6 evaluates the reward r(PARSE(G, x i , \u03c0)) from (1), which involves computing an entire parse chart via Alg. 1. The inner loop at line 10 performs T interventions per roll-in, which ask how reward would have changed if one bit in the pruning mask m had been different. Rather than reparsing from scratch (T times) to determine this, we can simply adjust the initial roll-in computation (T times).",
"cite_spans": [
{
"start": 117,
"end": 142,
"text": "(Acar and Ley-Wild, 2008;",
"ref_id": "BIBREF0"
},
{
"start": 143,
"end": 168,
"text": "Filardo and Eisner, 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "CP is efficient when only a small fraction of the computation needs to be adjusted. In principle, flipping a single pruning bit can change up to 50% of the chart, so one might expect the bookkeeping overhead of CP to outweigh the gains. In practice, however, 90% of the interventions change < 10% of the β values in the chart. The reason is that β ikx is a maximum over many quantities, only one of which \"wins.\" Changing a given β ijy rarely affects this maximum, and so changes are unlikely to propagate from vertex β̂ ijy to β̂ ikx. Since changes are not very contagious, the \"epidemic of changes\" does not spread far.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "Alg. 3 provides pseudocode for updating the highest-scoring derivation found by Alg. 1. We remark that RECOMPUTE is called only when we flip a bit from keep to prune, which removes hyperedges and potentially decreases vertex values. The reverse flip only adds hyperedges, which can only increase vertex values via a running max (lines 12-14).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
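The flip-and-propagate loop can be illustrated on the toy chart from §5.1. This is a simplified sketch of change propagation under our own naming (the paper's Alg. 3 additionally maintains witnesses and distinguishes the cheap increase-only case from RECOMPUTE; here we recompute the head's max either way):

```python
import math
from collections import defaultdict

# Toy chart: vertex "X" has two incoming hyperedges sharing mask bit "m".
value = {"gAB": 0.4, "gCD": 0.6, "A": 1.0, "B": 1.0,
         "C": 0.5, "D": 0.5, "m": 1.0, "X": 0.4}
edges = [("X", ("A", "B", "m", "gAB")), ("X", ("C", "D", "m", "gCD"))]

in_edges, out_edges = defaultdict(list), defaultdict(list)
for head, tail in edges:
    in_edges[head].append(tail)
    for u in tail:
        out_edges[u].append(head)

def change(leaf, new_val):
    """Flip a leaf (e.g., a pruning bit) and propagate the change.
    Only vertices whose value actually changes are pushed further, so
    an intervention that does not alter any max stops immediately."""
    value[leaf] = new_val
    agenda = [leaf]
    while agenda:
        u = agenda.pop()
        for head in set(out_edges[u]):
            best = max(math.prod(value[w] for w in tail)
                       for tail in in_edges[head])
            if best != value[head]:
                value[head] = best
                agenda.append(head)

change("m", 0.0)   # intervention: prune the span; X's value drops to 0
assert value["X"] == 0.0
change("m", 1.0)   # restore the original chart by flipping the bit back
assert value["X"] == 0.4
```

The final two calls mirror the paper's restore step: after measuring the effect of one intervention, the simplest way to recover the roll-in chart is to call CHANGE again with the original value.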
{
"text": "After determining the effect of flipping a bit, we must restore the original chart before trying a different bit (the next rollout). The simplest approach is to call Alg. 3 again to flip the bit back. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "Algorithm 3 Change propagation algorithm 1: Global: Alg. 1's vertex values/witnesses (roll-in) 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "procedure CHANGE(v̂, v) 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "Change the value of a leaf vertex v̂ to v. The naive rollout algorithm runs the parser T times, once for each variation of the pruning mask. The reader may be reminded of the finite-difference approximation to the gradient of a function, which also measures the effect of perturbing each input value individually. In fact, for certain reward functions, the naive algorithm can be precisely regarded as computing a gradient—and thus we can use a more efficient algorithm, back-propagation, which finds the entire gradient vector of reward as fast (in the big-O sense) as computing the reward once. The overall algorithm is O(|E| + T), where |E| is the total number of hyperedges, whereas the naive algorithm is O(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "|E′| • T)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "where |E′| ≤ |E| is the maximum number of hyperedges actually visited on any rollout. What accuracy measure must we use? Let r(d) denote the recall of a derivation d—the fraction of gold constituents that appear as vertices in the derivation. A simple accuracy metric would be 1-best recall, the recall r(d̂) of the highest-scoring derivation d̂ that was not pruned. In this section, we relax that to expected recall,6 r̃ = Σ_d p(d) r(d). Here we interpret the pruned hypergraph's values as an unnormalized probability distribution over derivations, where the probability p(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "d) = p̃(d)/Z of a derivation is proportional to its score p̃(d) = ∏_{u ∈ leaves(d)} value(u).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "Though r̃ is not quite our evaluation metric, it captures more information about the parse forest, and so may offer some regularizing effect when used in a training criterion (see §7.1). In any case, r̃ is close to r(d̂) when probability mass is concentrated on a few derivations, which is common with heavy pruning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "We can re-express r̃ as r̄/Z, where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\bar{r} = \\sum_d \\tilde{p}(d)\\, r(d), \\qquad Z = \\sum_d \\tilde{p}(d)",
"eq_num": "(2)"
}
],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "These can be efficiently computed by dynamic programming (DP), specifically by a variant of the inside algorithm (Li and Eisner, 2009). Since p̃(d) is a product of rule weights and pruning-mask bits at d's leaves (§5.1), each appearing at most once, both r̄ and Z vary linearly in any one of these inputs provided that all other inputs are held constant. Thus, the exact effect on r̄ or Z of changing an input m ik can be found from the partial derivatives with respect to it.",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "(Li and Eisner, 2009)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
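The inside-algorithm variant can be sketched with an expectation-semiring pair, in the spirit of Li and Eisner (2009). This is our own minimal illustration on a two-derivation forest, not the paper's implementation: each value is a pair (z, r), where z sums derivation scores and r sums score-weighted recall counts, so r/z is the expected recall count.

```python
# Expectation-semiring operations on pairs (z, r).
def otimes(a, b):
    """Combine independent factors: scores multiply, recall mass
    distributes by the product rule."""
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def oplus(a, b):
    """Combine alternative derivations: componentwise sum."""
    return (a[0] + b[0], a[1] + b[1])

# Forest whose root X is built two ways; only the A-B way contains a
# gold constituent, which contributes 1 to the recall count.
A   = (1.0, 1.0)   # gold subtree: score 1, carries 1 gold constituent
B   = (1.0, 0.0)
C   = (0.5, 0.0)
D   = (0.5, 0.0)
gAB = (0.4, 0.0)   # rule weights carry no recall mass
gCD = (0.6, 0.0)

X = oplus(otimes(otimes(A, B), gAB), otimes(otimes(C, D), gCD))
z, r = X
expected_recall_count = r / z   # = (0.4 * 1) / (0.4 + 0.15)
```

Running the pair semiring bottom-up over the whole chart yields r̄ and Z at the root in a single inside pass.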
{
"text": "In particular, if we increased m ik by Δ ∈ {−1, 1} (to flip this bit), the new value of r̃ would be exactly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\frac{\\bar{r} + \\Delta \\cdot \\partial\\bar{r}/\\partial m_{ik}}{Z + \\Delta \\cdot \\partial Z/\\partial m_{ik}}",
"eq_num": "(3)"
}
],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
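The linearity argument behind eq. (3) is easy to verify numerically. The sketch below (our own toy forest, with made-up weights) enumerates derivations by brute force; because each mask bit appears at most once per derivation, a finite difference gives the exact partial derivative, and the flipped expected recall matches eq. (3) exactly.

```python
import math

# Each derivation: (mask-bit indices it uses, grammar weight, recall r(d)).
DERIVS = [([0], 0.4, 1.0), ([0], 0.3, 0.5), ([1], 0.2, 0.0)]

def rbar_Z(m):
    """Compute r-bar = sum_d p~(d) r(d) and Z = sum_d p~(d) by enumeration."""
    rbar = sum(w * r * math.prod(m[i] for i in bits)
               for bits, w, r in DERIVS)
    Z = sum(w * math.prod(m[i] for i in bits) for bits, w, _ in DERIVS)
    return rbar, Z

m = [1.0, 0.0]                      # bit 1 currently pruned
rbar, Z = rbar_Z(m)

def partials(k):
    """r-bar and Z are linear in m[k], so this difference is exact."""
    bumped = list(m); bumped[k] += 1.0
    rb2, Z2 = rbar_Z(bumped)
    return rb2 - rbar, Z2 - Z

d_rbar, d_Z = partials(1)
delta = 1.0                          # unprune bit 1
predicted = (rbar + delta * d_rbar) / (Z + delta * d_Z)   # eq. (3)
true_rbar, true_Z = rbar_Z([1.0, 1.0])
```

In practice the paper obtains all partials at once by back-propagation (the outside algorithm) rather than by one difference per bit.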
{
"text": "It remains to compute these partial derivatives. All partials can be jointly computed by back-propagation, which is equivalent to another dynamic program known as the outside algorithm (Eisner, 2016).",
"cite_spans": [
{
"start": 182,
"end": 196,
"text": "(Eisner, 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "The inside algorithm only needs to visit the |E′| unpruned edges, but the outside algorithm must also visit some pruned edges, to determine the effect of \"unpruning\" them (changing their m ik input from 0 to 1) by finding ∂r̄/∂m ik and ∂Z/∂m ik. On the other hand, these partials are 0 when some other input to the hyperedge is 0. This case is common when the hypergraph is heavily pruned (|E′| ≪ |E|), and means that back-propagation need not descend further through that hyperedge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "Note that the DP method computes only the accuracies of rollouts-not the runtimes. In this paper, we will combine DP with a very simple runtime measure that is trivial to roll out (see \u00a77). An alternative would be to use CP to roll out the runtimes. This is very efficient: to measure just runtime, CP only needs to update the record of which constituents or edges are built, and not their scores, so the changes are easier to compute than in \u00a75.2, and peter out more quickly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Change propagation (CP)",
"sec_num": "5.2"
},
{
"text": "Setup: We use the standard English parsing setup: the Penn Treebank (Marcus et al., 1993) with the standard train/dev/test split, and standard tree normalization. 8 For efficiency during training, we restrict the length of sentences to \u2264 40. We do not restrict the length of test sentences. We experiment with two grammars: coarse, the \"no frills\" left-binarized treebank grammar, and fine, a variant of the Berkeley split-merge level-6 grammar (Petrov et al., 2006) as provided by Dunlop (2014, ch. 5 ). The parsing algorithms used during training are described in \u00a75. Our test-time parsing algorithm uses the left-child loop implementation of CKY (Dunlop et al., 2010) . All algorithms allow unary rules (though not chains). We evaluate accuracy at test time with the F 1 score from the official EVALB script (Sekine and Collins, 1997) .",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF32"
},
{
"start": 445,
"end": 466,
"text": "(Petrov et al., 2006)",
"ref_id": "BIBREF37"
},
{
"start": 482,
"end": 501,
"text": "Dunlop (2014, ch. 5",
"ref_id": null
},
{
"start": 649,
"end": 670,
"text": "(Dunlop et al., 2010)",
"ref_id": "BIBREF14"
},
{
"start": 811,
"end": 837,
"text": "(Sekine and Collins, 1997)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser details 7",
"sec_num": "6"
},
{
"text": "Training: Note that we never retrain the grammar weights-we train only the pruning policy. To TRAIN our classifiers (Alg. 2 line 13), we use L 2 -regularized logistic regression, trained with L-BFGS optimization. We always rescale the example weights in the training set to sum to 1 (otherwise as LOLS proceeds, dataset aggregation overwhelms the regularizer). For the baseline (defined in next section), we determine the regularization coefficient by sweeping {2 \u221211 , 2 \u221212 , 2 \u221213 , 2 \u221214 , 2 \u221215 } and picking the best value (2 \u221213 ) based on the dev frontier. We re-used this regularization parameter for LOLS. The number of LOLS iterations is determined by a 6-day training-time limit 9 (meaning some jobs run many fewer iterations than others). For LOLS minibatch size we use 10K on the coarse grammar and 5K on the fine grammar. At line 15 of Alg. 2, we return the policy that maximized reward on development data, using the reward function from training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parser details 7",
"sec_num": "6"
},
{
"text": "Features: We use similar features to Bodenstab et al. (2011), but we have removed features that depend on part-of-speech tags. We use the following 16 feature templates for span (i, k) with 1 < k−i < N : bias, sentence length, boundary words, conjunctions of boundary words, conjunctions of word shapes, span shape, width bucket. Shape features map a word or phrase into a string of character classes (uppercase, lowercase, numeric, spaces); we truncate substrings of identical classes to length two; punctuation chars are never modified in any way. Width buckets use the following partition: 2, 3, 4, 5, [6, 10], [11, 20], [21, ∞). We use feature hashing (Weinberger et al., 2009) with MurmurHash3 (Appleby, 2008) and project to 2^22 features. Conjunctions are taken at positions (i−1, i), (k, k+1), (i−1, k+1) and (i, k). We use special begin and end symbols when a template accesses positions beyond the sentence boundary. Hall et al. (2014) give examples motivating our feature templates and show experimentally that they are effective in multiple languages. Boundary words are strong surface cues for phrase boundaries. Span shape features are also useful as they (minimally) check for matched parentheses and quotation marks.",
"cite_spans": [
{
"start": 603,
"end": 606,
"text": "[6,",
"ref_id": null
},
{
"start": 607,
"end": 610,
"text": "10]",
"ref_id": null
},
{
"start": 613,
"end": 617,
"text": "[11,",
"ref_id": null
},
{
"start": 618,
"end": 621,
"text": "20]",
"ref_id": null
},
{
"start": 656,
"end": 681,
"text": "(Weinberger et al., 2009)",
"ref_id": "BIBREF48"
},
{
"start": 699,
"end": 714,
"text": "(Appleby, 2008)",
"ref_id": null
},
{
"start": 928,
"end": 946,
"text": "Hall et al. (2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parser details 7",
"sec_num": "6"
},
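The shape and width-bucket templates can be sketched as follows. This is an illustrative reconstruction under our own assumptions: the class symbols ('A', 'a', '9', '_') and the function names `shape`/`width_bucket` are ours, and the paper's exact template encoding may differ.

```python
import re

def shape(token):
    """Map a token to character classes, truncating runs of identical
    classes to length two; punctuation characters are kept verbatim."""
    def cls(ch):
        if ch.isupper(): return "A"
        if ch.islower(): return "a"
        if ch.isdigit(): return "9"
        if ch.isspace(): return "_"
        return ch                      # punctuation never modified
    s = "".join(cls(ch) for ch in token)
    return re.sub(r"(.)\1{2,}", r"\1\1", s)   # "aaaaa" -> "aa"

def width_bucket(width):
    """Bucket a span width (> 1) using the partition from the text:
    2, 3, 4, 5, [6,10], [11,20], [21, inf)."""
    if width <= 5:
        return str(width)
    if width <= 10:
        return "[6,10]"
    if width <= 20:
        return "[11,20]"
    return "[21,inf)"
```

For example, `shape("McDonald's")` keeps the apostrophe intact while collapsing the run of lowercase letters, and `width_bucket(7)` falls in the `[6,10]` bucket.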
{
"text": "Reward functions and surrogates: Each user has a personal reward function. In this paper, we choose to specify our true reward as accuracy \u2212 \u03bb \u2022 runtime, where accuracy is given by labeled F 1 percentage and runtime by mega-pushes (mpush), millions of calls per sentence to lines 6 and 19 of Alg. 1, which is in practice proportional to seconds per sentence (correlation > 0.95) and is more replicable. We evaluate accordingly (on test data)-but during LOLS training we approximate these metrics. We compare:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "• r CP (fast): Use change propagation (§5.2) to compute accuracy on a sentence as F1 of just that sentence, and to approximate runtime as ||β||_0, the number of constituents that were built. 10",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "• r DP (faster): Use dynamic programming (§5.3) to approximate accuracy on a sentence as expected recall. 11 This time we approximate runtime more crudely as ||m||_0, the number of nonzeros in the pruning mask for the sentence (i.e., the number of spans whose constituents the policy would be willing to keep if they were built).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "We use these surrogates because they admit efficient rollout algorithms. Less important, they preserve the training objective (1) as an average over sentences. (Our true F 1 metric on a corpus cannot be computed in this way, though it could reasonably be estimated by averaging over minibatches of sentences in (1).)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "Controlled experimental design: Our baseline system is an adaptation of Bodenstab et al. (2011) to learning-to-prune, as described in §3 and §6. Our goal is to determine whether such systems can be improved by LOLS training. We repeat the following design for both reward surrogates (r CP and r DP ) and for both grammars (coarse and fine).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "x We start by training a number of baseline models by sweeping the asymmetric weighting parameter. For the coarse grammar we train 8 such models, and for the fine grammar 12.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "y For each baseline policy, we estimate a value of \u03bb for which that policy is optimal (among baseline policies) according to surrogate reward. 12",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "10 When using rCP, we speed up LOLS by doing \u2264 2n rollouts per sentence of length n. We sample these uniformly without replacement from the T possible rollouts ( \u00a75), and compensate by upweighting the resulting training examples by T /(2n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "11 Considering all nodes in the binarized tree, except for the root, width-1 constituents, and children of unary rules. 12 We estimate \u03bb by first fitting a parametric model yi =",
"cite_spans": [
{
"start": 120,
"end": 122,
"text": "12",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "h(xi) ymax \u2022 sigmoid(a \u2022 log(xi + c) + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "to the baseline runtime-accuracy measurements on dev data (shown in green in Fig. 2 ) by minimizing mean squared error. We then use the fitted curve's slope h′ to estimate each λi = h′(xi), where xi is the runtime of baseline i. The resulting choice of reward function y − λi • x increases along the green arrow in Fig. 2 , and is indeed maximized (subject to y ≤ h(x), and in the region where h is concave) at x = xi. As a sanity check, notice that since λi is a derivative of the function y = h(x), its units are in units of y (accuracy) per unit of x (runtime), as appropriate for use in the expression y − λi • x. Indeed, this procedure will construct the same reward function regardless of the units we use to express x. Our specific parametric model h is a sigmoidal curve, with z For each baseline policy, we run LOLS with the same surrogate reward function (defined by λ) for which that baseline policy was optimal. We initialize LOLS by setting π0 to the baseline policy. Furthermore, we include the baseline policy's weighted training set Q0 in the aggregated dataset at line 13. Fig. 2 shows that LOLS learns to improve on the baseline, as evaluated on development data.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 83,
"text": "Fig. 2",
"ref_id": "FIGREF2"
},
{
"start": 314,
"end": 320,
"text": "Fig. 2",
"ref_id": "FIGREF2"
},
{
"start": 1067,
"end": 1073,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
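The λ-estimation procedure of footnote 12 is straightforward once h is fitted. The sketch below is ours: the parameter values are illustrative (not fitted to the paper's data), and `h_prime` is just the chain-rule derivative of the stated sigmoidal model.

```python
import math

def h(x, a, b, c, ymax):
    """Fitted accuracy-vs-runtime curve: ymax * sigmoid(a*log(x+c)+b)."""
    s = 1.0 / (1.0 + math.exp(-(a * math.log(x + c) + b)))
    return ymax * s

def h_prime(x, a, b, c, ymax):
    """Slope of h; lambda_i = h'(x_i) at baseline i's runtime makes the
    reward y - lambda_i * x maximized (among baselines) at x = x_i."""
    s = 1.0 / (1.0 + math.exp(-(a * math.log(x + c) + b)))
    return ymax * s * (1.0 - s) * a / (x + c)   # chain rule

# Illustrative parameters, chosen so the runtimes below lie in the
# concave region of h (where the construction applies).
a, b, c, ymax = 1.5, 2.0, 0.1, 90.0
lams = [h_prime(x, a, b, c, ymax) for x in (0.5, 1.0, 2.0, 4.0)]
```

As expected in the concave region, the estimated λ values shrink as runtime grows: slower, more accurate baselines are optimal only under a smaller runtime penalty.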
{
"text": "{ But do these surrogate reward improvements also improve our true reward? For each baseline policy, we use dev data to estimate a value of \u03bb for which that policy is optimal according to our true reward function. We use blind test data to compare the baseline policy to its corresponding LOLS policy on this true reward function, testing significance with a paired permutation test. The improvements hold up, as shown in Fig. 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 422,
"end": 428,
"text": "Fig. 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "The rationale behind this design is that a user who actually wishes to maximize accuracy\u2212\u03bb\u2022runtime, for some specific \u03bb, could reasonably start by choosing the best baseline policy for this reward function, and then try to improve that baseline by running LOLS with the same reward function. Our experiments show this procedure works for a range of \u03bb values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "In the real world, a user's true objective might instead be some nonlinear function of runtime and accuracy. For example, when accuracy is \"good enough,\" it may be more important to improve runtime, and vice-versa. LOLS could be used with such a nonlinear reward function as well. In fact, a user does not even have to quantify their global preferences by writing down such a function. Rather, they could select manually among the baseline policies, choosing one with an attractive speed-accuracy tradeoff, and then specify \u03bb to indicate a local direction of desired improvement (like the green arrows in Fig. 2) , modifying this direction periodically as LOLS runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 605,
"end": 612,
"text": "Fig. 2)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental design and results",
"sec_num": "7"
},
{
"text": "As previous work has shown, learning to prune gives us excellent parsers with less than 2% overhead [Footnote 12, continued: accuracy → ymax asymptotically as runtime → ∞. It obtains an excellent fit by placing accuracy and runtime on the log-logit scale—that is, log(xi + c) and logit(yi/ymax) transforms are used to convert our bounded random variables xi and yi to unbounded ones—and then assuming they are linearly related.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "[Fig. 2; axes: runtime (avg constituents built) vs. accuracy (expected binarized recall).] x The green curve shows the performance of the baseline policies. y For each baseline policy, a green arrow points along the gradient of surrogate reward, as defined by the λ that would identify that baseline as optimal. (In case a user wants a different value of λ but is unwilling to search for a better baseline policy outside our set, the green cones around each baseline arrow show the range of λs that would select that baseline from our set.) z The LOLS trajectory is shown as a series of purple points, and the purple arrow points from the baseline policy to the policy selected by LOLS with early stopping (§6). This improves surrogate reward if the purple arrow has a positive inner product with the green arrow. LOLS cannot move exactly in the direction of the green arrow because it is constrained to find points that correspond to actual parsers. Typically, LOLS ends up improving accuracy, either along with runtime or at the expense of runtime.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "for deciding what to prune (i.e., pruning feature extraction and span classification). Even the baseline pruner has access to features unavailable to the grammar, and so it learns to override the grammar, improving an unpruned coarse parser's accuracy from 61.1 to as high as 70.1% F 1 on test data (i.e., beneficial search error). It is also 8.1x faster! 13 LOLS simply does a better job at figuring out where to prune, raising accuracy 2.1 points to 72.2 (while maintaining a 7.4x speedup). Where pruning is more aggressive, 13 We measure runtime as best of 10 runs (recommended by Dunlop (2014) ). All parser timing experiments were performed on a Linux laptop with the following specs: Intel\u00ae Core\u2122 i5-2540M 2.60GHz CPU, 8GB memory, 32K/256K/3072K L1/L2/L3 cache. Code is written in the Cython language.",
"cite_spans": [
{
"start": 527,
"end": 529,
"text": "13",
"ref_id": null
},
{
"start": 584,
"end": 597,
"text": "Dunlop (2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "LOLS has even more impact on accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "Even on the fine grammar, where there is less room to improve accuracy, the most accurate LOLS system improves an unpruned parser by +0.16% F 1 with a 8.6x speedup. For comparison, the most accurate baseline drops \u22120.03% F 1 with a 9.7x speedup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "With the fine grammar, we do not see much improvement over the baseline in the accuracy > 85 regions. This is because the supervision specified by asymmetric weighting is similar to what LOLS surmises via rollouts. However, in lower-accuracy regions we see that LOLS can significantly improve reward over its baseline policy. This is because the baseline supervision does not teach which plausible [Fig. 3 caption: In no case was there a statistically significant decrease. In 4 cases (marked with '−') the policy chosen by early stopping was the initial baseline policy. We also report words per second ×10^3 (kw/s).]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "constituents are \"safest\" to prune, nor can it learn strategies such as \"skip all long sentences.\" We discuss why LOLS does not help as much in the high accuracy regions further in \u00a77.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "In a few cases in Fig. 2 , LOLS finds no policy that improves surrogate reward on dev data. In these cases, surrogate reward does improve slightly on training data (not shown), but early stopping just keeps the initial (baseline) policy since it is just as good on dev data. Adding a bit of additional random exploration might help break out of this initialization.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 24,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "Interestingly, the r DP LOLS policies find higheraccuracy policies than the corresponding r CP policies, despite a greater mismatch in surrogate accuracy definitions. We suspect that r DP 's approach of trying to improve expected accuracy may provide a useful regularizing effect, which smooths out the reward signal and provides a useful bias ( \u00a75.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "The most pronounced qualitative difference due to LOLS training is substantially lower rates of parse failure in the mid-to high\u03bb-range on both grammars (not shown). Since LOLS does end-to-end training, it can advise the learner that a certain pruning decision catastrophically results in no parse being found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7.1"
},
{
"text": "Part of the contribution of this paper is faster algorithms for performing LOLS rollouts during training ( \u00a75). Compared to the naive strategy of running the parser from scratch T times, r CP achieves speedups of 4.9-6.6x on the coarse grammar and 1.9-2.4x on the fine grammar. r DP is even faster, 10.4-11.9x on coarse and 10.5-13.8x on fine. Most of the speedup comes from longer sentences, which take up most of the runtime for all methods. Our new algorithms enable us to train on fairly long sentences (\u2264 40). We note that our implementations of r CP and r DP are not as highly optimized as our test-time parser, so there may be room for improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training speed and convergence",
"sec_num": "7.2"
},
{
"text": "Orthogonal to the cost per rollout is the number of training iterations. LOLS may take many steps to converge if trajectories are long (i.e., T is large) because each iteration of LOLS training attempts to improve the current policy by a single action. In our setting, T is quite large (discussed extensively in \u00a75), but we are able to circumvent slow convergence by initializing the policy (via the baseline method). This means that LOLS can focus on fine-tuning a policy which is already quite good. In fact, in 4 cases, LOLS did not improve from its initial policy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training speed and convergence",
"sec_num": "7.2"
},
{
"text": "We find that when \u03bb is large-the cases where we get meaningful improvements because the initial policy is far from locally optimal-LOLS steadily and smoothly improves the surrogate reward on both training and development data. Because these are fast parsers, LOLS was able to run on the order of 10 (fine grammar) or 100 (coarse grammar) epochs within our 6-day limit; usually it was still improving when we terminated it. By contrast, for the slower and more accurate small-\u03bb parsers (which completed fewer training epochs), LOLS still improves surrogate reward on training data, but without systematically improving on development data-often the reward on development fluctuates, and early stopping simply picks the best of this small set of \"random\" variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training speed and convergence",
"sec_num": "7.2"
},
{
"text": "In \u00a73, we argued that LOLS gives a more appropriate training signal for pruning than the baseline method of consulting the gold parse, because it uses rollouts to measure the full effect of each pruning decision in the context of the other decisions made by the policy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding the LOLS training signal",
"sec_num": "7.3"
},
{
"text": "To better understand the results of our previous experiments, we analyze how often a rollout does determine that the baseline supervision for a span is suboptimal, and how suboptimal it is in those cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding the LOLS training signal",
"sec_num": "7.3"
},
{
"text": "We specifically consider LOLS rollouts that evaluate the r CP surrogate (because r DP is a cruder approximation to true reward). These rollouts Q i tell us what actions LOLS is trying to improve in its current policy \u03c0 i for a given \u03bb, although there is no guarantee that the learner in \u00a74 will succeed at classifying Q i correctly (due to limited features, regularization, and the effects of dataset aggregation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding the LOLS training signal",
"sec_num": "7.3"
},
{
"text": "We define the regret of the baseline oracle: at a state s, regret(s) is the amount by which the rollout reward of the best action exceeds that of the action prescribed by the baseline oracle. Note that regret(s) ≥ 0 for all s, and let diff(s) be the event that regret(s) > 0 strictly. We are interested in analyzing the expected regret over all gold and non-gold spans, which we break down as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding the LOLS training signal",
"sec_num": "7.3"
},
{
"text": "E[regret] = p(diff) • ( p(gold | diff) • E[regret | gold, diff] + p(¬gold | diff) • E[regret | ¬gold, diff] ) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding the LOLS training signal",
"sec_num": "7.3"
},
{
"text": "where expectations are taken over s \u223c ROLL-IN(\u03c0) .",
"cite_spans": [
{
"start": 38,
"end": 48,
"text": "ROLL-IN(\u03c0)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding the LOLS training signal",
"sec_num": "7.3"
},
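The decomposition (4) is an exercise in conditioning: regret is zero outside the diff event, so the unconditional expectation factors through p(diff) and then splits over gold vs. non-gold spans. A quick numerical check (our own made-up sample of (is_gold, regret) pairs, standing in for spans drawn from roll-ins):

```python
# Hypothetical sampled spans: (is_gold, regret) pairs.
spans = [(True, 0.0), (True, 0.2), (False, 0.0),
         (False, 0.0), (False, 1.5), (False, 0.0)]

n = len(spans)
diff = [(g, r) for g, r in spans if r > 0]          # regret(s) > 0 strictly

p_diff = len(diff) / n
n_gold_diff = sum(g for g, _ in diff)
p_gold_given_diff = n_gold_diff / len(diff)
e_regret_gold = sum(r for g, r in diff if g) / max(1, n_gold_diff)
e_regret_nongold = (sum(r for g, r in diff if not g)
                    / max(1, len(diff) - n_gold_diff))

# Left side of (4): plain average regret over all spans.
lhs = sum(r for _, r in spans) / n
# Right side of (4): decomposition through diff and gold/non-gold.
rhs = p_diff * (p_gold_given_diff * e_regret_gold
                + (1 - p_gold_given_diff) * e_regret_nongold)
```

Both sides come out equal (0.2833… for this sample), confirming that (4) is an identity rather than an approximation.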
{
"text": "Empirical analysis of regret: To show where the benefit of the LOLS oracle comes from, Fig. 4 graphs the various quantities that enter into the definition (4) of baseline regret, for different π, λ, and grammar. The LOLS oracle evolves along with the policy π, since it identifies the best action given π. We thus evaluate the baseline oracle against two LOLS oracles: the one used at the start of LOLS training (derived from the initial policy π1 that was trained on baseline supervision), and the one obtained at the end (derived from the LOLS-trained policy π* selected by early stopping). These comparisons are shown by solid and dashed lines respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 93,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Understanding the LOLS training signal",
"sec_num": "7.3"
},
{
"text": "Class imbalance (black curves): In all graphs, the aggregate curves primarily reflect the non-gold spans, since only 8% of spans are gold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Understanding the LOLS training signal",
"sec_num": "7.3"
},
{
"text": "The top graphs show that a substantial fraction of the gold spans should be pruned (whereas the baseline tries to keep them all), although the middle row shows that the benefit of pruning them is small. In most of these cases, pruning a gold span improves speed but leaves accuracy unchanged-because that gold span was missed anyway by the highest-scoring parse. Such cases become both more frequent and more beneficial as \u03bb increases and we prune more heavily. In a minority of cases, however, pruning a gold span also improves accuracy (through beneficial search error).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold spans (gold curves):",
"sec_num": null
},
{
"text": "Non-gold spans (purple curves): Conversely, the top graphs show that a few non-gold spans should be kept (whereas the baseline tries to prune them all), and the middle row shows a large benefit from keeping them. They are needed to recover from catastrophic errors and get a mostly-correct parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold spans (gold curves):",
"sec_num": null
},
{
"text": "Coarse vs. fine (left vs. right): The two grammars differ mainly for small \u03bb, and this difference comes especially from the top row. With a fine grammar and small \u03bb, the baseline parses are more accurate, so LOLS has less room for improvement: fewer gold spans go unused, and fewer non-gold spans are needed for recovery.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold spans (gold curves):",
"sec_num": null
},
{
"text": "Effect of \u03bb: Aggressive pruning (large \u03bb) reduces accuracy, so its effect on the top row is similar to that of using a coarse grammar. Aggressive pruning also has an effect on the middle row: there is more benefit to be derived from pruning unused gold spans (surprisingly), and especially from keeping those non-gold spans that are helpful (presumably they enable recovery from more severe parse errors). These effects are considerably sharper with r DP reward (not shown here), which more smoothly evaluates the entire weighted pruned parse forest rather than trying to coordinate actions to ensure a good single 1-best tree; the baseline oracle is excellent at choosing the action that gets the better forest when the forest is mostly present (small \u03bb) but not when it is mostly pruned (large \u03bb).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold spans (gold curves):",
"sec_num": null
},
{
"text": "Effect on retraining the policy: The black lines in the bottom graphs show the overall regret (on training data) if we were to perfectly follow the baseline oracle rather than the LOLS oracle. In practice, retraining the policy to match the oracle will not match it perfectly in either case. Thus the baseline method has a further disadvantage: when it trains a policy, its training objective weights all gold or all non-gold examples equally, whereas LOLS invests greater effort in matching the oracle on those states where doing so would give greater downstream reward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold spans (gold curves):",
"sec_num": null
},
{
"text": "Our experiments have focused on using LOLS to improve a reasonable baseline. Fig. 5 shows that our resulting parser fits reasonably among state-of-the-art constituency parsers trained and tested on the Penn Treebank. These parsers include a variety of techniques that improve speed or accuracy. Many are quite orthogonal to our work here-e.g., the SpMV method (which is necessary for Bodenstab's parser to beat ours) is a set of cache-efficient optimizations (Dunlop, 2014) that could be added to our parser (just as it was added to Bodenstab's), while Hall et al. (2014) and Fern\u00e1ndez-Gonz\u00e1lez and Martins (2015) replace the grammar with faster scoring models that have more conditional independence. Overall, other fast parsers could also be trained using LOLS, so that they quickly find parses that are accurate, or at least helpful to the accuracy of some downstream task. Pruning methods 14 can use classifiers not only to select spans but also to prune at other granularities (Roark and Hollingshead, 2008; Bodenstab et al., 2011) . Prioritization methods do not prune substructures, but instead delay their processing until they are needed-if ever (Caraballo and Charniak, 1998) .",
"cite_spans": [
{
"start": 459,
"end": 473,
"text": "(Dunlop, 2014)",
"ref_id": "BIBREF15"
},
{
"start": 553,
"end": 571,
"text": "Hall et al. (2014)",
"ref_id": "BIBREF23"
},
{
"start": 576,
"end": 613,
"text": "Fern\u00e1ndez-Gonz\u00e1lez and Martins (2015)",
"ref_id": "BIBREF20"
},
{
"start": 982,
"end": 1012,
"text": "(Roark and Hollingshead, 2008;",
"ref_id": "BIBREF40"
},
{
"start": 1013,
"end": 1036,
"text": "Bodenstab et al., 2011)",
"ref_id": "BIBREF3"
},
{
"start": 1155,
"end": 1185,
"text": "(Caraballo and Charniak, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 77,
"end": 83,
"text": "Fig. 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "This paper focuses on learning pruning heuristics that have trainable parameters. In the same way, Stoyanov and Eisner (2012) learn to turn off unneeded factors in a graphical model, and Jiang et al. (2012) and Berant and Liang (2015) train prioritization heuristics (using policy gradient). In both of those 2012 papers, we explicitly sought to maximize accuracy \u2212 \u03bb \u2022 runtime as we do here. Some previous \"coarse-to-fine\" work does not optimize heuris-System F1 words/sec Dyer et al. (2016a) ; Dyer et al. (2016b) 93.3 - Zhu et al. (2013) 90.4 1290 Fern\u00e1ndez-Gonz\u00e1lez and Martins (2015) tics directly but rather derives heuristics for pruning (Charniak et al., 2006; Petrov and Klein, 2007; Weiss and Taskar, 2010; Rush and Petrov, 2012) or prioritization (Klein and Manning, 2003; Pauls and Klein, 2009 ) from a coarser version of the model. Combining these automatic methods with LOLS would require first enriching their heuristics with trainable parameters, or parameterizing the coarse-to-fine hierarchy itself as in the \"feature pruning\" work of He et al. (2013) and Strubell et al. (2015) . Dynamic features are ones that depend on previous actions. In our setting, a policy could in principle benefit from considering the full state of the chart at Alg. 1 line 14. While coarse-to-fine methods implicitly use certain dynamic features, training with dynamic features is a fairly new goal that is challenging to treat efficiently. It has usually been treated with some form of simple imitation learning, using a heuristic training signal much as in our baseline (Jiang, 2014; He et al., 2013) . LOLS would be a more principled way to train such features, but for efficiency, our present paper restricts to static features that only access the state via \u03c0(w, i, k). This permits our fast CP and DP rollout algorithms. It also reduces the time and space cost of dataset aggregation. 
15 LOLS attempts to do end-to-end training of a sequential decision-making system, without falling back on black-box optimization tools (Och, 2003; Chung and Galley, 2012) that ignore the sequential structure. In NLP, sequential decisions are more commonly trained with step-by-step supervision 15 LOLS repeatedly evaluates actions given (w, i, k). We consolidate the resulting training examples by summing their reward vectors r, so the aggregated dataset does not grow over time. (Kuhlmann et al., 2011) , using methods such as local classification (Punyakanok and Roth, 2001) or beam search with early update (Collins and Roark, 2004) . LOLS tackles the harder setting where the only training signal is a joint assessment of the entire sequence of actions. It is an alternative to policy gradient, which does not scale well to our long trajectories because of high variance in the estimated gradient and because random exploration around (even good) pruning policies most often results in no parse at all. LOLS uses controlled comparisons, resulting in more precise \"credit assignment\" and tighter exploration.",
"cite_spans": [
{
"start": 99,
"end": 125,
"text": "Stoyanov and Eisner (2012)",
"ref_id": "BIBREF46"
},
{
"start": 187,
"end": 206,
"text": "Jiang et al. (2012)",
"ref_id": "BIBREF26"
},
{
"start": 211,
"end": 234,
"text": "Berant and Liang (2015)",
"ref_id": "BIBREF2"
},
{
"start": 474,
"end": 493,
"text": "Dyer et al. (2016a)",
"ref_id": "BIBREF16"
},
{
"start": 496,
"end": 515,
"text": "Dyer et al. (2016b)",
"ref_id": "BIBREF17"
},
{
"start": 523,
"end": 540,
"text": "Zhu et al. (2013)",
"ref_id": "BIBREF51"
},
{
"start": 551,
"end": 588,
"text": "Fern\u00e1ndez-Gonz\u00e1lez and Martins (2015)",
"ref_id": "BIBREF20"
},
{
"start": 645,
"end": 668,
"text": "(Charniak et al., 2006;",
"ref_id": "BIBREF8"
},
{
"start": 669,
"end": 692,
"text": "Petrov and Klein, 2007;",
"ref_id": "BIBREF36"
},
{
"start": 693,
"end": 716,
"text": "Weiss and Taskar, 2010;",
"ref_id": "BIBREF49"
},
{
"start": 717,
"end": 739,
"text": "Rush and Petrov, 2012)",
"ref_id": "BIBREF43"
},
{
"start": 758,
"end": 783,
"text": "(Klein and Manning, 2003;",
"ref_id": "BIBREF29"
},
{
"start": 784,
"end": 805,
"text": "Pauls and Klein, 2009",
"ref_id": "BIBREF35"
},
{
"start": 1053,
"end": 1069,
"text": "He et al. (2013)",
"ref_id": "BIBREF24"
},
{
"start": 1074,
"end": 1096,
"text": "Strubell et al. (2015)",
"ref_id": "BIBREF47"
},
{
"start": 1569,
"end": 1582,
"text": "(Jiang, 2014;",
"ref_id": "BIBREF27"
},
{
"start": 1583,
"end": 1599,
"text": "He et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 1888,
"end": 1890,
"text": "15",
"ref_id": null
},
{
"start": 2024,
"end": 2035,
"text": "(Och, 2003;",
"ref_id": "BIBREF34"
},
{
"start": 2036,
"end": 2059,
"text": "Chung and Galley, 2012)",
"ref_id": "BIBREF10"
},
{
"start": 2183,
"end": 2185,
"text": "15",
"ref_id": null
},
{
"start": 2370,
"end": 2393,
"text": "(Kuhlmann et al., 2011)",
"ref_id": "BIBREF30"
},
{
"start": 2439,
"end": 2466,
"text": "(Punyakanok and Roth, 2001)",
"ref_id": "BIBREF39"
},
{
"start": 2500,
"end": 2525,
"text": "(Collins and Roark, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "We would be remiss not to note that current transition-based parsers-for constituency parsing (Zhu et al., 2013; Crabb\u00e9, 2015) as well as dependency parsing (Chen and Manning, 2014) -are both incredibly fast and surprisingly accurate. This may appear to undermine the motivation for our work, or at least for its application to fast parsing. 16 However, transition-based parsers do not produce marginal probabilities of substructures, which can be useful features for downstream tasks. Indeed, the transitionbased approach is essentially greedy and so it may fail on tasks with more ambiguity than parsing. Current transition-based parsers also require step-by-step supervision, whereas our method can also be used to train in the presence of incomplete supervision, latent structure, or indirect feedback. Our method could also be used immediately to speed up dynamic programming methods for MT, synchronous parsing, parsing with non-context-free grammar formalisms, and other structured prediction problems for which transition systems have not (yet) been designed.",
"cite_spans": [
{
"start": 94,
"end": 112,
"text": "(Zhu et al., 2013;",
"ref_id": "BIBREF51"
},
{
"start": 113,
"end": 126,
"text": "Crabb\u00e9, 2015)",
"ref_id": "BIBREF12"
},
{
"start": 157,
"end": 181,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF9"
},
{
"start": 342,
"end": 344,
"text": "16",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "We presented an approach to learning pruning policies that optimizes end-to-end performance on a userspecified speed-accuracy tradeoff. We developed two novel algorithms for efficiently measuring how varying policy actions affects reward. In the case of parsing, given a performance criterion and a good baseline policy for that criterion, the learner consistently manages to find a higher-reward policy. We hope this work inspires a new generation of fast and accurate structured prediction models with tunable runtimes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "Transactions of the Association for Computational Linguistics, vol. 5, pp. 263-278, 2017. Action Editor: Marco Kuhlmann.Submission batch: 5/2016; Revision batch: 9/2016; Published 8/2017. c 2017 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Parsers may break ties randomly or use Monte Carlo methods. The reward function r can be nondeterministic when it involves wallclock time or human judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our experiments do not make use of the current state of the chart. We discuss this decision in \u00a78.4 Alg. 2 is simpler than inChang et al. (2015b) because it omits oracle rollouts, which we do not use in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our implementation uses a slightly faster method which accumulates an \"undo list\" of changes that it makes to the chart to quickly revert the modified chart to the original roll-in state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In theory, we could anneal from expected to 1-best recall(Smith and Eisner, 2006). We experimented extensively with annealing but found it to be too numerically unstable for our purposes, even with high-precision arithmetic libraries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Code for experiments is available at http://github. com/timvieira/learning-to-prune.8 Data train/dev/test split (by section) 2-21 / 22 / 23. Normalization operations: Remove function tags, traces, spurious unary edges (X \u2192 X), and empty subtrees left by other operations. Relabel ADVP and PRT|ADVP tags to PRT.9 On the 7 th day, LOLS rested and performance was good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We focus here on parsing, but pruning is generally useful in structured prediction. E.g.,Xu et al. (2013) train a classifier to prune (latent) alignments in a machine translation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Of course, LOLS can also train transition-based parsers(Chang et al., 2015a), or even vary their beam width dynamically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based in part on research sponsored by the National Science Foundation under Grant No. 0964681 and DARPA under agreement number FA8750-13-2-0017 (DEFT program). We'd like to thank Nathaniel Wesley Filardo, Adam Teichert, Matt Gormley and Hal Daum\u00e9 III for helpful discussions. Finally, we thank TACL action editor Marco Kuhlmann and the anonymous reviewers and copy editor for suggestions that improved this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Self-adjusting computation with Delta ML",
"authors": [
{
"first": "A",
"middle": [],
"last": "Umut",
"suffix": ""
},
{
"first": "Ruy",
"middle": [],
"last": "Acar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ley-Wild",
"suffix": ""
}
],
"year": 2008,
"venue": "Advanced Functional Programming",
"volume": "",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Umut A. Acar and Ruy Ley-Wild. 2008. Self-adjusting computation with Delta ML. In Pieter Koopman and Doaitse Swierstra, editors, Advanced Functional Pro- gramming, pages 1-38.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Imitation learning of agenda-based semantic parsers",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "545--558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant and Percy Liang. 2015. Imitation learn- ing of agenda-based semantic parsers. Transactions of the Association for Computational Linguistics, 3:545- 558.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Beam-width prediction for efficient CYK parsing",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Bodenstab",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Dunlop",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Bodenstab, Aaron Dunlop, Keith Hall, and Brian Roark. 2011. Beam-width prediction for efficient CYK parsing. In Proceedings of the Conference of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Prioritization and Pruning: Efficient Inference with Weighted Context-Free Grammars",
"authors": [
{
"first": "Nathan",
"middle": [
"Matthew"
],
"last": "Bodenstab",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Matthew Bodenstab. 2012. Prioritization and Pruning: Efficient Inference with Weighted Context- Free Grammars. Ph.D. thesis, Oregon Health and Sci- ence University.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "New figures of merit for best-first probabilistic chart parsing",
"authors": [
{
"first": "Sharon",
"middle": [
"A"
],
"last": "Caraballo",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "2",
"pages": "275--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon A. Caraballo and Eugene Charniak. 1998. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24(2):275-298.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning to search for dependencies",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
}
],
"year": 2015,
"venue": "Computing Research Repository",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.05615"
]
},
"num": null,
"urls": [],
"raw_text": "Kai-Wei Chang, He He, Hal Daum\u00e9 III, and John Lang- ford. 2015a. Learning to search for dependencies. Computing Research Repository, arXiv:1503.05615.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning to search better than your teacher",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Akshay",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Alekh",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daum\u00e9 III, and John Langford. 2015b. Learning to search better than your teacher. In Proceedings of the International Conference on Machine Learning.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilevel coarse-to-fine PCFG parsing",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Micha",
"middle": [],
"last": "Elsner",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Austerweil",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Haxton",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Shrivaths",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Pozar",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak, Mark Johnson, Micha Elsner, Joseph Austerweil, David Ellis, Isaac Haxton, Catherine Hill, R. Shrivaths, Jeremy Moore, Michael Pozar, and Theresa Vu. 2006. Multilevel coarse-to-fine PCFG parsing. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technology.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Direct error rate minimization for statistical machine translation",
"authors": [
{
"first": "Tagyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tagyoung Chung and Michel Galley. 2012. Direct error rate minimization for statistical machine translation. In Proceedings of the Workshop on Statistical Machine Translation.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Incremental parsing with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multilingual discriminative lexicalized phrase structure parsing",
"authors": [
{
"first": "",
"middle": [],
"last": "Benoit Crabb\u00e9",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benoit Crabb\u00e9. 2015. Multilingual discriminative lexi- calized phrase structure parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Practical Structured Learning Techniques for Natural Language Processing",
"authors": [
{
"first": "Iii",
"middle": [],
"last": "Harold Charles Daum\u00e9",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harold Charles Daum\u00e9 III. 2006. Practical Structured Learning Techniques for Natural Language Processing. Ph.D. thesis, University of Southern California.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reducing the grammar constant: An analysis of CYK parsing efficiency",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Dunlop",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Bodenstab",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Dunlop, Nathan Bodenstab, and Brian Roark. 2010. Reducing the grammar constant: An analysis of CYK parsing efficiency. Technical report, CSLU-2010-02, OHSU.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient Latent-Variable Grammars: Learning and Inference",
"authors": [
{
"first": "Aaron Joseph",
"middle": [],
"last": "Dunlop",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Joseph Dunlop. 2014. Efficient Latent-Variable Grammars: Learning and Inference. Ph.D. thesis, Ore- gon Health and Science University.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recurrent neural network grammars",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Computing Research Repository",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016a. Recurrent neural net- work grammars. Computing Research Repository, arxiv:1602.07776.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Recurrent neural network grammars",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Adhiguna",
"middle": [],
"last": "Kuncoro",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016b. Recurrent neural network grammars. In Proceedings of the Conference of the North American Chapter of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Program transformations for optimization of parsing algorithms and other weighted logic programs",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Conference on Formal Grammar",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and John Blatz. 2007. Program transforma- tions for optimization of parsing algorithms and other weighted logic programs. In Proceedings of the Con- ference on Formal Grammar. CSLI Publications.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Inside-outside and forward-backward algorithms are just backprop",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the EMNLP Workshop on Structured Prediction for NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 2016. Inside-outside and forward-backward algorithms are just backprop. In Proceedings of the EMNLP Workshop on Structured Prediction for NLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Parsing as reduction",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "-Gonz\u00e1lez",
"middle": [],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Andr\u00e9 F. T. Martins. 2015. Parsing as reduction. In Proceedings of the Conference of the Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A flexible solver for finite arithmetic circuits",
"authors": [
{
"first": "Nathaniel",
"middle": [],
"last": "Wesley Filardo",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2012,
"venue": "Technical Communications of the International Conference on Logic Programming",
"volume": "17",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel Wesley Filardo and Jason Eisner. 2012. A flexible solver for finite arithmetic circuits. In Techni- cal Communications of the International Conference on Logic Programming, volume 17 of Leibniz Interna- tional Proceedings in Informatics (LIPIcs).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Efficient, feature-based, conditional random field parsing",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kleeman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of the Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Less grammar, more features",
"authors": [
{
"first": "David",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Hall, Greg Durrett, and Dan Klein. 2014. Less grammar, more features. In Proceedings of the Confer- ence of the Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dynamic feature selection for dependency parsing",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He He, Hal Daum\u00e9 III, and Jason Eisner. 2013. Dy- namic feature selection for dependency parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Advanced dynamic programming in semiring and hypergraph frameworks",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "Material accompanying tutorials at COLING'08 and NAACL'09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang. 2008. Advanced dynamic programming in semiring and hypergraph frameworks. Material accom- panying tutorials at COLING'08 and NAACL'09.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learned prioritization for trading off accuracy and speed",
"authors": [
{
"first": "Jiarong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Teichert",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiarong Jiang, Adam Teichert, Hal Daum\u00e9 III, and Jason Eisner. 2012. Learned prioritization for trading off accuracy and speed. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Efficient Non-deterministic Search in Structured Prediction: A Case Study in Syntactic Parsing",
"authors": [
{
"first": "Jiarong",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiarong Jiang. 2014. Efficient Non-deterministic Search in Structured Prediction: A Case Study in Syntactic Parsing. Ph.D. thesis, University of Maryland.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Parsing and hypergraphs",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "International Workshop on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2001. Parsing and hypergraphs. In International Workshop on Parsing Technologies.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A* parsing: Fast exact Viterbi parse selection",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. A* pars- ing: Fast exact Viterbi parse selection. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technology.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Dynamic programming algorithms for transition-based dependency parsers",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Kuhlmann, Carlos G\u00f3mez-Rodr\u00edguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Proceedings of the Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "First- and second-order expectation semirings with applications to minimum-risk training on translation forests",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhifei Li and Jason Eisner. 2009. First-and second-order expectation semirings with applications to minimum- risk training on translation forests. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Building a large annotated corpus of English: The Penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Lin- guistics, 19(2).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Probabilistic CFG with latent annotations",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Matsuzaki",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In Proceedings of the Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Hierarchical search for parsing",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Pauls and Dan Klein. 2009. Hierarchical search for parsing. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technology.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Improved inference for unlexicalized parsing",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of the Confer- ence of the North American Chapter of the Association for Computational Linguistics and Human Language Technology.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and inter- pretable tree annotation. In Proceedings of the Confer- ence of the Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Coarseto-fine syntactic machine translation using language projections",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Aria Haghighi, and Dan Klein. 2008. Coarse- to-fine syntactic machine translation using language projections. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The use of classifiers in sequential inference",
"authors": [
{
"first": "Vasin",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasin Punyakanok and Dan Roth. 2001. The use of clas- sifiers in sequential inference. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Classifying chart cells for quadratic complexity context-free inference",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Kristy",
"middle": [],
"last": "Hollingshead",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Roark and Kristy Hollingshead. 2008. Classifying chart cells for quadratic complexity context-free infer- ence. In Proceedings of the International Conference on Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Stability conditions for online learnability",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "J",
"middle": [
"Andrew"
],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2011,
"venue": "Computing Research Repository",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1108.3154"
]
},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Ross and J. Andrew Bagnell. 2011. Stability conditions for online learnability. Computing Research Repository, arXiv:1108.3154.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A reduction of imitation learning and structured prediction to no-regret online learning",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Geoff",
"middle": [
"J"
],
"last": "Gordon",
"suffix": ""
},
{
"first": "J",
"middle": [
"Andrew"
],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Ross, Geoff J. Gordon, and J. Andrew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Workshop on Artificial Intelligence and Statistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Vine pruning for efficient multi-pass dependency parsing",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush and Slav Petrov. 2012. Vine prun- ing for efficient multi-pass dependency parsing. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguis- tics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Evalb bracket scoring program",
"authors": [
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoshi Sekine and Michael Collins. 1997. Evalb bracket scoring program. http://nlp.cs.nyu.edu/evalb.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Minimum risk annealing for training log-linear models",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In Proceed- ings of the International Conference on Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Fast and accurate prediction via evidence-specific MRF structure",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2012,
"venue": "ICML Workshop on Inferning: Interactions between Inference and Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veselin Stoyanov and Jason Eisner. 2012. Fast and ac- curate prediction via evidence-specific MRF structure. In ICML Workshop on Inferning: Interactions between Inference and Learning, Edinburgh, June. 6 pages.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Learning dynamic feature selection for fast sequential prediction",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Silverstein",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Luke Vilnis, Kate Silverstein, and An- drew McCallum. 2015. Learning dynamic feature selection for fast sequential prediction. In Proceedings of the Conference of the Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Feature hashing for large scale multitask learning",
"authors": [
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Anirban",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Attenberg",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. 2009. Feature hash- ing for large scale multitask learning. In Proceedings of the International Conference on Machine Learning.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Structured prediction cascades",
"authors": [
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Workshop on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Weiss and Ben Taskar. 2010. Structured prediction cascades. In Proceedings of the Workshop on Artificial Intelligence and Statistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Learning to prune: Context-sensitive pruning for syntactic MT",
"authors": [
{
"first": "Wenduan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenduan Xu, Yue Zhang, Philip Williams, and Philipp Koehn. 2013. Learning to prune: Context-sensitive pruning for syntactic MT. In Proceedings of the Confer- ence of the Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Fast and accurate shift-reduce constituent parsing",
"authors": [
{
"first": "Muhua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jingbo",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shift-reduce constituent parsing. In Proceedings of the Conference of the Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "value(v) := v ; witness(v) := LEAF 5: Q := \u2205; Q.push(v) Work queue (\"agenda\") 6: while Q \u2260 \u2205 : Propagate until convergence 7: u := Q.pop() for e \u2208 Out(u) : Propagate new value of u 11: \u1e61 := e.head; s := \u2a02_{u' \u2208 e.Tail} value(u') 12: if s > value(\u1e61) : Increase value 13: value(\u1e61) := s; witness(\u1e61) := e 14: Q.push(\u1e61) 15: else if witness(\u1e61) = e and s < value(\u1e61) : 16: witness(\u1e61) := NULL Value may decrease, 17: Q.push(\u1e61) so recompute upon pop 18: procedure RECOMPUTE(\u1e61) 19: for e \u2208 In(\u1e61) : Max over incoming hyperedges 20: s := \u2a02_{u \u2208 e.Tail} value(u) 21: if s > value(\u1e61) : 22: value(\u1e61) := s; witness(\u1e61) := e",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Depiction of LOLS pushing out the frontier of surrogate objectives, r CP (left) and r DP (right), on dev data with coarse (top) and fine (bottom) grammars. Green elements are associated with the baseline and purple elements with LOLS.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Test set results on coarse (top) and fine (bottom) grammars. Each curve or column represents a different training regimen. Accuracy is measured in F 1 percentage; runtime is measured by millions of hyperedges built per sentence. Here, the green arrows point in the direction of true reward. Dashed lines connect each green baseline point to the two LOLS-improved points. Starred points and bold values indicate a significant improvement over the baseline reward (paired permutation test, p < 0.05).",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "best(s) := argmax_a ROLLOUT(\u03c0, s, a) and regret(s) := ROLLOUT(\u03c0, s, best(s)) \u2212 ROLLOUT(\u03c0, s, gold(s)).",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "Comparison among fast and accurate parsers. Runtimes are computed on different machines and parsers are implemented in different programming languages, so runtime is not a controlled comparison.",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Comparison of the LOLS and baseline training signals based on the regret decomposition in Eq. (4) as we vary \u03c0, \u03bb, and grammar. Solid lines show where the baseline oracle is suboptimal on its own system \u03c0 1 and dashed lines show where it is suboptimal on the LOLS-improved system \u03c0 * . Each plot shows an overall quantity in black as well as that quantity broken down by gold and non-gold spans. Top: Fraction of states in which oracles differ. Middle: Expected regret per state in which oracles differ. Bottom: Expected regret per state. See \u00a77.3 for discussion.",
"num": null
}
}
}
}