{
"paper_id": "P16-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:58:20.101868Z"
},
"title": "Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing",
"authors": [
{
"first": "James",
"middle": [],
"last": "Goodman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University College London",
"location": {}
},
"email": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {}
},
"email": "a.vlachos@sheffield.ac.uk"
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University College London",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Semantic parsers map natural language statements into meaning representations, and must abstract over syntactic phenomena, resolve anaphora, and identify word senses to eliminate ambiguous interpretations. Abstract meaning representation (AMR) is a recent example of one such semantic formalism which, similar to a dependency parse, utilizes a graph to represent relationships between concepts (Banarescu et al., 2013). As with dependency parsing, transition-based approaches are a common approach to this problem. However, when trained in the traditional manner these systems are susceptible to the accumulation of errors when they find undesirable states during greedy decoding. Imitation learning algorithms have been shown to help these systems recover from such errors. To effectively use these methods for AMR parsing we find it highly beneficial to introduce two novel extensions: noise reduction and targeted exploration. The former mitigates the noise in the feature representation, a result of the complexity of the task. The latter targets the exploration steps of imitation learning towards areas which are likely to provide the most information in the context of a large action-space. We achieve state-of-the-art results, and improve upon standard transition-based parsing by 4.7 F 1 points.",
"pdf_parse": {
"paper_id": "P16-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "Semantic parsers map natural language statements into meaning representations, and must abstract over syntactic phenomena, resolve anaphora, and identify word senses to eliminate ambiguous interpretations. Abstract meaning representation (AMR) is a recent example of one such semantic formalism which, similar to a dependency parse, utilizes a graph to represent relationships between concepts (Banarescu et al., 2013). As with dependency parsing, transition-based approaches are a common approach to this problem. However, when trained in the traditional manner these systems are susceptible to the accumulation of errors when they find undesirable states during greedy decoding. Imitation learning algorithms have been shown to help these systems recover from such errors. To effectively use these methods for AMR parsing we find it highly beneficial to introduce two novel extensions: noise reduction and targeted exploration. The former mitigates the noise in the feature representation, a result of the complexity of the task. The latter targets the exploration steps of imitation learning towards areas which are likely to provide the most information in the context of a large action-space. We achieve state-of-the-art results, and improve upon standard transition-based parsing by 4.7 F 1 points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Meaning representation languages and systems have been devised for specific domains, such as ATIS for air-travel bookings (Dahl et al., 1994) and database queries (Zelle and Mooney, 1996; Figure 1: Dependency (left) and AMR graph (right) for: \"The center will bolster NATO's defenses against cyber-attacks.\" Liang et al., 2013) . Such machine-interpretable representations enable many applications relying on natural language understanding. The ambition of Abstract Meaning Representation (AMR) is that it is domain-independent and useful in a variety of applications (Banarescu et al., 2013) . The first AMR parser by Flanigan et al. (2014) used graph-based inference to find a highestscoring maximum spanning connected acyclic graph. Later work by Wang et al. (2015b) was inspired by the similarity between the dependency parse of a sentence and its semantic AMR graph ( Figure 1 ). Wang et al. (2015b) start from the dependency parse and learn a transition-based parser that converts it incrementally into an AMR graph using greedy decoding. An advantage of this approach is that the initial stage of dependency parsing is well-studied and trained using larger corpora than those for which AMR annotations exist.",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "(Dahl et al., 1994)",
"ref_id": "BIBREF8"
},
{
"start": 163,
"end": 187,
"text": "(Zelle and Mooney, 1996;",
"ref_id": "BIBREF34"
},
{
"start": 188,
"end": 188,
"text": "",
"ref_id": null
},
{
"start": 309,
"end": 328,
"text": "Liang et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 569,
"end": 593,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 620,
"end": 642,
"text": "Flanigan et al. (2014)",
"ref_id": "BIBREF10"
},
{
"start": 751,
"end": 770,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
},
{
"start": 886,
"end": 905,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 874,
"end": 882,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Greedy decoding, where the parser builds the parse while maintaining only the best hypothesis at each step, has a well-documented disadvantage: error propagation (McDonald and Nivre, 2007) . When the parser encounters states during parsing that are unlike those found during training, it is more likely to make mistakes, leading to states which are increasingly foreign and causing errors to accumulate.",
"cite_spans": [
{
"start": 162,
"end": 188,
"text": "(McDonald and Nivre, 2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One way to ameliorate this problem is to employ imitation learning algorithms for structured prediction. Algorithms such as SEARN (Daum\u00e9 III et al., 2009) , DAGGER (Ross et al., 2011) , and LOLS (Chang et al., 2015) address the problem of error propagation by iteratively adjusting the training data to increasingly expose the model to training instances it is likely to encounter during test. Such algorithms have been shown to improve performance in a variety of tasks including information extraction (Vlachos and Craven, 2011) , dependency parsing (Goldberg and Nivre, 2013) , and feature selection (He et al., 2013) . In this work we build on the transition-based parsing approach of Wang et al. (2015b) and explore the applicability of different imitation algorithms to AMR parsing, which has a more complex output space than those considered previously.",
"cite_spans": [
{
"start": 130,
"end": 154,
"text": "(Daum\u00e9 III et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 164,
"end": 183,
"text": "(Ross et al., 2011)",
"ref_id": "BIBREF26"
},
{
"start": 195,
"end": 215,
"text": "(Chang et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 504,
"end": 530,
"text": "(Vlachos and Craven, 2011)",
"ref_id": "BIBREF30"
},
{
"start": 552,
"end": 578,
"text": "(Goldberg and Nivre, 2013)",
"ref_id": "BIBREF13"
},
{
"start": 603,
"end": 620,
"text": "(He et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 689,
"end": 708,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The complexity of AMR parsing affects transition-based methods that rely on features to represent structure, since these often cannot capture the information necessary to predict the correct transition according to the gold standard. In other words, the features defined are not sufficient to \"explain\" why different actions should be preferred by the model. Such instances become noise during training, resulting in lower accuracy. To address this issue, we show that the \u03b1-bound of Khardon and Wachman (2007), which drops consistently misclassified training instances, provides a simple and effective way of reducing noise and raising performance in perceptron-style classification training, and does so reliably across a range of parameter settings. This noise reduction is essential for imitation learning to gain traction in this task, and we gain 1.8 points of F 1 -Score using the DAGGER imitation learning algorithm.",
"cite_spans": [
{
"start": 484,
"end": 510,
"text": "Khardon and Wachman (2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "DAGGER relies on an externally specified expert (oracle) to define the correct action in each state; this defines a simple 0-1 loss function for each action. Other imitation learning algorithms (such as LOLS, SEARN) and the variant of DAGGER proposed by Vlachos and Clark (2014) (henceforth V-DAGGER) can leverage a task-level loss function that does not decompose over the actions taken to construct the AMR graph. However, these require extra computations to roll-out to an end-state AMR graph for each possible action not taken. The large action-space of our transition system makes these algorithms computationally infeasible, and roll-outs to an end-state for many of the possible actions will provide little additional information. Hence we modify the algorithms to target this exploration to actions where the classifier being trained is uncertain of the correct response, or disagrees with the expert. This provides a further gain of 2.7 F 1 points.",
"cite_spans": [
{
"start": 254,
"end": 278,
"text": "Vlachos and Clark (2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper extends imitation learning to structured prediction tasks more complex than previously attempted. In the process, we review and compare recently proposed algorithms and show how their components can be recombined and adjusted to construct a variant appropriate to the task at hand. Hence we invest some effort reviewing these algorithms and their common elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Overall, we obtain a final F-Score of 0.70 on the newswire corpus of LDC2013E117 (Knight et al., 2014) . This is identical to the score obtained by Wang et al. (2015a) , the highest so far published. Our gain of 4.5 F 1 points from imitation learning over standard transition-based parsing is orthogonal to that of Wang et al. (2015a) from additional trained analysers, including co-reference and semantic role labellers, incorporated in the feature set. We further test on five other corpora of AMR graphs, including weblog domains, and show a consistent improvement in all cases with the application of imitation learning using DAGGER and the targeted V-DAGGER we propose here.",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Knight et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 148,
"end": 167,
"text": "Wang et al. (2015a)",
"ref_id": "BIBREF31"
},
{
"start": 315,
"end": 334,
"text": "Wang et al. (2015a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AMR parsing is an example of the wider family of structured prediction problems, in which we seek a mapping from an input x \u2208 X to a structured output y \u2208 Y. Here x is the dependency tree, and y the AMR graph; both are graphs and we notationally replace x with s 1 and y with s T , with s 1...T \u2208 S. The s i are the intermediate graph configurations (states) through which the system transitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "A transition-based parser starts with an input s 1 , and selects an action a 1 \u2208 A, using a classifier. a i converts s i into s i+1 , i.e. s i+1 = a i (s i ). We term the set of states and actions s 1 , a 1 , . . . a T \u22121 , s T a trajectory of length T . The classifier\u03c0 is trained to predict a i from s i , with\u03c0(s) = arg max a\u2208A w a \u2022 \u03a6(s), assuming a linear classifier and a feature function \u03a6(s).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "We require an expert, \u03c0 * , that can indicate what actions should be taken on each s i to reach the target (gold) end state. In problems like POStagging these are directly inferable from gold, as the number of actions (T ) equals the number of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "Algorithm 1: Transition-based parsing. Data: initial state s 1 , policy \u03c0. Result: terminal state s T . 1 s current \u2190 s 1 ; 2 while s current not terminal do 3 a next \u2190 \u03c0(s current ); s current \u2190 a next (s current ); 4 s T \u2190 s current",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "tokens with a 1:1 correspondence between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "In dependency parsing and AMR parsing this is not straightforward, and dedicated transition systems are devised.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "Given a labeled training dataset D, Algorithm 1 is first used to generate a trajectory for each of the inputs (d \u2208 D) with \u03c0 = \u03c0 * , the expert from which we wish to generalise. The data produced from all expert trajectories (i.e. s i,d , a i,d for all i \u2208 1 . . . T and all d \u2208 1 . . . D), are used to train the classifier \u03c0\u0302, the learned classifier, using standard supervised learning techniques. Algorithm 1 is reused to apply \u03c0\u0302 to unseen data. Our transition system (defining A, S), and feature sets are based on Wang et al. (2015b) , and are not the main focus of this paper. We introduce the key concepts here, with more details in the supplemental material.",
"cite_spans": [
{
"start": 518,
"end": 537,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "We initialise the state with the stack of the nodes in the dependency tree, root node at the bottom. This stack is termed \u03c3. A second stack, \u03b2, is initialised with all children of the top node in \u03c3. The state at any time is described by \u03c3, \u03b2, and the current graph (which starts as the dependency tree with one node per token). At any stage before termination some of the nodes will be labelled with words from the sentence, and others with AMR concepts. Each action manipulates the top nodes in each stack, \u03c3 0 and \u03b2 0 . We reach a terminal state when \u03c3 is empty. The objective function to maximise is the Smatch score, which calculates an F 1 -Score between the predicted and gold-target AMR graphs. Table 1 summarises the actions in A. NextNode and NextEdge form the core action set, labelling nodes and edges respectively without changing the graph structure. Swap, Reattach and ReplaceHead change graph structure, keeping it a tree. We permit a Reattach action to use parameter \u03ba equal to any node within six edges from \u03c3 0 , excluding any that would disconnect the graph or create a cycle.",
"cite_spans": [],
"ref_spans": [
{
"start": 702,
"end": 709,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "The Insert/InsertBelow actions insert a new node as a parent/child of \u03c3 0 . These actions are not used in Wang et al. (2015b) , but Insert is very similar to the Infer action of Wang et al. (2015a) . We do not use the Reentrance action of Wang et al. (2015b), as we found it not to add any benefit. This means that the output AMR is always a tree.",
"cite_spans": [
{
"start": 106,
"end": 125,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
},
{
"start": 178,
"end": 197,
"text": "Wang et al. (2015a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "Our transition system has two characteristics which provide a particular challenge: given a sentence, the trajectory length T is theoretically unbounded; and |A| can be of the order 10 3 to 10 4 . Commonly used transition-based systems have a fixed trajectory length T , which often arises naturally from the nature of the problem. In POS tagging each token requires a single action, and in syntactic parsing the total size of the graph is limited to the number of tokens in the input. The lack of a bound in T here is due to Insert actions that can grow the graph, potentially ad infinitum, and actions like Reattach, which can move a sub-graph repeatedly back-and-forth. The action space size is due to the size of the AMR vocabulary, which for relations (edge-labels) is restricted to about 100 possible values, but for concepts (node-labels) is almost as broad as an English dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "Algorithm 2: Generic Imitation Learning. Data: data D, expert \u03c0 * , loss function F (s). Result: learned classifier C, trained policy \u03c0\u0302. 1 Initialise C 0 ; for n = 1 to N do 2 Initialise E n = \u03c6; 3 \u03c0 Rollin = RollInPolicy(\u03c0 * , C 0...n\u22121 , n); 4 \u03c0 Rollout = RollOutPolicy(\u03c0 * , C 0...n\u22121 , n); 5 for d \u2208 D do 6 Predict trajectory \u015d 1:T with \u03c0 Rollin ; 7 for \u015d t \u2208 \u015d 1:T do 8 foreach a j t \u2208 Explore(\u015d t , \u03c0 * , \u03c0 Rollin ) do 9 \u03a6 j t = \u03a6(d, a j t , \u015d 1:t );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "10 Predict \u015d t+1:T with \u03c0 Rollout ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "11 L j t = F (\u015d T ); 12 foreach j do 13 ActionCost j t = L j t \u2212 min k L k t ; 14 Add (\u03a6 t , ActionCost t ) to E n ; 15 \u03c0\u0302 n , C n = Train(C 1...n\u22121 , E 1 . . . E n );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "The large action space and unbounded T also make beam search difficult to apply since it relies on a fixed length T with commensurability of actions at the same index on different search trajectories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition-based AMR parsing",
"sec_num": "2"
},
{
"text": "Imitation learning originated in robotics, training a robot to follow the actions of a human expert (Schaal, 1999; Silver et al., 2008) . The robot moves from state to state via actions, generating a trajectory in the same manner as the transition-based parser of Algorithm 1. In the imitation learning literature, the learning of a policy \u03c0\u0302 from just the expert-generated trajectories is termed \"exact imitation\". As discussed, it is prone to error propagation, which arises because the implicit assumption of i.i.d. inputs (s i ) during training does not hold. The states in any trajectory are dependent on previous states, and on the policy used. A number of imitation learning algorithms have been proposed to mitigate error propagation, and share a common structure shown in Algorithm 2. Table 2 highlights some key differences between them.",
"cite_spans": [
{
"start": 100,
"end": 114,
"text": "(Schaal, 1999;",
"ref_id": "BIBREF27"
},
{
"start": 115,
"end": 135,
"text": "Silver et al., 2008)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 794,
"end": 801,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "The general algorithm firstly applies a policy \u03c0 RollIn (usually the expert, \u03c0 * , to start) to the data instances to generate a set of 'RollIn' trajectories in line 6 (we adopt the terminology of 'RollIn' and 'RollOut' trajectories from Chang et al. (2015) ). Secondly, a number of 'what if' scenarios are considered, in which a different action a j t is taken from a given s t instead of the actual a t in the RollIn trajectory (line 8). Each of these exploratory actions generates a RollOut trajectory (line 10) to a terminal state, for which a loss (L) is calculated using a loss function, F (s j T ), defined on the terminal states. For a number of different exploratory actions taken from a state s t on a RollIn trajectory, the action cost (or relative loss) of each is calculated (line 13). Finally, the generated s t , a j t , ActionCost j t data are used to train a classifier, using any cost-sensitive classification (CSC) method (line 15). New \u03c0 RollIn and \u03c0 RollOut are generated, and the process repeated over a number of iterations. In general the starting expert policy is progressively removed in each iteration, so that the training data moves closer and closer to the distribution encountered by just the trained classifier. This is required to reduce error propagation. For a general imitation learning algorithm we need to specify:",
"cite_spans": [
{
"start": 238,
"end": 257,
"text": "Chang et al. (2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "\u2022 the policy to generate the RollIn trajectory (the RollInPolicy) \u2022 the policy to generate RollOut trajectories, including rules for interpolation of learned and expert policies (the RollOutPolicy) \u2022 which one-step deviations to explore with a RollOut (the Explore function) \u2022 how RollOut data are used in the classification learning algorithm to generate \u03c0\u0302 i (within the Train function).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "Exact Imitation can be considered a single iteration of this algorithm, with \u03c0 RollIn equal to the expert policy, and a 0-1 binary loss for F (0 loss for \u03c0 * (s t ), the expert action, and a loss of 1 for any other action); all one-step deviations from the expert trajectory are considered without explicit RollOut to a terminal state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "In SEARN (Daum\u00e9 III et al., 2009) , one of the first imitation learning algorithms in this framework, the \u03c0 RollIn and \u03c0 RollOut policies are identical within each iteration, and are a stochastic blend of the expert and all classifiers trained in previous iterations. The Explore function considers every possible one-step deviation from the RollIn trajectories, with a full RollOut to a terminal state. The Train function uses only the training data from the most recent iteration (E n ) to train C n . LOLS extends this work to provide a deterministic learned policy (Chang et al., 2015) , with \u03c0\u0302 n = C n . At each iteration \u03c0\u0302 n is trained on all previously gathered data E 1...n ; \u03c0 RollIn uses the latest classifier \u03c0\u0302 n\u22121 , and each RollOut uses the same policy for all actions in the trajectory; either \u03c0 * with probability \u03b2, or \u03c0\u0302 n\u22121 otherwise. Both LOLS and SEARN use an exhaustive search of alternative actions as an Explore function. Chang et al. (2015) consider Structured Contextual Bandits (SCB) as a partial information case; the SCB modification of LOLS permits only one cost function call per RollIn (received from the external environment), so exhaustive RollOut exploration at each step is not possible. SCB-LOLS Explore picks a single step t \u2208 {1 . . . T } at random at which to make a random single-step deviation.",
"cite_spans": [
{
"start": 9,
"end": 33,
"text": "(Daum\u00e9 III et al., 2009)",
"ref_id": "BIBREF9"
},
{
"start": 569,
"end": 589,
"text": "(Chang et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 948,
"end": 967,
"text": "Chang et al. (2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "Another strand of work uses only the expert policy when calculating the action cost. Ross and Bagnell (2010) introduce SMILe, and later DAGGER (Ross et al., 2011) . These do not RollOut as such, but as in exact imitation consider all one-step deviations from the RollIn trajectory and obtain a 0/1 action cost for each by asking the expert what it would do in that state. At the nth iteration the training trajectories are generated from an interpolation of \u03c0 * and \u03c0\u0302 n\u22121 , with the latter progressively increasing in importance; \u03c0 * is used with probability (1-\u03b4) n\u22121 for some decay rate \u03b4. \u03c0\u0302 n is trained using all E 1...n . Ross et al. (2011) discuss and reject calculating an action cost by completing a RollOut from each one-step deviation to a terminal state. Three reasons given are:",
"cite_spans": [
{
"start": 85,
"end": 108,
"text": "Ross and Bagnell (2010)",
"ref_id": "BIBREF24"
},
{
"start": 143,
"end": 162,
"text": "(Ross et al., 2011)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "1. Lack of real-world applicability, for example in robotic control.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "2. Lack of knowledge of the final loss function, if we just have the expert's actions. 3. Time spent calculating RollOuts and calling the expert. Ross and Bagnell (2014) do incorporate RollOuts to calculate an action cost in their AGGREVATE algorithm. These RollOuts use the expert policy only, and allow a cost-sensitive classifier to be trained that can learn that some mistakes are more serious than others. As with DAGGER, the trained policy cannot become better than the expert.",
"cite_spans": [
{
"start": 146,
"end": 169,
"text": "Ross and Bagnell (2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "V-DAGGER is the variant proposed by Vlachos and Clark (2014) in a semantic parsing task. It is the same as DAGGER, but with RollOuts using the same policy as RollIn. For both V-DAGGER and SEARN, the stochasticity of the RollOut means that a number of independent samples are taken for each one-step deviation to reduce the variance of the action cost, and noise in the training data. This noise reduction comes at the expense of the time needed to compute additional RollOuts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Imitation Learning for Structured Prediction",
"sec_num": "3"
},
{
"text": "Algorithms with full RollOuts have particular value in the absence of an optimal (or near-optimal) expert able to pick the best action from any state. If we have a suitable loss function, then the benefit of RollOuts may become worth the computation expended on them. For AMR parsing we have both a loss function in Smatch, and the ability to generate arbitrary RollOuts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting imitation learning to AMR",
"sec_num": "4"
},
{
"text": "We therefore use a heuristic expert. This reduces the computational cost at the expense of not always predicting the best action. An expert needs an alignment between gold AMR nodes and tokens in the parse-tree or sentence to determine the actions to convert one into the other. These alignments are not provided in the gold AMR, and our expert uses the AMR node to token alignments of JAMR (Flanigan et al., 2014) . These alignments are not trained, but generated using regex and string matching rules. However, trajectories are in the range 50-200 actions for most training sentences, which combined with the size of |A| makes an exhaustive search of all one-step deviations expensive. Compare this to unlabeled shift-reduce parsers with 4 actions, or POS tagging with |A| \u223c 30.",
"cite_spans": [
{
"start": 391,
"end": 414,
"text": "(Flanigan et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting imitation learning to AMR",
"sec_num": "4"
},
{
"text": "To reduce this cost we note that exploring RollOuts for all possible alternative actions can be uninformative when the learned and expert policies agree on an action and none of the other actions score highly with the learned policy. Extending this insight we modify the Explore function in Algorithm 2 to only consider the expert action, plus all actions scored by the current learned policy that are within a threshold \u03c4 of the score for the best rated action. In the first iteration, when there is no current learned policy, we pick a number of actions (usually 10) at random for exploration. Both SCB-LOLS and AGGREVATE use partial exploration, but select the step t \u2208 1 . . . T , and the action a t at random. Here we optimise computational resources by directing the search to areas for which the trained policy is least sure of the optimal action, or disagrees with the expert.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted exploration",
"sec_num": "4.1"
},
{
"text": "Using imitation learning to address the error propagation of transition-based parsing provides a theoretical benefit: it ensures that the distribution of s t , a t in the training data is consistent with the distribution encountered on unseen test data. Using RollOuts that mix expert and learned policies additionally permits the learned policy to exceed the performance of a poor expert. Incorporating targeted exploration strategies in the Explore function makes this computationally feasible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted exploration",
"sec_num": "4.1"
},
{
"text": "Different samples for a RollOut trajectory using V-DAGGER or SEARN can give very different terminal states s T (the final AMR graph) from the same starting s t and a t due to the step-level stochasticity. The resultant high variance in the reward signal hinders effective learning. Daum\u00e9 III et al. (2009) have a similar problem, and note that an approximate cost function outperforms single Monte Carlo sampling, \"likely due to the noise induced following a single sample\".",
"cite_spans": [
{
"start": 282,
"end": 305,
"text": "Daum\u00e9 III et al. (2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Noise Reduction",
"sec_num": "4.2"
},
{
"text": "To control noise we use the \u03b1-bound discussed by Khardon and Wachman (2007) . This excludes a training example (i.e. an individual tuple s i , a i ) from future training once it has been misclassified \u03b1 times. We find that this simple idea avoids the need for multiple RollOut samples.",
"cite_spans": [
{
"start": 49,
"end": 75,
"text": "Khardon and Wachman (2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Noise Reduction",
"sec_num": "4.2"
},
{
"text": "An attraction of LOLS is that it randomly selects either expert or learned policy for each RollOut, and then applies this consistently to the whole trajectory. Using LOLS should reduce noise without increasing the sample size. Unfortunately, the unbounded T of our transition system leads to problems if we drop the expert from the RollIn or RollOut policy mix too quickly, with many trajectories never terminating. Ultimately \u03c0\u0302 learns to stop doing this, but even with targeted exploration training time is prohibitive and our LOLS experiments failed to provide results. We find that V-DAGGER with an \u03b1-bound works as a good compromise, keeping the expert involved in RollIn, and speeding up learning overall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise Reduction",
"sec_num": "4.2"
},
{
"text": "Another approach we try is a form of focused costing (Vlachos and Craven, 2011) . Instead of using the learned policy for \u03b2% of steps in the RollOut, we use it for the first b steps, and then revert to the expert. This has several potential advantages: the heuristic expert is faster than scoring all possible actions; it focuses the impact of the exploratory step on immediate actions/effects so that mistakes\u03c0 makes on a distant part of the graph do not affect the action cost; it reduces noise for the same reason. We increase b in each iteration so that the expert is asymptotically removed from RollOuts, a function otherwise supported by the decay parameter, \u03b4.",
"cite_spans": [
{
"start": 53,
"end": 79,
"text": "(Vlachos and Craven, 2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Noise Reduction",
"sec_num": "4.2"
},
{
"text": "Applying imitation learning to a transition system with unbounded T can and does cause problems in early iterations, with RollIn or RollOut trajectories failing to complete while the learned policy,\u03c0, is still relatively poor. To ensure every trajectory completes we add action constraints to the system. These avoid the most pathological scenarios, such as disallowing a Reattach of a previously Reattached sub-graph. These constraints are only needed in the first few iterations until\u03c0 learns, via the action costs, to avoid these scenarios. They are listed in the Supplemental Material. As a final failsafe we insert a hard-stop on any trajectory once T > 300.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition System adaptations",
"sec_num": "4.3"
},
{
"text": "To address the size of |A|, we only consider a subset of AMR concepts when labelling a node. Wang et al. (2015b) use all concepts that occur in the training data in the same sentence as the lemma of the node, leading to hundreds or thousands of possible actions from some states. We use the smaller set of concepts that were assigned by the expert to the lemma of the current node any- Table 3 : DAGGER with \u03b1-bound. All figures are F-Scores on the validation set. 5 iterations of classifier training take place after each DAgger iteration. A decay rate (\u03b4) for \u03c0 * of 0.3 was used.",
"cite_spans": [
{
"start": 93,
"end": 112,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 386,
"end": 393,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transition System adaptations",
"sec_num": "4.3"
},
{
"text": "where in the training data. We obtain these assignments from an initial application of the expert to the full training data. We add actions to use the actual word or lemma of the current node to increase generalisation, plus an action to append '-01' to 'verbify' an unseen word. This is similar to the work of Werling et al. (2015) in word to AMR concept mapping, and is useful since 38% of the test AMR concepts do not exist in the training data (Flanigan et al., 2014) .",
"cite_spans": [
{
"start": 311,
"end": 332,
"text": "Werling et al. (2015)",
"ref_id": "BIBREF33"
},
{
"start": 448,
"end": 471,
"text": "(Flanigan et al., 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transition System adaptations",
"sec_num": "4.3"
},
{
"text": "Full details of the heuristics of the expert policy, features used and pre-processing are in Supplemental Material. All code is available at https://github.com/hopshackle/ dagger-AMR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transition System adaptations",
"sec_num": "4.3"
},
{
"text": "Smatch uses heuristics to control the combinatorial explosion of possible mappings between the input and output graphs, but is still too computationally expensive to be calculated for every RollOut during training. We retain Smatch for reporting all final results, but use 'Na\u00efve Smatch' as an approximation during training. This skips the combinatorial mapping of nodes between predicted and target AMR graphs. Instead, for each graph we compile a list of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Na\u00efve Smatch as Loss Function",
"sec_num": "4.4"
},
{
"text": "\u2022 Node labels, e.g. name \u2022 Node-Edge-Node label concatenations, e.g. leave-01:ARG0:room \u2022 Node-Edge label concatenations, e.g. leave-01:ARG0, ARG0:room The loss is the number of entries that appear in only one of the lists. We do not convert to an F 1 score, as retaining the absolute number of mistakes is proportional to the size of the graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Na\u00efve Smatch as Loss Function",
"sec_num": "4.4"
},
{
"text": "The flexibility of the transition system means multiple different actions from a given state s i can lead, via different RollOut trajectories, to the same target s T . This can result in many actions having the best action cost, reducing the signal in the training data and giving poor learning. To encourage short trajectories we break these ties with a penalty of T /5 to Na\u00efve Smatch. Multiple routes of the same length still exist, and are preferred equally. Note that the ordering of the stack of dependency tree nodes in the transition system means we start at leaf nodes and move up the tree. This prevents sub-components of the output AMR graph being produced in an arbitrary order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Na\u00efve Smatch as Loss Function",
"sec_num": "4.4"
},
{
"text": "The main dataset used is the newswire (proxy) section of LDC2014T12 (Knight et al., 2014) . The data from years 1995-2006 form the training data, with 2007 as the validation set and 2008 as the test set. The data split is the same as that used by Flanigan et al. (2014) and Wang et al. (2015b) . 1 We first assess the impact of noise reduction using the alpha bound, and report these experiments without Rollouts (i.e. using DAGGER) to isolate the effect of noise reduction. Table 3 summarises results using exact imitation and DAGGER with the \u03b1-bound set to discard a training instance after one misclassification. This is the most extreme setting, and the one that gave best results. We try AROW (Crammer et al., 2013) , Passive-Aggressive (PA) (Crammer et al., 2006) , and perceptron (Collins, 2002) classifiers, with averaging in all cases. We see a benefit from the \u03b1-bound for exact imitation only with AROW, which is more noise-sensitive than PA or the simple perceptron. With DAGGER there is a benefit for all classifiers. In all cases the \u03b1-bound and DAGGER are synergistic; without the \u03b1-bound imitation learning works less well, if at all. \u03b1=1 was the optimal setting, with lesser benefit observed for larger values.",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "(Knight et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 247,
"end": 269,
"text": "Flanigan et al. (2014)",
"ref_id": "BIBREF10"
},
{
"start": 274,
"end": 293,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
},
{
"start": 296,
"end": 297,
"text": "1",
"ref_id": null
},
{
"start": 698,
"end": 720,
"text": "(Crammer et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 747,
"end": 769,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF6"
},
{
"start": 787,
"end": 802,
"text": "(Collins, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 475,
"end": 482,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We now turn our attention to targeted exploration and focused costing, for which we use V-DAGGER as explained in section 4. For all V-1 Formally Flanigan et al. (2014; Wang et al. (2015b) use the pre-release version of this dataset (LDC2013E117). Werling et al. (2015) conducted comparative tests on the two versions, and found only a very minor changes of 0.1 to 0.2 points of F-score when using the final release.",
"cite_spans": [
{
"start": 145,
"end": 167,
"text": "Flanigan et al. (2014;",
"ref_id": "BIBREF10"
},
{
"start": 168,
"end": 187,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
},
{
"start": 247,
"end": 268,
"text": "Werling et al. (2015)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Algorithmic Approach R P F Flanigan et al. (2014) Concept identification with semi-markov model followed by optimisation of constrained graph that contains all of these. 0.52 0.66 0.58 Werling et al. (2015) As Flanigan et al. (2014) , with enhanced concept identification 0.59 0.66 0.62 Wang et al. (2015b) Single stage using transition-based parsing algorithm 0.62 0.64 0.63 Pust et al. (2015) Single stage System-Based Machine Translation --0.66 Peng et al. (2015) Hyperedge replacement grammar 0.57 0.59 0.58 Artzi et al. (2015) Combinatory Categorial Grammar induction 0.66 0.67 0.66 Wang et al. (2015a) Extensions to action space and features in Wang et al. (2015b) 0.69 0.71 0.70 This work",
"cite_spans": [
{
"start": 27,
"end": 49,
"text": "Flanigan et al. (2014)",
"ref_id": "BIBREF10"
},
{
"start": 185,
"end": 206,
"text": "Werling et al. (2015)",
"ref_id": "BIBREF33"
},
{
"start": 210,
"end": 232,
"text": "Flanigan et al. (2014)",
"ref_id": "BIBREF10"
},
{
"start": 287,
"end": 306,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
},
{
"start": 376,
"end": 394,
"text": "Pust et al. (2015)",
"ref_id": "BIBREF22"
},
{
"start": 448,
"end": 466,
"text": "Peng et al. (2015)",
"ref_id": "BIBREF21"
},
{
"start": 512,
"end": 531,
"text": "Artzi et al. (2015)",
"ref_id": "BIBREF0"
},
{
"start": 588,
"end": 607,
"text": "Wang et al. (2015a)",
"ref_id": "BIBREF31"
},
{
"start": 651,
"end": 670,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Authors",
"sec_num": null
},
{
"text": "Imitation Learning with transition-based parsing 0.68 0.73 0.70 Table 4 : Comparison of previous work on the AMR task. R, P and F are Recall, Precision and F-Score.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Authors",
"sec_num": null
},
{
"text": "DAGGER experiments we use AROW with regularisation parameter C=1000, and \u03b4=0.3. Figure 2 shows results by iteration of reducing the number of RollOuts explored. Only the expert action, plus actions that score close to the bestscoring action (defined by the threshold) are used for RollOuts. Using the action cost information from RollOuts does surpass simple DAGGER, and unsurprisingly more exploration is better. Figure 3 shows the same data, but by total computational time spent 2 . This adjusts the picture, as small amounts of exploration give a faster benefit, albeit not always reaching the same peak performance. As a baseline, three iterations of V-DAGGER without targeted exploration (threshold = \u221e) takes 9600 minutes on the same hardware to give an F-Score of 0.652 on the validation set. Figure 4 shows the improvement using focused costing. The 'n/m' setting sets b, the number of initial actions taken by\u03c0 in a RollOut to n, and then increases this by m at each iteration. We gain an increase of 2.9 points from 0.682 to 0.711. In all the settings tried, focused costing improves the results, and requires progressive removal of the expert to achieve the best score.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 414,
"end": 422,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 801,
"end": 809,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Authors",
"sec_num": null
},
{
"text": "We use the classifier from the Focused Costing 5/5 run to achieve an F-Score on the held-out test set of 0.70, equal to the best published result so far (Wang et al., 2015a) . Our gain of 4.7 points from imitation learning over standard transition-based parsing is orthogonal to that of Wang et al. (2015a) using exact imitation with additional trained analysers; they experience a gain of 2 points from using a Charniak parser (Charniak and Johnson, 2005) trained on the full OntoNotes corpus instead of the Stanford parser used here and in Wang et al. (2015b) , and a further gain of 2 points from a semantic role labeller. Using DAGGER with this system we obtained an F-Score of 0.60 in the Semeval 2016 task on AMR parsing, one standard deviation above the mean of all entries. (Goodman et al., 2016) Finally we test on all components of the LDC2014T12 corpus as shown in Table 5 , which include both newswire and weblog data, as well as the freely available AMRs for The Little Prince, (lpp) 3 . For each we use exact imitation, DAG-GER, and V-DAGGER on the train/validation/splits specified in the corpus. In all cases, imitation learning without RollOuts (DAGGER) improves on exact imitation, and incorporating RollOuts (V-DAGGER) provides an additional benefit. Rao et al. (2015) use SEARN on the same datasets, but with a very different transition system. We show their results for comparison.",
"cite_spans": [
{
"start": 153,
"end": 173,
"text": "(Wang et al., 2015a)",
"ref_id": "BIBREF31"
},
{
"start": 287,
"end": 306,
"text": "Wang et al. (2015a)",
"ref_id": "BIBREF31"
},
{
"start": 428,
"end": 456,
"text": "(Charniak and Johnson, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 542,
"end": 561,
"text": "Wang et al. (2015b)",
"ref_id": "BIBREF32"
},
{
"start": 782,
"end": 804,
"text": "(Goodman et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 1270,
"end": 1287,
"text": "Rao et al. (2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 876,
"end": 883,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Authors",
"sec_num": null
},
{
"text": "Our expert achieves a Smatch F-Score of 0.94 on the training data. This explains why DAG-GER, which assumes a good expert, is effective. Introducing RollOuts provides additional theoretical benefits from a non-decomposable loss function that can take into account longer-term impacts of an action. This provides much more information than the 0/1 binary action cost in DAGGER, and we can use Na\u00efve Smatch as an approximation to our actual objective function during training. This informational benefit comes at the cost of increased noise and computational expense, which we control with targeted exploration and focused costing. We gain 2.7 points in F-Score, at the cost of 80-100x more computation. In problems with a less good expert, the gain from exploration could be much greater. Similarly, if designing an expert for a task is time-consuming, then it may be a better investment to rely on exploration with a poor expert to achieve the same result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Authors",
"sec_num": null
},
{
"text": "Other strategies have been used to mitigate the error propagation problem in transition-based parsing. A common approach is to use beam search through state-space for each action choice to find a better approximation of the long-term score of the action, e.g. Zhang and Clark (2008) . Goldberg and Elhadad (2010) remove the determinism of the sequence of actions to create easy-first parsers, which postpone uncertain, error-prone decisions until more information is available. This contrasts with working inflexibly left-to-right along a sentence, or bottom-to-top up a tree. Goldberg and Nivre (2012) introduce dynamic experts that are complete in that they will respond from any state, not just those on the perfect trajectory assuming no earlier mistakes; any expert used with an imitation learning algorithm needs to be complete in this sense. Their algorithm takes exploratory steps off the expert trajectory to augment the training data collected in a fashion very similar to DAGGER. Honnibal et al. (2013) use a non-monotonic parser that allows actions that are inconsistent with previous actions. When such an action is taken it amends the results of previous actions to ensure post-hoc consistency. Our parser is nonmonotonic, and we have the same problem encountered by Honnibal et al. (2013) with many different actions from a state s i able to reach the target s T , following different \"paths up the mountain\". This leads to poor learning. To resolve this with fixed T they break ties with a monotonic parser, so that actions that do not require later correction are scored higher in the training data. In our variable T environment, adding a penalty to the size of T is sufficient (section 4.4).",
"cite_spans": [
{
"start": 260,
"end": 282,
"text": "Zhang and Clark (2008)",
"ref_id": "BIBREF35"
},
{
"start": 285,
"end": 312,
"text": "Goldberg and Elhadad (2010)",
"ref_id": "BIBREF11"
},
{
"start": 577,
"end": 602,
"text": "Goldberg and Nivre (2012)",
"ref_id": "BIBREF12"
},
{
"start": 991,
"end": 1013,
"text": "Honnibal et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 1281,
"end": 1303,
"text": "Honnibal et al. (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Vlachos and Clark (2014) use V-DAGGER to give a benefit of 4.8 points of F-Score in a domain-specific semantic parsing problem similar to AMR. Their expert is sub-optimal, with no information on alignment between words in the input sentence, and nodes in the target graph. The parser learns to link words in the input to one of the 35 node types, with the 'expert' policy aligning completely at random. This is infeasible with AMR parsing due to the much larger vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Imitation learning provides a total benefit of 4.5 points with our AMR transition-based parser over exact imitation. This is a more complex task than many previous applications of imitation learning, and we found that noise reduction was an essential pre-requisite. Using a simple 0/1 binary action cost using a heuristic expert provided a benefit of 1.8, with the remaining 2.7 points coming from RollOuts with targeted exploration, focused costing and a non-decomposable loss function that was a better approximation to our objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We have considered imitation learning algorithms as a toolbox that can be tailored to fit the characteristics of the task. An unbounded T meant that the LOLS RollIn was not ideal, but this could be modified to slow the loss of influence of the expert policy. We anticipate the approaches that we have found useful in the case of AMR to reduce the impact of noise, efficiently support large action spaces with targeted exploration, and cope with unbounded trajectories in the transition system will be of relevance to other structured prediction tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "experiments were run on 8-core Google Cloud n1highmem-8 machines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://amr.isi.edu/download.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Andreas Vlachos is supported by the EPSRC grant Diligent (EP/M005429/1) and Jason Naradowsky by a Google Focused Research award. We would also like to thank our anonymous reviewers for many comments that helped improve this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Broad-coverage ccg semantic parsing with amr",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1699--1710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage ccg semantic parsing with amr. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699-1710, Lisbon, Portugal, September. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Abstract meaning representation for sembanking",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "178--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Smatch: an evaluation metric for semantic feature structures",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (2)",
"volume": "",
"issue": "",
"pages": "748--752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In ACL (2), pages 748-752.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning to search better than your teacher",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Akshay",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Alekh",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML-15)",
"volume": "",
"issue": "",
"pages": "2058--2066",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-wei Chang, Akshay Krishnamurthy, Alekh Agar- wal, Hal Daum\u00e9 III, and John Langford. 2015. Learning to search better than your teacher. In Pro- ceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 2058-2066.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Coarseto-fine n-best parsing and maxent discriminative reranking",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse- to-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meet- ing on Association for Computational Linguistics, pages 173-180. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and exper- iments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1-8. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Shai Shalev-Shwartz, and Yoram Singer",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2006,
"venue": "The Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. The Journal of Ma- chine Learning Research, 7:551-585.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adaptive regularization of weight vectors",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2013,
"venue": "Mach Learn",
"volume": "91",
"issue": "",
"pages": "155--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Alex Kulesza, and Mark Dredze. 2013. Adaptive regularization of weight vectors. Mach Learn, 91:155-187.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Expanding the scope of the atis task: The atis-3 corpus",
"authors": [
{
"first": "Deborah",
"middle": [
"A"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Madeleine",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Hunicke-Smith",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Pallett",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Pao",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rudnicky",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "43--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deborah A Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the atis task: The atis-3 corpus. In Proceedings of the work- shop on Human Language Technology, pages 43-48. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Search-based structured prediction",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Langford",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine learning",
"volume": "75",
"issue": "3",
"pages": "297--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine learning, 75(3):297-325.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A discriminative graph-based parser for the abstract meaning representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1426--1436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A Smith. 2014. A discrim- inative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics, pages 1426-1436. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An efficient algorithm for easy-first non-directional dependency parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"Elhadad"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "742--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Michael Elhadad. 2010. An effi- cient algorithm for easy-first non-directional depen- dency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, pages 742-750. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A dynamic oracle for arc-eager dependency parsing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2012,
"venue": "COL-ING",
"volume": "",
"issue": "",
"pages": "959--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2012. A dynamic oracle for arc-eager dependency parsing. In COL- ING, pages 959-976.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Training deterministic parsers with non-deterministic oracles",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "403--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parsers with non-deterministic oracles. Transactions of the association for Computational Linguistics, 1:403-414.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Ucl+sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an alphabound",
"authors": [
{
"first": "James",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Naradowsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Goodman, Andreas Vlachos, and Jason Narad- owsky. 2016. Ucl+sheffield at semeval-2016 task 8: Imitation learning for amr parsing with an alpha- bound. In Proceedings of the 10th International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Dynamic feature selection for dependency parsing",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2013,
"venue": "Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He He, Hal Daum\u00e9 III, and Jason Eisner. 2013. Dy- namic feature selection for dependency parsing. In Empirical Methods in Natural Language Process- ing.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A non-monotonic arc-eager transition system for dependency parsing",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Honnibal",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "163--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal, Yoav Goldberg, and Mark John- son. 2013. A non-monotonic arc-eager transition system for dependency parsing. In Proceedings of the Seventeenth Conference on Computational Nat- ural Language Learning, pages 163-172. Citeseer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Noise tolerant variants of the perceptron algorithm. The journal of machine learning research",
"authors": [
{
"first": "Roni",
"middle": [],
"last": "Khardon",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Wachman",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "8",
"issue": "",
"pages": "227--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roni Khardon and Gabriel Wachman. 2007. Noise tolerant variants of the perceptron algorithm. The journal of machine learning research, 8:227-248.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Abstract meaning representation (amr) annotation release 1.0. Linguistic Data Consortium Catalog",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Baranescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "2014--2026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight, Laura Baranescu, Claire Bonial, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Daniel Marcu, Martha Palmer, and Nathan Schnei- der. 2014. Abstract meaning representation (amr) annotation release 1.0. Linguistic Data Consortium Catalog. LDC2014T12.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "2",
"pages": "389--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional seman- tics. Computational Linguistics, 39(2):389-446.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Characterizing the errors of data-driven dependency parsing models",
"authors": [
{
"first": "Ryan",
"middle": [
"T"
],
"last": "McDonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "122--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan T McDonald and Joakim Nivre. 2007. Charac- terizing the errors of data-driven dependency parsing models. In EMNLP-CoNLL, pages 122-131.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A synchronous hyperedge replacement grammar based approach for amr parsing",
"authors": [
{
"first": "Xiaochang",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement gram- mar based approach for amr parsing. CoNLL 2015, page 32.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using syntaxbased machine translation to parse english into abstract meaning representation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Pust",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.06665"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Using syntax- based machine translation to parse english into abstract meaning representation. arXiv preprint arXiv:1504.06665.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Parser for abstract meaning representation using learning to search",
"authors": [
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Yogarshi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": "III"
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1510.07586"
]
},
"num": null,
"urls": [],
"raw_text": "Sudha Rao, Yogarshi Vyas, Hal Daume III, and Philip Resnik. 2015. Parser for abstract meaning repre- sentation using learning to search. arXiv preprint arXiv:1510.07586.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Efficient reductions for imitation learning",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Drew",
"middle": [],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2010,
"venue": "13th International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "661--668",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Ross and Drew Bagnell. 2010. Efficient reductions for imitation learning. In 13th Inter- national Conference on Artificial Intelligence and Statistics, pages 661-668.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Reinforcement and imitation learning via interactive noregret learning",
"authors": [
{
"first": "Stephane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.5979"
]
},
"num": null,
"urls": [],
"raw_text": "Stephane Ross and J Andrew Bagnell. 2014. Rein- forcement and imitation learning via interactive no- regret learning. arXiv preprint arXiv:1406.5979.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A reduction of imitation learning and structured prediction to no-regret online learning",
"authors": [
{
"first": "St\u00e9phane",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"J"
],
"last": "Gordon",
"suffix": ""
},
{
"first": "J Andrew",
"middle": [],
"last": "Bagnell",
"suffix": ""
}
],
"year": 2011,
"venue": "14th International Conference on Artificial Intelligence and Statistics",
"volume": "15",
"issue": "",
"pages": "627--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phane Ross, Geoffrey J Gordon, and J Andrew Bag- nell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In 14th International Conference on Artificial Intelli- gence and Statistics, volume 15, pages 627-635.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Is imitation learning the route to humanoid robots?",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Schaal",
"suffix": ""
}
],
"year": 1999,
"venue": "Trends in cognitive sciences",
"volume": "3",
"issue": "6",
"pages": "233--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Schaal. 1999. Is imitation learning the route to humanoid robots? Trends in cognitive sciences, 3(6):233-242.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "High performance outdoor navigation from overhead data using imitation learning",
"authors": [
{
"first": "David",
"middle": [],
"last": "Silver",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bagnell",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Stentz",
"suffix": ""
}
],
"year": 2008,
"venue": "Robotics: Science and Systems IV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Silver, James Bagnell, and Anthony Stentz. 2008. High performance outdoor navigation from overhead data using imitation learning. Robotics: Science and Systems IV, Zurich, Switzerland.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A new corpus and imitation learning framework for contextdependent semantic parsing",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "547--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Vlachos and Stephen Clark. 2014. A new cor- pus and imitation learning framework for context- dependent semantic parsing. Transactions of the As- sociation for Computational Linguistics, 2:547-559.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Searchbased structured prediction applied to biomedical event extraction",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Vlachos and Mark Craven. 2011. Search- based structured prediction applied to biomedical event extraction. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 49-57. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Boosting transition-based amr parsing with refined actions and auxiliary analyzers",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "857--862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based amr parsing with refined actions and auxiliary analyzers. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers), pages 857-862, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A transition-based algorithm for amr parsing",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for amr pars- ing. North American Association for Computational Linguistics, Denver, Colorado.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Robust subgraph generation improves abstract meaning representation parsing",
"authors": [
{
"first": "Keenon",
"middle": [],
"last": "Werling",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "982--991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keenon Werling, Gabor Angeli, and Christopher D. Manning. 2015. Robust subgraph generation im- proves abstract meaning representation parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 982-991, Beijing, China, July. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning to parse database queries using inductive logic programming",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M Zelle and Raymond J Mooney. 1996. Learn- ing to parse database queries using inductive logic programming. In Proceedings of the National Con- ference on Artificial Intelligence, pages 1050-1055.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "562--571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graph- based and transition-based dependency parsing us- ing beam-search. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing, pages 562-571. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Targeted exploration with V-DAGGER by iteration.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Targeted exploration with V-DAGGER by time.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Focused costing with V-DAGGER. All runs use threshold of 0.10.",
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"text": "node \u03c30 to lc. Pop \u03c30, and initialise \u03b2. Swap \u03b2 non-empty Make \u03b20 parent of \u03c30 (reverse edge) and its sub-graph. Pop \u03b20 and insert \u03b20 as \u03c31. ReplaceHead \u03b2 non-empty Pop \u03c30 and delete it from the graph. Parents of \u03c30 become parents of \u03b20. Other children of \u03c30 become children of \u03b20. Insert \u03b20 at the head of \u03c3 and re-initialise \u03b2.",
"content": "<table><tr><td colspan=\"3\">Action Name Param. Pre-conditions</td><td>Outcome of action</td></tr><tr><td>NextEdge</td><td>lr</td><td>\u03b2 non-empty</td><td>Set label of edge (\u03c30, \u03b20) to lr. Pop \u03b20.</td></tr><tr><td colspan=\"4\">NextNode Set concept of Reattach lc \u03b2 empty \u03ba \u03b2 non-empty Pop \u03b20 and delete edge (\u03c30, \u03b20). Attach \u03b20 as a child of \u03ba. If \u03ba has</td></tr><tr><td/><td/><td/><td>already been popped from \u03c3 then re-insert it as \u03c31.</td></tr><tr><td>DeleteNode</td><td/><td>\u03b2 empty; leaf \u03c30</td><td>Pop \u03c30 and delete it from the graph.</td></tr><tr><td>Insert</td><td>lc</td><td/><td>Insert a new node \u03b4 with AMR concept lc as the parent of \u03c30, and insert</td></tr><tr><td/><td/><td/><td>\u03b4 into \u03c3.</td></tr><tr><td>InsertBelow</td><td/><td/><td>Insert a new node \u03b4 with AMR concept lc as a child of \u03c30.</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>: Action Space for the transition-based graph parsing algorithm</td></tr><tr><td>Algorithm 1: Greedy transition-based parsing</td></tr><tr><td>Data: policy \u03c0, start state s 1</td></tr><tr><td>Result:</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"html": null,
"text": "lists previous AMR work on the same dataset.",
"content": "<table><tr><td/><td colspan=\"3\">Validation F-Score</td><td colspan=\"2\">Test F-Score</td></tr><tr><td>Dataset</td><td>EI</td><td>D</td><td>V-D</td><td colspan=\"2\">V-D Rao et al</td></tr><tr><td>proxy</td><td colspan=\"4\">0.670 0.686 0.704 0.70</td><td>0.61</td></tr><tr><td>dfa</td><td colspan=\"4\">0.495 0.532 0.546 0.50</td><td>0.44</td></tr><tr><td>bolt</td><td colspan=\"4\">0.456 0.468 0.524 0.52</td><td>0.46</td></tr><tr><td>xinhua</td><td colspan=\"4\">0.598 0.623 0.683 0.62</td><td>0.52</td></tr><tr><td>lpp</td><td colspan=\"4\">0.540 0.546 0.564 0.55</td><td>0.52</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"text": "",
"content": "<table><tr><td>: Comparison of Exact Imitation (EI), DAGGER (D),</td></tr><tr><td>V-DAGGER (V-D) on all components of the LDC2014T12</td></tr><tr><td>corpus.</td></tr></table>",
"type_str": "table"
}
}
}
}