{
"paper_id": "P12-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:29:24.046857Z"
},
"title": "Learning High-Level Planning from Text",
"authors": [
{
"first": "S",
"middle": [
"R K"
],
"last": "Branavan",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "branavan@csail.mit.edu"
},
{
"first": "Nate",
"middle": [],
"last": "Kushman",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "nkushman@csail.mit.edu"
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "taolei@csail.mit.edu"
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Massachusetts Institute of Technology",
"location": {}
},
"email": "regina@csail.mit.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Comprehending action preconditions and effects is an essential step in modeling the dynamics of the world. In this paper, we express the semantics of precondition relations extracted from text in terms of planning operations. The challenge of modeling this connection is to ground language at the level of relations. This type of grounding enables us to create high-level plans based on language abstractions. Our model jointly learns to predict precondition relations from text and to perform high-level planning guided by those relations. We implement this idea in the reinforcement learning framework using feedback automatically obtained from plan execution attempts. When applied to a complex virtual world and text describing that world, our relation extraction technique performs on par with a supervised baseline, yielding an F-measure of 66% compared to the baseline's 65%. Additionally, we show that a high-level planner utilizing these extracted relations significantly outperforms a strong, text-unaware baseline, successfully completing 80% of planning tasks as compared to 69% for the baseline. 1",
"pdf_parse": {
"paper_id": "P12-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Comprehending action preconditions and effects is an essential step in modeling the dynamics of the world. In this paper, we express the semantics of precondition relations extracted from text in terms of planning operations. The challenge of modeling this connection is to ground language at the level of relations. This type of grounding enables us to create high-level plans based on language abstractions. Our model jointly learns to predict precondition relations from text and to perform high-level planning guided by those relations. We implement this idea in the reinforcement learning framework using feedback automatically obtained from plan execution attempts. When applied to a complex virtual world and text describing that world, our relation extraction technique performs on par with a supervised baseline, yielding an F-measure of 66% compared to the baseline's 65%. Additionally, we show that a high-level planner utilizing these extracted relations significantly outperforms a strong, text-unaware baseline, successfully completing 80% of planning tasks as compared to 69% for the baseline. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Understanding action preconditions and effects is a basic step in modeling the dynamics of the world. For example, having seeds is a precondition for growing wheat. Not surprisingly, preconditions have been extensively explored in various sub-fields of AI. However, existing work on action models has largely focused on tasks and techniques specific to individual sub-fields with little or no interconnection between them. In NLP, precondition relations have been studied in terms of the linguistic mechanisms [Figure 1 text input: A pickaxe, which is used to harvest stone, can be made from wood.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(a) Low Level Actions for: wood \u2192 pickaxe \u2192 stone step 1: move from (0,0) to (2,0) step 2: chop tree at: (2,0) step 3: get wood at: (2,0) step 4: craft plank from wood step 5: craft stick from plank step 6: craft pickaxe from plank and stick \u2022 \u2022 \u2022 step N-1: pickup tool: pickaxe step N:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "harvest stone with pickaxe at: (5,5) (b) that realize them, while in classical planning, these relations are viewed as a part of world dynamics. In this paper, we bring these two parallel views together, grounding the linguistic realization of these relations in the semantics of planning operations. The challenge and opportunity of this fusion comes from the mismatch between the abstractions of human language and the granularity of planning primitives. Consider, for example, text describing a virtual world such as Minecraft 2 and a formal description of that world using planning primitives. Due to the mismatch in granularity, even the simple relations between wood, pickaxe and stone described in the sentence in Figure 1a results in dozens of lowlevel planning actions in the world, as can be seen in Figure 1b . While the text provides a high-level description of world dynamics, it does not provide sufficient details for successful plan execution. On the other hand, planning with low-level actions does not suffer from this limitation, but is computationally intractable for even moderately complex tasks. As a consequence, in many practical domains, planning algorithms rely on manually-crafted high-level abstractions to make search tractable (Ghallab et al., 2004; Lekav\u00fd and N\u00e1vrat, 2007) .",
"cite_spans": [
{
"start": 1258,
"end": 1280,
"text": "(Ghallab et al., 2004;",
"ref_id": "BIBREF12"
},
{
"start": 1281,
"end": 1305,
"text": "Lekav\u00fd and N\u00e1vrat, 2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 721,
"end": 730,
"text": "Figure 1a",
"ref_id": "FIGREF0"
},
{
"start": 810,
"end": 819,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The central idea of our work is to express the semantics of precondition relations extracted from text in terms of planning operations. For instance, the precondition relation between pickaxe and stone described in the sentence in Figure 1a indicates that plans which involve obtaining stone will likely need to first obtain a pickaxe. The novel challenge of this view is to model grounding at the level of relations, in contrast to prior work which focused on object-level grounding. We build on the intuition that the validity of precondition relations extracted from text can be informed by the execution of a low-level planner. 3 This feedback can enable us to learn these relations without annotations. Moreover, we can use the learned relations to guide a high-level planner and ultimately improve planning performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 240,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We implement these ideas in the reinforcement learning framework, wherein our model jointly learns to predict precondition relations from text and to perform high-level planning guided by those relations. For a given planning task and a set of candidate relations, our model repeatedly predicts a sequence of subgoals where each subgoal specifies an attribute of the world that must be made true. It then asks the low-level planner to find a plan between each consecutive pair of subgoals in the sequence. The observed feedback -whether the lowlevel planner succeeded or failed at each step -is utilized to update the policy for both text analysis and high-level planning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our algorithm in the Minecraft virtual world, using a large collection of user-generated online documents as our source of textual information. Our results demonstrate the strength of our relation extraction technique -while using planning feedback as its only source of supervision, it achieves a precondition relation extraction accuracy on par with that of a supervised SVM baseline. Specifically, it yields an F-score of 66% compared to the 65% of the baseline. In addition, we show that these extracted relations can be used to improve the performance of a high-level planner. As baselines for this evaluation, we employ the Metric-FF planner (Hoffmann and Nebel, 2001 ), 4 as well as a text-unaware variant of our model. Our results show that our text-driven high-level planner significantly outperforms all baselines in terms of completed planning tasks - it successfully solves 80% as compared to 41% for the Metric-FF planner and 69% for the text-unaware variant of our model. In fact, the performance of our method approaches that of an oracle planner which uses manually-annotated preconditions.",
"cite_spans": [
{
"start": 660,
"end": 685,
"text": "(Hoffmann and Nebel, 2001",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extracting Event Semantics from Text The task of extracting preconditions and effects has previously been addressed in the context of lexical semantics (Sil et al., 2010; Sil and Yates, 2011) . These approaches combine large-scale distributional techniques with supervised learning to identify desired semantic relations in text. Such combined approaches have also been shown to be effective for identifying other relationships between events, such as causality (Girju and Moldovan, 2002; Chang and Choi, 2006; Blanco et al., 2008; Beamer and Girju, 2009; Do et al., 2011) .",
"cite_spans": [
{
"start": 152,
"end": 170,
"text": "(Sil et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 171,
"end": 191,
"text": "Sil and Yates, 2011)",
"ref_id": "BIBREF24"
},
{
"start": 462,
"end": 488,
"text": "(Girju and Moldovan, 2002;",
"ref_id": "BIBREF13"
},
{
"start": 489,
"end": 510,
"text": "Chang and Choi, 2006;",
"ref_id": "BIBREF7"
},
{
"start": 511,
"end": 531,
"text": "Blanco et al., 2008;",
"ref_id": "BIBREF3"
},
{
"start": 532,
"end": 555,
"text": "Beamer and Girju, 2009;",
"ref_id": "BIBREF2"
},
{
"start": 556,
"end": 572,
"text": "Do et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Similar to these methods, our algorithm capitalizes on surface linguistic cues to learn preconditions from text. However, our only source of supervision is the feedback provided by the planning task which utilizes the predictions. Additionally, we not only identify these relations in text, but also show they are valuable in performing an external task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work fits into the broad area of grounded language acquisition, where the goal is to learn linguistic analysis from a situated context (Oates, 2001; Siskind, 2001; Yu and Ballard, 2004; Fleischman and Roy, 2005; Mooney, 2008a; Mooney, 2008b; Branavan et al., 2009; Liang et al., 2009; Vogel and Jurafsky, 2010) . Within this line of work, we are most closely related to the reinforcement learning approaches that learn language by interacting with an external environment (Branavan et al., 2009; Branavan et al., 2010; Vogel and Jurafsky, 2010; Branavan et al., 2011) .",
"cite_spans": [
{
"start": 139,
"end": 152,
"text": "(Oates, 2001;",
"ref_id": "BIBREF23"
},
{
"start": 153,
"end": 167,
"text": "Siskind, 2001;",
"ref_id": "BIBREF26"
},
{
"start": 168,
"end": 189,
"text": "Yu and Ballard, 2004;",
"ref_id": "BIBREF32"
},
{
"start": 190,
"end": 215,
"text": "Fleischman and Roy, 2005;",
"ref_id": "BIBREF10"
},
{
"start": 216,
"end": 230,
"text": "Mooney, 2008a;",
"ref_id": "BIBREF20"
},
{
"start": 231,
"end": 245,
"text": "Mooney, 2008b;",
"ref_id": "BIBREF21"
},
{
"start": 246,
"end": 268,
"text": "Branavan et al., 2009;",
"ref_id": "BIBREF4"
},
{
"start": 269,
"end": 288,
"text": "Liang et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 289,
"end": 314,
"text": "Vogel and Jurafsky, 2010)",
"ref_id": "BIBREF29"
},
{
"start": 476,
"end": 499,
"text": "(Branavan et al., 2009;",
"ref_id": "BIBREF4"
},
{
"start": 500,
"end": 522,
"text": "Branavan et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 523,
"end": 548,
"text": "Vogel and Jurafsky, 2010;",
"ref_id": "BIBREF29"
},
{
"start": 549,
"end": 571,
"text": "Branavan et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Semantics via Language Grounding",
"sec_num": null
},
{
"text": "The key distinction of our work is the use of grounding to learn abstract pragmatic relations, i.e. to learn linguistic patterns that describe relationships between objects in the world. This supplements previous work which grounds words to objects in the world (Branavan et al., 2009; Vogel and Jurafsky, 2010) . Another important difference of our setup is the way the textual information is utilized in the situated context. Instead of getting step-by-step instructions from the text, our model uses text that describes general knowledge about the domain structure. From this text, it extracts relations between objects in the world which hold independently of any given task. Task-specific solutions are then constructed by a planner that relies on these relations to perform effective high-level planning.",
"cite_spans": [
{
"start": 262,
"end": 285,
"text": "(Branavan et al., 2009;",
"ref_id": "BIBREF4"
},
{
"start": 286,
"end": 311,
"text": "Vogel and Jurafsky, 2010)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text (input):",
"sec_num": null
},
{
"text": "Hierarchical Planning It is widely accepted that high-level plans that factorize a planning problem can greatly reduce the corresponding search space (Newell et al., 1959; Bacchus and Yang, 1994) . Previous work in planning has studied the theoretical properties of valid abstractions and proposed a number of techniques for generating them (Jonsson and Barto, 2005; Wolfe and Barto, 2005; Mehta et al., 2008; Barry et al., 2011) . In general, these techniques use static analysis of the low-level domain to induce effective high-level abstractions. In contrast, our focus is on learning the abstraction from natural language. Thus our technique is complementary to past work, and can benefit from human knowledge about the domain structure.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Newell et al., 1959;",
"ref_id": "BIBREF22"
},
{
"start": 172,
"end": 195,
"text": "Bacchus and Yang, 1994)",
"ref_id": "BIBREF0"
},
{
"start": 341,
"end": 366,
"text": "(Jonsson and Barto, 2005;",
"ref_id": "BIBREF16"
},
{
"start": 367,
"end": 389,
"text": "Wolfe and Barto, 2005;",
"ref_id": "BIBREF31"
},
{
"start": 390,
"end": 409,
"text": "Mehta et al., 2008;",
"ref_id": "BIBREF19"
},
{
"start": 410,
"end": 429,
"text": "Barry et al., 2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text (input):",
"sec_num": null
},
{
"text": "Our task is two-fold. First, given a text document describing an environment, we wish to extract a set of precondition/effect relations implied by the text. Second, we wish to use these induced relations to determine an action sequence for completing a given task in the environment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "We formalize our task as illustrated in Figure 2 . As input, we are given a world defined by the tuple S, A, T , where S is the set of possible world states, A is the set of possible actions and T is a deterministic state transition function. Executing action a in state s causes a transition to a new state s\u2032 according to T(s\u2032 | s, a). States are represented using propositional logic predicates x i \u2208 X, where each state is simply a set of such predicates, i.e. s \u2282 X.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "The objective of the text analysis part of our task is to automatically extract a set of valid precondition/effect relationships from a given document d. Given our definition of the world state, preconditions and effects are merely single term predicates, x i , in this world state. We assume that we are given a seed mapping between a predicate x i , and the word types in the document that reference it (see Table 3 for examples). Thus, for each predicate pair x k , x l , we want to utilize the text to predict whether x k is a precondition for x l ; i.e., x k \u2192 x l . For example, from the text in Figure 2 , we want to predict that possessing a pickaxe is a precondition for possessing stone. Note that this relation implies the reverse as well, i.e. x l can be interpreted as the effect of an action sequence performed on state x k .",
"cite_spans": [],
"ref_spans": [
{
"start": 410,
"end": 417,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 602,
"end": 610,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "Each planning goal g \u2208 G is defined by a starting state s g 0 , and a final goal state s g f . This goal state is represented by a set of predicates which need to be made true. In the planning part of our task, our objective is to find a sequence of actions a that connect s g 0 to s g f . Finally, we assume document d does not contain step-by-step instructions for any individual task, but instead describes general facts about the given world that are useful for a wide variety of tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3"
},
{
"text": "The key idea behind our model is to leverage textual descriptions of preconditions and effects to guide the construction of high-level plans. We define a high-level plan as a sequence of subgoals, where each subgoal is represented by a single-term predicate, x i , that needs to be set in the corresponding world state - e.g. have(wheat)=true. Thus the set of possible subgoals is defined by the set of all possible single-term predicates in the domain. In contrast to low-level plans, the transition between these subgoals can involve multiple low-level actions. Our algorithm for textually informed high-level planning operates in four steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "1. Use text to predict the preconditions of each subgoal. These predictions are for the entire domain and are not goal specific.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "2. Given a planning goal and the induced preconditions, predict a subgoal sequence that achieves the given goal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "3. Execute the predicted sequence by giving each pair of consecutive subgoals to a low-level planner. This planner, treated as a black-box, computes the low-level plan actions necessary to transition from one subgoal to the next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "4. Update the model parameters, using the lowlevel planner's success or failure as the source of supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "We formally define these steps below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "Modeling Precondition Relations Given a document d, and a set of subgoal pairs x i , x j , we want to predict whether subgoal x i is a precondition for x j . We assume that precondition relations are generally described within single sentences. We first use our seed grounding in a preprocessing step where we extract all predicate pairs where both predicates are mentioned in the same sentence. We call this set the Candidate Relations. Note that this set will contain many invalid relations since co-occurrence in a sentence does not necessarily imply a valid precondition relation. 5 Thus for each sentence, w k , associated with a given Candidate Relation, x i \u2192 x j , our task is to predict whether the sentence indicates the relation. We model this decision via a log-linear distribution as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(x i \u2192 x j | w k , q k ; \u03b8 c ) \u221d e \u03b8c\u2022\u03c6c(x i ,x j , w k ,q k ) ,",
"eq_num": "(1)"
}
],
"section": "Model",
"sec_num": "4"
},
{
"text": "where \u03b8 c is the vector of model parameters. We compute the feature function \u03c6 c using the seed 5 In our dataset only 11% of Candidate Relations are valid. ",
"cite_spans": [
{
"start": 96,
"end": 97,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "for i = 1 \u2022 \u2022 \u2022 T do Sample valid preconditions: C \u2190 \u2205 foreach x i , x j \u2208 C all do foreach Sentence w k containing x i and x j do v \u223c p(x i \u2192 x j | w k , q k ; \u03b8 c ) if v = 1 then C = C \u222a x i , x j end end Predict subgoal sequences for each task g. foreach g \u2208 G do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "Sample subgoal sequence x as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "for t = 1 \u2022 \u2022 \u2022 n do Sample next subgoal: x t \u223c p(x | x t\u22121 , s g 0 , s g f , C; \u03b8 x ) Construct low-level subtask from x t\u22121 to x t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "Execute low-level planner on subtask end Update subgoal prediction model using Eqn. 2 end Update text precondition model using Eqn. 3 end Algorithm 1: A policy gradient algorithm for parameter estimation in our model. grounding, the sentence w k , and a given dependency parse q k of the sentence. Given these per-sentence decisions, we predict the set of all valid precondition relations, C, in a deterministic fashion. We do this by considering a precondition x i \u2192 x j as valid if it is predicted to be valid by at least one sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "Modeling Subgoal Sequences Given a planning goal g, defined by initial and final goal states s g 0 and s g f , our task is to predict a sequence of subgoals x which will achieve the goal. We condition this decision on our predicted set of valid preconditions C, by modeling the distribution over sequences x as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "p( x | s g 0 , s g f , C; \u03b8 x ) = \u220f n t=1 p(x t | x t\u22121 , s g 0 , s g f , C; \u03b8 x ), p(x t | x t\u22121 , s g 0 , s g f , C; \u03b8 x ) \u221d e \u03b8x\u2022\u03c6x(xt,x t\u22121 ,s g 0 ,s g f ,C) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "Here we assume that subgoal sequences are Markovian in nature and model individual subgoal predictions using a log-linear model. Note that in contrast to Equation 1 where the predictions are goal-agnostic, these predictions are goal-specific. As before, \u03b8 x is the vector of model parameters, and \u03c6 x is the feature function. Additionally, we assume a special stop symbol, x \u2205 , which indicates the end of the subgoal sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "Parameter Update Parameter updates in our model are done via reinforcement learning. Specifically, once the model has predicted a subgoal sequence for a given goal, the sequence is given to the low-level planner for execution. The success or failure of this execution is used to compute the reward signal r for parameter estimation. This predict-execute-update cycle is repeated until convergence. We assume that our reward signal r strongly correlates with the correctness of model predictions. Therefore, during learning, we need to find the model parameters that maximize expected future reward (Sutton and Barto, 1998) . We perform this maximization via stochastic gradient ascent, using the standard policy gradient algorithm (Williams, 1992; Sutton et al., 2000) . We perform two separate policy gradient updates, one for each model component. The objective of the text component of our model is purely to predict the validity of preconditions. Therefore, subgoal pairs x k , x l , where x l is reachable from x k , are given positive reward. The corresponding parameter update, with learning rate \u03b1 c , takes the following form:",
"cite_spans": [
{
"start": 598,
"end": 622,
"text": "(Sutton and Barto, 1998)",
"ref_id": "BIBREF27"
},
{
"start": 731,
"end": 747,
"text": "(Williams, 1992;",
"ref_id": "BIBREF30"
},
{
"start": 748,
"end": 768,
"text": "Sutton et al., 2000)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "\u2206\u03b8 c \u2190 \u03b1 c r \u03c6 c (x i , x j , w k , q k ) \u2212 E p(x i \u2192x j |\u2022) \u03c6 c (x i , x j , w k , q k ) . (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "The objective of the planning component of our model is to predict subgoal sequences that successfully achieve the given planning goals. Thus we directly use plan-success as a binary reward signal, which is applied to each subgoal decision in a sequence. This results in the following update:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2206\u03b8 x \u2190 \u03b1 x r t \u03c6 x (x t , x t\u22121 , s g 0 , s g f , C) \u2212 E p(x t |\u2022) \u03c6 x (x t , x t\u22121 , s g 0 , s g f , C) ,",
"eq_num": "(3)"
}
],
"section": "Model",
"sec_num": "4"
},
{
"text": "where t indexes into the subgoal sequence and \u03b1 x is the learning rate. Table 1 : A comparison of complexity between Minecraft and some domains used in the IPC-2011 sequential satisficing track. In the Minecraft domain, the number of objects, predicate types, and actions is significantly larger.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "4"
},
{
"text": "We apply our method to Minecraft, a grid-based virtual world. Each grid location represents a tile of either land or water and may also contain resources. Users can freely move around the world, harvest resources and craft various tools and objects from these resources. The dynamics of the world require certain resources or tools as prerequisites for performing a given action, as can be seen in Figure 3 . For example, a user must first craft a bucket before they can collect milk.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Applying the Model",
"sec_num": "5"
},
{
"text": "Defining the Domain In order to execute a traditional planner on the Minecraft domain, we define the domain using the Planning Domain Definition Language (PDDL) (Fox and Long, 2003) . This is the standard task definition language used in the International Planning Competitions (IPC). 6 We define as predicates all aspects of the game state -for example, the location of resources in the world, the resources and objects possessed by the player, and the player's location. Our subgoals x i and our task goals s g f map directly to these predicates. This results in a domain with significantly greater complexity than those solvable by traditional low-level planners. Table 1 compares the complexity of our domain with some typical planning domains used in the IPC.",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "(Fox and Long, 2003)",
"ref_id": "BIBREF11"
},
{
"start": 285,
"end": 286,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applying the Model",
"sec_num": "5"
},
{
"text": "Low-level Planner As our low-level planner we employ Metric-FF (Hoffmann and Nebel, 2001) , the state-of-the-art baseline used in the 2008 International Planning Competition. Metric-FF is a forward-chaining heuristic state space planner. Its main heuristic is to simplify the task by ignoring operator delete lists. The number of actions in the solution for this simplified task is then used as the goal distance estimate for various search strategies.",
"cite_spans": [
{
"start": 63,
"end": 89,
"text": "(Hoffmann and Nebel, 2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applying the Model",
"sec_num": "5"
},
{
"text": "Features The two components of our model leverage different types of information, and as a result, they each use distinct sets of features. The text component features \u03c6 c are computed over sentences and their dependency parses. The Stanford parser (de Marneffe et al., 2006) was used to generate the dependency parse information for each sentence. Examples of these features appear in Table 2 . The sequence prediction component takes as input both the preconditions induced by the text component as well as the planning state and the previous subgoal. Thus \u03c6 x contains features which check whether two subgoals are connected via an induced precondition relation, in addition to features which are simply the Cartesian product of domain predicates.",
"cite_spans": [
{
"start": 249,
"end": 275,
"text": "(de Marneffe et al., 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 386,
"end": 393,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Applying the Model",
"sec_num": "5"
},
{
"text": "Datasets As the text description of our virtual world, we use documents from the Minecraft Wiki, 7 the most popular information source about the game. Our manually constructed seed grounding of predicates contains 74 entries, examples of which can be seen in Table 3 . We use this seed grounding to identify a set of 242 sentences that reference predicates in the Minecraft domain. This results in a set of 694 Candidate Relations. We also manually annotated the relations expressed in the text, identifying 94 of the Candidate Relations as valid. Our corpus contains 979 unique word types and is composed of sentences with an average length of 20 words. We test our system on a set of 98 problems that involve collecting resources and constructing objects in the Minecraft domain - for example, fishing, cooking and making furniture. To assess the complexity of these tasks, we manually constructed high-level plans for these goals and solved them using the Metric-FF planner. On average, the execution of the sequence of low-level plans takes 35 actions, with 3 actions for the shortest plan and 123 actions for the longest. The average branching factor is 9.7, leading to an average search space of more than 10^34 possible action sequences. For evaluation purposes, we manually identify a set of Gold Relations consisting of all precondition relations that are valid in this domain, including those not discussed in the text. [Footnote 7: http://www.minecraftwiki.net/wiki/Minecraft Wiki/] [Table 2 feature templates: Words; Dependency Types; Dependency Type \u00d7 Direction; Word \u00d7 Dependency Type; Word \u00d7 Dependency Type \u00d7 Direction]",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6"
},
{
"text": "We use our manual annotations to evaluate the type-level accuracy of relation extraction. To evaluate our high-level planner, we use the standard measure adopted by the IPC. This evaluation measure simply assesses whether the planner completes a task within a predefined time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "Baselines To evaluate the performance of our relation extraction, we compare against an SVM classifier 8 trained on the Gold Relations. We test the SVM baseline in a leave-one-out fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "To evaluate the performance of our text-aware high-level planner, we compare against five baselines. The first two baselines -FF and No Textdo not use any textual information. The FF baseline directly runs the Metric-FF planner on the given task, while the No Text baseline is a variant of our model that learns to plan in the reinforcement learning framework. It uses the same state-level features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "Seeds for growing wheat can be obtained by breaking tall grass (false nega tive)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2718 \u2718",
"sec_num": null
},
{
"text": "Sticks are the only building material required to craft a fence or ladder. Figure 4 : Examples of precondition relations predicted by our model from text. Check marks () indicate correct predictions, while a cross () marks the incorrect one -in this case, a valid relation that was predicted as invalid by our model. Note that each pair of highlighted noun phrases in a sentence is a Candidate Relation, and pairs that are not connected by an arrow were correctly predicted to be invalid by our model.",
"cite_spans": [],
"ref_spans": [
{
"start": 75,
"end": 83,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "\u2718 \u2718",
"sec_num": null
},
{
"text": "Figure 5: The performance of our model and a supervised SVM baseline on the precondition prediction task. Also shown is the F-Score of the full set of Candidate Relations which is used unmodified by All Text, and is given as input to our model. Our model's F-score, averaged over 200 trials, is shown with respect to learning iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "100 150 50",
"sec_num": "200"
},
{
"text": "as our model, but does not have access to text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "100 150 50",
"sec_num": "200"
},
{
"text": "The All Text baseline has access to the full set of 694 Candidate Relations. During learning, our full model refines this set of relations, while in contrast the All Text baseline always uses the full set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "100 150 50",
"sec_num": "200"
},
{
"text": "The two remaining baselines constitute the upper bound on the performance of our model. The first, Manual Text, is a variant of our model which directly uses the links derived from manual annotations of preconditions in text. The second, Gold, has access to the Gold Relations. Note that the connections available to Manual Text are a subset of the Gold links, because the text does not specify all relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "100 150 50",
"sec_num": "200"
},
{
"text": "Experimental Details All experimental results are averaged over 200 independent runs for both our model as well as the baselines. Each of these trials is run for 200 learning iterations with a maximum subgoal sequence length of 10. To find a low-level plan between each consecutive pair of subgoals, our high-level planner internally uses Metric-FF. We give Metric-FF a one-minute timeout to find such a low-level plan. To ensure that the comparison between the high-level planners and the FF baseline is fair, the FF baseline is allowed a runtime of 2,000 minutes. This is an upper bound on the time that our high-level planner can take over the 200 learning iterations, with subgoal sequences of length at most 10 and a one minute timeout. Lastly, during learning we initialize all parameters to zero, use a fixed learning rate of 0.0001, and encourage our model to explore the state space by using the standard -greedy exploration strategy (Sutton and Barto, 1998) .",
"cite_spans": [
{
"start": 943,
"end": 967,
"text": "(Sutton and Barto, 1998)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "100 150 50",
"sec_num": "200"
},
{
"text": "Relation Extraction Figure 5 shows the performance of our method on identifying preconditions in text. We also show the performance of the supervised SVM baseline. As can be seen, after 200 learning iterations, our model achieves an F-Measure of 66%, equal to the supervised baseline. These results support our hypothesis that planning feedback is a powerful source of supervision for analyzing a given text corpus. Figure 4 shows some examples of sentences and the corresponding extracted relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 5",
"ref_id": null
},
{
"start": 416,
"end": 424,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Planning Performance As shown in Table 4 our text-enriched planning model outperforms the textfree baselines by more than 10%. Moreover, the performance improvement of our model over the All Text baseline demonstrates that the accuracy of the extracted text relations does indeed impact planning performance. A similar conclusion can be reached by comparing the performance of our model and the Manual Text baseline. The difference in performance of 2.35% between Manual Text and Gold shows the importance of the precondition information that is missing from the text. Note that Gold itself does not complete all tasks -this is largely because the Markov assumption made by our model does not hold for all tasks. 9 Figure 6 breaks down the results based on the difficulty of the corresponding planning task. We measure problem complexity in terms of the low-level steps needed to implement a manually constructed high-level plan. Based on this measure, we divide the problems into two sets. As can be seen, all of the high-level planners solve almost all of the easy problems. However, performance varies greatly on the more challenging tasks, directly correlating with planner sophistication. On these tasks our model outperforms the No Text baseline by 28% and the All Text baseline by 11%. Figure 7 shows the top five positive features for our model and the SVM baseline. Both models picked up on the words that indicate precondition relations in this domain. For instance, the word use often occurs in sentences that describe the resources required to make an object, such as \"bricks are items used to craft brick blocks\". In addition to lexical features, dependency information is also given high weight by both learners. 
An example path has word \"craft\" path has dependency type \"partmod\" path has word \"equals\" path has word \"use\" path has dependency type \"xsubj\" path has word \"use\" path has word \"fill\" path has dependency type \"dobj\" path has dependency type \"xsubj\" path has word \"craft\" Figure 7 : The top five positive features on words and dependency types learned by our model (above) and by SVM (below) for precondition prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 40,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 715,
"end": 723,
"text": "Figure 6",
"ref_id": "FIGREF4"
},
{
"start": 1293,
"end": 1301,
"text": "Figure 7",
"ref_id": null
},
{
"start": 1999,
"end": 2007,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "of this is a feature that checks for the direct object dependency type. This analysis is consistent with prior work on event semantics which shows lexicosyntactic features are effective cues for learning text relations (Blanco et al., 2008; Beamer and Girju, 2009; Do et al., 2011) .",
"cite_spans": [
{
"start": 219,
"end": 240,
"text": "(Blanco et al., 2008;",
"ref_id": "BIBREF3"
},
{
"start": 241,
"end": 264,
"text": "Beamer and Girju, 2009;",
"ref_id": "BIBREF2"
},
{
"start": 265,
"end": 281,
"text": "Do et al., 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Analysis",
"sec_num": null
},
{
"text": "In this paper, we presented a novel technique for inducing precondition relations from text by grounding them in the semantics of planning operations. While using planning feedback as its only source of supervision, our method for relation extraction achieves a performance on par with that of a supervised baseline. Furthermore, relation grounding provides a new view on classical planning problems which enables us to create high-level plans based on language abstractions. We show that building highlevel plans in this manner significantly outperforms traditional techniques in terms of task completion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "The code, data and experimental setup for this work are available at http://groups.csail.mit.edu/rbg/code/planning",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.minecraft.net/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "If a planner can find a plan to successfully obtain stone after obtaining a pickaxe, then a pickaxe is likely a precondition for stone. Conversely, if a planner obtains stone without first obtaining a pickaxe, then it is likely not a precondition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The state-of-the-art baseline used in the 2008 International Planning Competition. http://ipc.informatik.uni-freiburg.de/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://ipc.icaps-conference.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "SVM light(Joachims, 1999) with default parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "When a given task has two non-trivial preconditions, our model will choose to satisfy one of the two first, and the Markov assumption blinds it to the remaining precondition, preventing it from determining that it must still satisfy the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835652), the DARPA Machine Reading Program (FA8750-09-C-0172, PO#4910018860), and Batelle (PO#300662). Thanks to Amir Globerson, Tommi Jaakkola, Leslie Kaelbling, George Konidaris, Dylan Hadfield-Menell, Stefanie Tellex, the MIT NLP group, and the ACL reviewers for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Downward refinement and the efficiency of hierarchical problem solving",
"authors": [
{
"first": "Fahiem",
"middle": [],
"last": "Bacchus",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 1994,
"venue": "Artificial Intell",
"volume": "71",
"issue": "1",
"pages": "43--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fahiem Bacchus and Qiang Yang. 1994. Downward refinement and the efficiency of hierarchical problem solving. Artificial Intell., 71(1):43-100.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "DetH*: Approximate hierarchical solution of large markov decision processes",
"authors": [
{
"first": "Jennifer",
"middle": [
"L"
],
"last": "Barry",
"suffix": ""
},
{
"first": "Leslie",
"middle": [
"Pack"
],
"last": "Kaelbling",
"suffix": ""
},
{
"first": "Toms",
"middle": [],
"last": "Lozano-Prez",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCAI'11",
"volume": "",
"issue": "",
"pages": "1928--1935",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer L. Barry, Leslie Pack Kaelbling, and Toms Lozano-Prez. 2011. DetH*: Approximate hierarchi- cal solution of large markov decision processes. In IJCAI'11, pages 1928-1935.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using a bigram event model to predict causal potential",
"authors": [
{
"first": "Brandon",
"middle": [],
"last": "Beamer",
"suffix": ""
},
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CICLing",
"volume": "",
"issue": "",
"pages": "430--441",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brandon Beamer and Roxana Girju. 2009. Using a bi- gram event model to predict causal potential. In Pro- ceedings of CICLing, pages 430-441.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Causal relation extraction",
"authors": [
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
},
{
"first": "Nuria",
"middle": [],
"last": "Castell",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the LREC'08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduardo Blanco, Nuria Castell, and Dan Moldovan. 2008. Causal relation extraction. In Proceedings of the LREC'08.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Reinforcement learning for mapping instructions to actions",
"authors": [
{
"first": "S",
"middle": [
"R"
],
"last": "Branavan",
"suffix": ""
},
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "82--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.R.K Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of ACL, pages 82-90.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Reading between the lines: Learning to map high-level instructions to commands",
"authors": [
{
"first": "S",
"middle": [
"R"
],
"last": "Branavan",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "1268--1277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.R.K Branavan, Luke Zettlemoyer, and Regina Barzilay. 2010. Reading between the lines: Learning to map high-level instructions to commands. In Proceedings of ACL, pages 1268-1277.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning to win by reading manuals in a montecarlo framework",
"authors": [
{
"first": "S",
"middle": [
"R K"
],
"last": "Branavan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Silver",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "268--277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. R. K. Branavan, David Silver, and Regina Barzilay. 2011. Learning to win by reading manuals in a monte- carlo framework. In Proceedings of ACL, pages 268- 277.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incremental cue phrase learning and bootstrapping method for causality extraction using cue phrase and word pair probabilities",
"authors": [
{
"first": "Du-Seong",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Key-Sun",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2006,
"venue": "Inf. Process. Manage",
"volume": "42",
"issue": "3",
"pages": "662--678",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Du-Seong Chang and Key-Sun Choi. 2006. Incremen- tal cue phrase learning and bootstrapping method for causality extraction using cue phrase and word pair probabilities. Inf. Process. Manage., 42(3):662-678.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Generating typed dependency parses from phrase structure parses",
"authors": [
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2006,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In LREC 2006.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Minimally supervised event causality identification",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q. Do, Y. Chan, and D. Roth. 2011. Minimally super- vised event causality identification. In EMNLP, 7.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Intentional context in situated natural language learning",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Fleischman",
"suffix": ""
},
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CoNLL",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Fleischman and Deb Roy. 2005. Intentional context in situated natural language learning. In Pro- ceedings of CoNLL, pages 104-111.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pddl2.1: An extension to pddl for expressing temporal planning domains",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Fox",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Long",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Artificial Intelligence Research",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Fox and Derek Long. 2003. Pddl2.1: An ex- tension to pddl for expressing temporal planning do- mains. Journal of Artificial Intelligence Research, 20:2003.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automated Planning: theory and practice",
"authors": [
{
"first": "Malik",
"middle": [],
"last": "Ghallab",
"suffix": ""
},
{
"first": "Dana",
"middle": [
"S"
],
"last": "Nau",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Traverso",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malik Ghallab, Dana S. Nau, and Paolo Traverso. 2004. Automated Planning: theory and practice. Morgan Kaufmann.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Text mining for causal relations",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
},
{
"first": "Dan",
"middle": [
"I"
],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedigns of FLAIRS",
"volume": "",
"issue": "",
"pages": "360--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roxana Girju and Dan I. Moldovan. 2002. Text mining for causal relations. In Proceedigns of FLAIRS, pages 360-364.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The FF planning system: Fast plan generation through heuristic search",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Nebel",
"suffix": ""
}
],
"year": 2001,
"venue": "JAIR",
"volume": "14",
"issue": "",
"pages": "253--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Hoffmann and Bernhard Nebel. 2001. The FF plan- ning system: Fast plan generation through heuristic search. JAIR, 14:253-302.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Advances in kernel methods. chapter Making large-scale support vector machine learning practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "169--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1999. Advances in kernel meth- ods. chapter Making large-scale support vector ma- chine learning practical, pages 169-184. MIT Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A causal approach to hierarchical decomposition of factored mdps",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "Jonsson",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Barto",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in Neural Information Processing Systems",
"volume": "13",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders Jonsson and Andrew Barto. 2005. A causal approach to hierarchical decomposition of factored mdps. In Advances in Neural Information Processing Systems, 13:10541060, page 22. Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Expressivity of strips-like and htn-like planning",
"authors": [
{
"first": "Mari\u00e1n",
"middle": [],
"last": "Lekav\u00fd",
"suffix": ""
},
{
"first": "Pavol",
"middle": [],
"last": "N\u00e1vrat",
"suffix": ""
}
],
"year": 2007,
"venue": "Lecture Notes in Artificial Intelligence",
"volume": "4496",
"issue": "",
"pages": "121--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mari\u00e1n Lekav\u00fd and Pavol N\u00e1vrat. 2007. Expressivity of strips-like and htn-like planning. Lecture Notes in Artificial Intelligence, 4496:121-130.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning semantic correspondences with less supervision",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervi- sion. In Proceedings of ACL, pages 91-99.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic discovery and transfer of maxq hierarchies",
"authors": [
{
"first": "Neville",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Soumya",
"middle": [],
"last": "Ray",
"suffix": ""
},
{
"first": "Prasad",
"middle": [],
"last": "Tadepalli",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th international conference on Machine learning, ICML '08",
"volume": "",
"issue": "",
"pages": "648--655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neville Mehta, Soumya Ray, Prasad Tadepalli, and Thomas Dietterich. 2008. Automatic discovery and transfer of maxq hierarchies. In Proceedings of the 25th international conference on Machine learning, ICML '08, pages 648-655.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning language from its perceptual context",
"authors": [
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ECML/PKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond J. Mooney. 2008a. Learning language from its perceptual context. In Proceedings of ECML/PKDD.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning to connect language and perception",
"authors": [
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "1598--1601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond J. Mooney. 2008b. Learning to connect lan- guage and perception. In Proceedings of AAAI, pages 1598-1601.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The processes of creative thinking. Paper P-1320",
"authors": [
{
"first": "A",
"middle": [],
"last": "Newell",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Shaw",
"suffix": ""
},
{
"first": "H",
"middle": [
"A"
],
"last": "Simon",
"suffix": ""
}
],
"year": 1959,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Newell, J.C. Shaw, and H.A. Simon. 1959. The pro- cesses of creative thinking. Paper P-1320. Rand Cor- poration.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Grounding knowledge in sensors: Unsupervised learning for language and planning",
"authors": [
{
"first": "James Timothy",
"middle": [],
"last": "Oates",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Timothy Oates. 2001. Grounding knowledge in sensors: Unsupervised learning for language and planning. Ph.D. thesis, University of Massachusetts Amherst.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Extracting STRIPS representations of actions and events",
"authors": [
{
"first": "Avirup",
"middle": [],
"last": "Sil",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2011,
"venue": "Recent Advances in Natural Language Learning (RANLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avirup Sil and Alexander Yates. 2011. Extract- ing STRIPS representations of actions and events. In Recent Advances in Natural Language Learning (RANLP).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Extracting action and event semantics from web text",
"authors": [
{
"first": "Avirup",
"middle": [],
"last": "Sil",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2010,
"venue": "AAAI 2010 Fall Symposium on Commonsense Knowledge (CSK)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avirup Sil, Fei Huang, and Alexander Yates. 2010. Ex- tracting action and event semantics from web text. In AAAI 2010 Fall Symposium on Commonsense Knowl- edge (CSK).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic",
"authors": [
{
"first": "Jeffrey",
"middle": [
"Mark"
],
"last": "Siskind",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Artificial Intelligence Research",
"volume": "15",
"issue": "",
"pages": "31--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Mark Siskind. 2001. Grounding the lexical se- mantics of verbs in visual perception using force dy- namics and event logic. Journal of Artificial Intelli- gence Research, 15:31-90.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Reinforcement Learning: An Introduction",
"authors": [
{
"first": "Richard",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S. Sutton and Andrew G. Barto. 1998. Rein- forcement Learning: An Introduction. The MIT Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Policy gradient methods for reinforcement learning with function approximation",
"authors": [
{
"first": "Richard",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcallester",
"suffix": ""
},
{
"first": "Satinder",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Yishay",
"middle": [],
"last": "Mansour",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in NIPS",
"volume": "",
"issue": "",
"pages": "1057--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in NIPS, pages 1057-1063.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning to follow navigational directions",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "806--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Vogel and Daniel Jurafsky. 2010. Learning to follow navigational directions. In Proceedings of the ACL, pages 806-814.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ronald",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Machine Learning",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforcement learning. Machine Learning, 8.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Identifying useful subgoals in reinforcement learning by local graph partitioning",
"authors": [
{
"first": "Alicia",
"middle": [
"P"
],
"last": "Wolfe",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Twenty-Second International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "816--823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alicia P. Wolfe and Andrew G. Barto. 2005. Identify- ing useful subgoals in reinforcement learning by local graph partitioning. In In Proceedings of the Twenty- Second International Conference on Machine Learn- ing, pages 816-823.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "On the integration of grounding language and learning objects",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Dana",
"middle": [
"H"
],
"last": "Ballard",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "488--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Yu and Dana H. Ballard. 2004. On the integration of grounding language and learning objects. In Pro- ceedings of AAAI, pages 488-493.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Text description of preconditions and effects (a), and the low-level actions connecting them (b).",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "A high-level plan showing two subgoals in a precondition relation. The corresponding sentence is shown above.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Input: A document d, Set of planning tasks G, Set of candidate precondition relations C all , Reward function r(), Number of iterations T Initialization:Model parameters \u03b8 x = 0 and \u03b8 c = 0.",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Example of the precondition dependencies present in the Minecraft domain.",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Percentage of problems solved by various models on Easy and Hard problem sets.",
"num": null,
"type_str": "figure"
},
"TABREF2": {
"text": "Example text features. A subgoal pair x i , x j is first mapped to word tokens using a small grounding table. Words and dependencies are extracted along paths between mapped target words. These are combined with path directions to generate the text features.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"2\">Domain Predicate Noun Phrases</td></tr><tr><td>have(plank)</td><td>wooden plank, wood plank</td></tr><tr><td>have(stone)</td><td>stone, cobblestone</td></tr><tr><td>have(iron)</td><td>iron ingot</td></tr></table>"
},
"TABREF3": {
"text": "Examples in our seed grounding table. Each predicate is mapped to one or more noun phrases that describe it in the text.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF5": {
"text": "Percentage of tasks solved successfully by our model and the baselines. All performance differences between methods are statistically significant at p \u2264 .01.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}