| { |
| "paper_id": "P16-1011", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:00:04.513595Z" |
| }, |
| "title": "Incremental Acquisition of Verb Hypothesis Space towards Physical World Interaction", |
| "authors": [ |
| { |
| "first": "Lanbo", |
| "middle": [], |
| "last": "She", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "shelanbo@cse.msu.edu" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "jchai@cse.msu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "As a new generation of cognitive robots start to enter our lives, it is important to enable robots to follow human commands and to learn new actions from human language instructions. To address this issue, this paper presents an approach that explicitly represents verb semantics through hypothesis spaces of fluents and automatically acquires these hypothesis spaces by interacting with humans. The learned hypothesis spaces can be used to automatically plan for lower-level primitive actions towards physical world interaction. Our empirical results have shown that the representation of a hypothesis space of fluents, combined with the learned hypothesis selection algorithm, outperforms a previous baseline. In addition, our approach applies incremental learning, which can contribute to lifelong learning from humans in the future.", |
| "pdf_parse": { |
| "paper_id": "P16-1011", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "As a new generation of cognitive robots start to enter our lives, it is important to enable robots to follow human commands and to learn new actions from human language instructions. To address this issue, this paper presents an approach that explicitly represents verb semantics through hypothesis spaces of fluents and automatically acquires these hypothesis spaces by interacting with humans. The learned hypothesis spaces can be used to automatically plan for lower-level primitive actions towards physical world interaction. Our empirical results have shown that the representation of a hypothesis space of fluents, combined with the learned hypothesis selection algorithm, outperforms a previous baseline. In addition, our approach applies incremental learning, which can contribute to lifelong learning from humans in the future.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "As a new generation of cognitive robots start to enter our lives, it is important to enable robots to follow human commands (Tellex et al., 2014; Thomason et al., 2015) and to learn new actions from human language instructions (Cantrell et al., 2012; Mohan et al., 2013) . To achieve such a capability, one of the fundamental challenges is to link higher-level concepts expressed by human language to lower-level primitive actions the robot is familiar with. While grounding language to perception (Gorniak and Roy, 2007; Chen and Mooney, 2011; Kim and Mooney, 2012; Artzi and Zettlemoyer, 2013; Tellex et al., 2014; Liu et al., 2014; Liu and Chai, 2015) has received much attention in recent years, less work has addressed grounding language to robotic action. Actions are often expressed by verbs or verb phrases. Most semantic representations for verbs are based on argument frames (e.g., thematic roles which capture participants of an action). For example, suppose a human directs a robot to \"fill the cup with milk\". The robot will need to first create a semantic representation for the verb \"fill\" where \"the cup\" and \"milk\" are grounded to the respective objects in the environment . Suppose the robot is successful in this first step, it still may not be able to execute the action \"fill\" as it does not know how this higher-level action corresponds to its lower-level primitive actions.", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 145, |
| "text": "(Tellex et al., 2014;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 146, |
| "end": 168, |
| "text": "Thomason et al., 2015)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 227, |
| "end": 250, |
| "text": "(Cantrell et al., 2012;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 251, |
| "end": 270, |
| "text": "Mohan et al., 2013)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 498, |
| "end": 521, |
| "text": "(Gorniak and Roy, 2007;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 522, |
| "end": 544, |
| "text": "Chen and Mooney, 2011;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 545, |
| "end": 566, |
| "text": "Kim and Mooney, 2012;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 567, |
| "end": 595, |
| "text": "Artzi and Zettlemoyer, 2013;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 596, |
| "end": 616, |
| "text": "Tellex et al., 2014;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 617, |
| "end": 634, |
| "text": "Liu et al., 2014;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 635, |
| "end": 654, |
| "text": "Liu and Chai, 2015)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In robotic systems, operations usually consist of multiple segments of lower-level primitive actions (e.g., move to, open gripper, and close gripper) which are executed both sequentially and concurrently. Task scheduling provides the order or schedule for executions of different segments of actions and action planning provides the plan for executing each individual segment. Primitive actions are often predefined in terms of how they change the state of the physical world. Given a goal, task scheduling and action planning will derive a sequence of primitive actions that can change the initial environment to the goal state. The goal state of the physical world becomes a driving force for robot actions. Thus, beyond semantic frames, modeling verb semantics through their effects on the state of the world may provide a link to connect higher-level language and lowerlevel primitive actions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Motivated by this perspective, we have developed an approach where each verb is explicitly represented by a hypothesis space of fluents (i.e., desired goal states) of the physical world, which is incrementally acquired and updated through interacting with humans. More specifically, given a human command, if there is no knowledge about the corresponding verb (i.e., no existing hypothesis space for that verb), the robot will initiate a learning process by asking human partners to demonstrate the sequence of actions that is necessary to accomplish this command. Based on this demonstration, a hypothesis space of fluents for that verb frame will be automatically acquired. If there is an existing hypothesis space for the verb, the robot will select the best hypothesis that is most relevant to the current situation and plan for the sequence of lower-level actions. Based on the outcome of the actions (e.g., whether it has successfully executed the command), the corresponding hypothesis space will be updated. Through this fashion, a hypothesis space for each encountered verb frame is incrementally acquired and updated through continuous interactions with human partners. In this paper, to focus our effort on representations and learning algorithms, we adopted an existing benchmark dataset (Misra et al., 2015) to simulate the incremental learning process and interaction with humans.", |
| "cite_spans": [ |
| { |
| "start": 1300, |
| "end": 1320, |
| "text": "(Misra et al., 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Compared to previous works (She et al., 2014b; Misra et al., 2015) , our approach has three unique characteristics. First, rather than a single goal state associated with a verb, our approach captures a space of hypotheses which can potentially account for a wider range of novel situations when the verb is applied. Second, given a new situation, our approach can automatically identify the best hypothesis that fits the current situation and plan for lower-level actions accordingly. Third, through incremental learning and acquisition, our approach has a potential to contribute to life-long learning from humans. This paper provides details on the hypothesis space representation, the induction and inference algorithms, as well as experiments and evaluation results.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 46, |
| "text": "(She et al., 2014b;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 47, |
| "end": 66, |
| "text": "Misra et al., 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our work here is motivated by previous linguistic studies on verbs, action modeling in AI, and recent advances in grounding language to actions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Previous linguistic studies (Hovav and Levin, 2008; Hovav and Levin, 2010) propose action verbs can be divided into two types: manner verbs that \"specify as part of their meaning a manner of carrying out an action\" (e.g., nibble, rub, laugh, run, swim), and result verbs that \"specify the coming about of a result state\" (e.g., clean, cover, empty, fill, chop, cut, open, enter) . Re-cent work has shown that explicitly modeling resulting change of state for action verbs can improve grounded language understanding . Motivated by these studies, this paper focuses on result verbs and uses hypothesis spaces to explicitly represent the result states associated with these verbs.", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 51, |
| "text": "(Hovav and Levin, 2008;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 52, |
| "end": 74, |
| "text": "Hovav and Levin, 2010)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 328, |
| "end": 378, |
| "text": "clean, cover, empty, fill, chop, cut, open, enter)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In AI literature on action modeling, action schemas are defined with preconditions and effects. Thus, representing verb semantics for action verbs using resulting states can be connected to the agent's underlying planning modules. Different from earlier works in the planning community that learn action models from example plans (Wang, 1995; Yang et al., 2007) and from interactions (Gil, 1994) , our goal here is to explore the representation of verb semantics and its acquisition through language and action.", |
| "cite_spans": [ |
| { |
| "start": 330, |
| "end": 342, |
| "text": "(Wang, 1995;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 343, |
| "end": 361, |
| "text": "Yang et al., 2007)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 384, |
| "end": 395, |
| "text": "(Gil, 1994)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "There has been some work in the robotics community to translate natural language to robotic operations (Kress-Gazit et al., 2007; Jia et al., 2014; Spangenberg and Henrich, 2015) , but not for the purpose of learning new actions. To support action learning, previously we have developed a system where the robot can acquire the meaning of a new verb (e.g., stack) by following human's step-by-step language instructions (She et al., 2014a; She et al., 2014b) . By performing the actions at each step, the robot is able to acquire the desired goal state associated with the new verb. Our empirical results have shown that representing acquired verbs by resulting states allow the robot to plan for primitive actions in novel situations. Moreover, recent work (Misra et al., 2014; Misra et al., 2015) has presented an algorithm for grounding higher-level commands such as \"microwave the cup\" to lowerlevel robot operations, where each verb lexicon is represented as the desired resulting states. Their empirical evaluations once again have shown the advantage of representing verbs as desired states in robotic systems. Different from these previous works, we represent verb semantics through a hypothesis space of fluents (rather than a single hypothesis). In addition, we present an incremental learning approach for inducing the hypothesis space and selecting the best hypothesis.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 129, |
| "text": "(Kress-Gazit et al., 2007;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 130, |
| "end": 147, |
| "text": "Jia et al., 2014;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 148, |
| "end": 178, |
| "text": "Spangenberg and Henrich, 2015)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 420, |
| "end": 439, |
| "text": "(She et al., 2014a;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 440, |
| "end": 458, |
| "text": "She et al., 2014b)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 758, |
| "end": 778, |
| "text": "(Misra et al., 2014;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 779, |
| "end": 798, |
| "text": "Misra et al., 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "An overview of our incremental learning framework is shown in Figure 1 . Given a language Figure 1 : An incremental process of verb acquisition (i.e. learning) and application (i.e. inference).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 62, |
| "end": 70, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 90, |
| "end": 98, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "An Incremental Learning Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "command L i (e.g. \"fill the cup with water.\") and an environment E i (e.g. a simulated environment shown in Figure 1 ), the goal is to identify a sequence of lower-level robotic actions to perform the command. Similar to previous works (Pasula et al., 2007; Mouro et al., 2012) , the environment E i is represented by a conjunction of grounded state fluents, where each fluent describes either the property of an object or relations (e.g. spatial) between objects. The language command L i is first translated to an intermediate representation of grounded verb frame v i through semantic parsing and referential grounding (e.g. for \"fill the cup\", the argument the cup is grounded to Cup1 in the scene). The system knowledge of each verb frame (e.g., fill(x)) is represented by a Hypothesis Space H, where each hypothesis (i.e. a node) is a description of possible fluents -or, in other words, resulting states -that are attributed to executing the verb command. Given a verb frame v i and an environment E i , a Hypothesis Selector will choose an optimal hypothesis from space H to describe the expected resulting state of executing v i in E i . Given this goal state and the current environment, a symbolic planner such as the STRIPS planner (Fikes and Nilsson, 1971 ) is used to generate an action sequence for the agent to execute. If the action sequence correctly performs the command (e.g. as evaluated by a human partner), the hypothesis selector will be updated with the success of its prediction. On the other hand, if the action has never been encountered (i.e., the system has no knowledge about this verb and thus the corresponding space is empty) or the predicted action sequence is incorrect, the human partner will provide an action sequence A i that can correctly perform command v i in the current environment. Using A i as the ground truth information, Figure 2 : An example hypothesis space for the verb frame fill(x). 
The bottom node captures the state changes after executing the fill command in the environment. Anchored by the bottom node, the hypothesis space is generated in a bottom-up fashion. Each node represents a potential goal state. The highlighted nodes are pruned during induction, as they are not consistent with the bottom node.", |
| "cite_spans": [ |
| { |
| "start": 236, |
| "end": 257, |
| "text": "(Pasula et al., 2007;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 258, |
| "end": 277, |
| "text": "Mouro et al., 2012)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1244, |
| "end": 1268, |
| "text": "(Fikes and Nilsson, 1971", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 108, |
| "end": 116, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1871, |
| "end": 1879, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "An Incremental Learning Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "the system will not only update the hypothesis selector, but will also update the existing space of v i . The updated hypothesis space is treated as system knowledge of v i , which will be used in future interaction. Through this procedure, a hypothesis space for each verb frame v i is continually and incrementally updated through human-robot interaction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An Incremental Learning Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To bridge human language and robotic actions, previous works have studied representing the semantics of a verb with a single resulting state (She et al., 2014b; Misra et al., 2015) . One problem of this representation is that when the verb is applied in a new situation, if any part of the resulting state cannot be satisfied, the symbolic planner will not be able to generate a plan for lower-level actions to execute this verb command. The planner is also not able to determine whether the failed part of state representation is even necessary. In fact, this effect is similar to the over-fitting problem. For example, given a sequence of actions of performing fill(x), the induced hypothesis could be \"Has( be applicable. Nevertheless, the first two terms Has(x, W ater) \u2227 Grasping(x) may already be sufficient to generate a plan for completing the verb command.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 160, |
| "text": "(She et al., 2014b;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 161, |
| "end": 180, |
| "text": "Misra et al., 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "State Hypothesis Space", |
| "sec_num": "4" |
| }, |
| { |
| "text": "x, W ater) \u2227 Grasping(x) \u2227 In(x, o 1 ) \u2227 \u00ac(On(x, o 2 ))\",", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "State Hypothesis Space", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To handle this over-fitting problem, we propose a hierarchical hypothesis space to represent verb semantics, as shown in Figure 2 . The space is organized based on a specific-to-general hierarchical structure. Formally, a hypothesis space H for a verb frame is defined as: N, E , where each n i \u2208 N is a hypothesis node and each e ij \u2208 E is a directed edge pointing from parent n i to child n j , meaning node n j is more general than n i and has one less constraint.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 121, |
| "end": 129, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "State Hypothesis Space", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In Figure 2 , the bottom hypothesis (n 1 ) is Has(x, W ater) \u2227 Grasping(x) \u2227 In(x, o1) \u2227 \u00ac(On(x, o2)). A hypothesis n i represents a conjunction of parameterized state fluents l k :", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "State Hypothesis Space", |
| "sec_num": "4" |
| }, |
| { |
| "text": "n i := \u2227 l k , and l k := [\u00ac] pred k (x k 1 [, x k 2 ])", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "State Hypothesis Space", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A fluent l k is composed of a predicate (e.g. object status: Has, or spatial relation: On) and a set of argument variables. It can be positive or negative. Take the bottom node in Figure 2 as an example, it contains four fluents including one negative term (i.e. \u00ac(On(x, o 2 ))) and three positive terms. During inference, the parameters will be grounded to the environment to check whether this hypothesis is applicable.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 180, |
| "end": 188, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "State Hypothesis Space", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Given an initial environment E i , a language command which contains the verb frame v i , and a corresponding action sequence A i , {E i , v i , A i } forms a training instance for hypothesis space induction. First, based on different heuristics, a base hypothesis is generated by comparing the state difference between the final and the initial environment. Second, a hypothesis space H is induced on top of this Base Hypothesis in a bottom-up fashion. And during induction some nodes are pruned. Third, if the system has existing knowledge for the same verb frame (i.e. an existing hypothesis space H t for the same verb frame), this newly induced space will be merged with previous knowledge. Next we explain each step in detail.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Space Induction", |
| "sec_num": "5" |
| }, |
| { |
| "text": "One key concept in the space induction is the Base Hypothesis (e.g. the bottom node in Figure 2 ), which provides a foundation for building a space. As shown in Figure 3 , given a verb frame v i and a working environment E i , the action sequence A i given by a human will change the initial environment E i to a final environment E i . The state changes are highlighted in Figure 3 . Suppose a state change can be described by n fluents. Then the first question is which of these n fluents should be included in the base hypothesis. To gain some understanding on what would be a good representation, we applied different heuristics of choosing fluents to form a base hypothesis as shown in Figure 3:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 87, |
| "end": 95, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 161, |
| "end": 169, |
| "text": "Figure 3", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 374, |
| "end": 382, |
| "text": "Figure 3", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 691, |
| "end": 697, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Base Hypothesis Induction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 H1 argonly : only includes the changed states associated with the argument objects specified in the frame (e.g., in Figure 3 , Kettle1 is the only argument).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 126, |
| "text": "Figure 3", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Base Hypothesis Induction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 H2 manip : includes the changed states of all the objects that have been manipulated in the action sequence taught by the human.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Base Hypothesis Induction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 H3 argrelated : includes the changed states of all the objects related to the argument objects in the final environment. An object o is considered as \"related to\" an argument object if there is a state fluent that includes both o and an argument object in one predicate. (e.g. Stove is related to the argument object Kettle1 through On(Kettle1, Stove)). (0) ,...,t (k) ] from t by removing each single fluent foreach i = 0 ... k do if t (i) is consistent with t then Append t (i) to T ; Add t (i) to N if not already in; Add link t \u2192 t (i) to E if not already in; else Prune t (i) and any node that can be generalized from t (i) end end end Output: Hypothesis space H Algorithm 1: A single hypothesis space induction algorithm. H is a space initialized with a base hypothesis and an empty set of links. T is a temporary container of candidate hypotheses.", |
| "cite_spans": [ |
| { |
| "start": 356, |
| "end": 359, |
| "text": "(0)", |
| "ref_id": null |
| }, |
| { |
| "start": 367, |
| "end": 370, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 495, |
| "end": 498, |
| "text": "(i)", |
| "ref_id": null |
| }, |
| { |
| "start": 538, |
| "end": 541, |
| "text": "(i)", |
| "ref_id": null |
| }, |
| { |
| "start": 579, |
| "end": 582, |
| "text": "(i)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Base Hypothesis Induction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 H4 all : includes all the fluents whose values are changed from E i to E i (e.g. all the four highlighted state fluents in E i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Base Hypothesis Induction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "First we define the consistency between two hypotheses:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Single Space Induction", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Definition. Hypotheses h 1 and h 2 are consistent, if and only if the action sequence A 1 generated from a symbolic planner based on goal state h 1 is exactly the same as the action sequence A 2 generated based on goal state h 2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Single Space Induction", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Given a base hypothesis, the space induction process is a while-loop generalizing hypotheses in a bottom-up fashion, which stops when no hypotheses can be further generalized. As shown in Algorithm 1, a hypothesis node t can firstly be generalized to a set of immediate children [t (0) ,...,t (k) ] by removing a single fluent from t. For example, the base hypothesis n 1 in Figure 2 is composed of 4 fluents, such that 4 immediate children nodes can potentially be generated. If a child node t (i) is consistent with its parent t (i.e. determined based on the consistency defined previously), node t (i) and a link t \u2192 t (i) are added to the space H. The node t (i) is also added to a temporary hypothesis container waiting to be further generalized. On the other hand, some children hypotheses can be inconsistent with their parents. For example, the gray node (n 2 ) in Figure 2 is a child node that is inconsistent with its parent (n 1 ). As n 2 does not explicitly specify Has(x, W ater) as part of its goal state, the symbolic planner generates less steps to achieve goal state n 2 than goal state n 1 . This implies that the semantics of achieving n 2 may be different than those for achieving n 1 . Such hypotheses that are inconsistent with their parents are pruned. In addition, if t (i) is inconsistent with its parent t, any children of t (i) are also inconsistent with t (e.g. children of n 2 in Figure 2 are also gray nodes, meaning they are inconsistent with the base hypothesis). Through pruning, the size of entire space can be greatly reduced.", |
| "cite_spans": [ |
| { |
| "start": 293, |
| "end": 296, |
| "text": "(k)", |
| "ref_id": null |
| }, |
| { |
| "start": 622, |
| "end": 625, |
| "text": "(i)", |
| "ref_id": null |
| }, |
| { |
| "start": 1351, |
| "end": 1354, |
| "text": "(i)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 375, |
| "end": 383, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 873, |
| "end": 881, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1409, |
| "end": 1417, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Single Space Induction", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In the resulting hypothesis space, every single hypothesis is consistent with the base hypothesis. By only keeping consistent hypotheses via pruning, we can remove fluents that are not representative of the main goal associated with the verb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Single Space Induction", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "If the robot has existing knowledge (i.e. hypothesis space H t ) for a verb frame, the induced hypothesis space H from a new instance of the same verb will be merged with the existing space H t . Currently, a new space H t+1 is generated where the nodes of H t+1 are the union of H and H t , and links in H t+1 are generated by checking the parent-child relationship between nodes. In future work, more space merging operations will be explored, and human feedback will be incorporated into the induction process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Space Merging", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Hypothesis selection is applied when the agent intends to execute a command. Given a verb frame extracted from the language command, the agent will first select the best hypothesis (describing the goal state) from the existing knowledge base, and then apply a symbolic planner to generate an action sequence to achieve the goal. In our framework, the model of selecting the best hypothesis is incrementally learned throughout continuous interaction with humans. More specifically, given a correct action sequence (whether performed by the robot or provided by the human), a regression model is trained to capture the fitness of a hypothesis given a particular situation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Inference: Given a verb frame v i and a working environment E i , the goal of inference is to estimate how well each hypothesis h k from a space H t describes the expected result of performing v i in E i . The best fit hypothesis will be used as the goal state to generate the action sequence. Specifically, the \"goodness\" of describing command v i with hypothesis h k in environment E i is formulated as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "f (h k | v i ; E i ; H t ) = W T \u2022 \u03a6(h k , v i , E i , H t ) (1) where \u03a6(h k , v i , E i , H t )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "is a feature vector capturing multiple aspects of the relations among h_k, v_i, E_i, and H_t, as shown in Table 1; and W captures the weight associated with each feature. Example global features include whether the candidate goal h_k is in the top level of the entire space H_t and whether h_k has the highest frequency. Example local features include whether most of the fluents in h_k are already satisfied in the current scene E_i (in which case h_k is unlikely to be a desired goal state). The features also capture whether the same verb frame v_i has been performed in a similar scene during previous interactions, since the hypotheses induced during that experience are more likely to be relevant and are thus preferred.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 106, |
| "end": 113, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
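The scoring and selection step above can be sketched as follows. This is a minimal illustration with made-up feature vectors and weights, not the paper's actual feature set from Table 1:

```python
# Linear fitness f(h_k | v_i; E_i; H_t) = W . Phi(h_k, v_i, E_i, H_t),
# followed by picking the highest-scoring hypothesis as the goal state.

def score(weights, features):
    # Dot product W . Phi
    return sum(w * x for w, x in zip(weights, features))

def select_hypothesis(weights, hypotheses, featurize):
    # Return the hypothesis h_k with the highest fitness score
    return max(hypotheses, key=lambda h: score(weights, featurize(h)))

# Toy feature vectors: [is in top level of H_t, fraction of fluents already
# satisfied in E_i] -- illustrative stand-ins for the features in Table 1.
feats = {"h1": [1.0, 0.2], "h2": [0.0, 0.9], "h3": [1.0, 0.0]}
w = [0.5, -1.0]  # already-satisfied fluents lower the score, as argued above
best = select_hypothesis(w, ["h1", "h2", "h3"], feats.__getitem__)
```

Here the negative weight on the satisfied-fluents feature encodes the intuition that a hypothesis already satisfied in the current scene is unlikely to be a desired goal state.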
| { |
| "text": "Parameter Estimation: Given an action sequence A_i that illustrates how to correctly perform command v_i in environment E_i during interaction, the model weights will be incrementally updated with 1 :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "W_{t+1} = W_t \u2212 \u03b7(\u03b1 \u2202R(W_t)/\u2202W_t + \u2202L(J_{ki}, f_{ki})/\u2202W_t)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "f_{ki} := f(h_k | v_i; E_i; H_t)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "is defined in Equation 1. J_{ki} is the dependent variable the model should approximate, where J_{ki} := J(s_i, h_k) is the Jaccard Index (details in Section 7) between hypothesis h_k and the set of changed states s_i (i.e., the states changed by executing the illustration action sequence", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "A_i in the current environment). L(J_{ki}, f_{ki})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
| { |
| "text": "is a squared loss function, \u03b1R(W_t) is the regularization penalty term, and \u03b7 is a constant learning rate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Selection", |
| "sec_num": "6" |
| }, |
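Under one common choice of squared loss L(J, f) = (J - f)^2 and L2 penalty R(W) = 0.5 ||W||^2, the update rule above reduces to the following step. This is a sketch; the paper uses scikit-learn's SGDRegressor, whose exact constants may differ:

```python
# One SGD step for the weight update above, assuming squared loss
# L(J, f) = (J - f)^2 and L2 penalty R(W) = 0.5 * ||W||^2.

def sgd_step(w, phi, target_j, eta=0.1, alpha=0.01):
    f = sum(wi * xi for wi, xi in zip(w, phi))             # f_ki = W . Phi
    grad_loss = [2.0 * (f - target_j) * xi for xi in phi]  # dL/dW per weight
    grad_pen = list(w)                                     # dR/dW = W
    return [wi - eta * (alpha * gp + gl)
            for wi, gp, gl in zip(w, grad_pen, grad_loss)]

# Toy step: two features, target Jaccard Index J_ki of 1.0
w = sgd_step([0.0, 0.0], phi=[1.0, 0.5], target_j=1.0)
```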
| { |
| "text": "Dataset Description. To evaluate our approach, we used the dataset made available by (Misra et al., 2015). To support incremental learning, each utterance from every original paragraph is extracted so that each command/utterance contains only one verb and its arguments. The corresponding initial environment and an action sequence", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 105, |
| "text": "(Misra et al., 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Features on candidate hypothesis h_k and the space H_t: 1. Whether h_k belongs to the top level of H_t.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "2. Whether h_k has the highest frequency in H_t.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Features on h_k and the current situation E_i: 3. Proportion of fluents in h_k that are already satisfied by E_i. 4. Proportion of non-argument objects in h_k. Examples of non-argument objects are o1 and o2 in Figure 2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 210, |
| "end": 218, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Features on relations between a testing verb frame v_i and previous interaction experience: 5. Whether the same verb frame v_i has been executed previously with the same argument objects. 6. Similarities between the noun phrase descriptions used in the current command and those in commands from the interaction history. taught by a human for each command are also extracted. An example is shown in Figure 3, where L_i is a language command, E_i is the initial working environment, and A_i is a sequence of primitive actions, given by the human, that completes the command. In the original data, some sentences are not aligned with any actions and thus cannot be used for either learning or evaluation. Removing these unaligned sentences resulted in a total of 991 data instances, covering 165 different verb frames.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 399, |
| "end": 407, |
| "text": "Figure 3", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Among the 991 data instances, 793 were used for incremental learning (i.e., space induction and hypothesis selector learning). Specifically, given a command, if the robot correctly predicts an action sequence 2 , this correct prediction is used to update the hypothesis selector. Otherwise, the agent requests a correct action sequence from the human, which is used for hypothesis space induction as well as for updating the hypothesis selector.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
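A runnable sketch of this interaction loop, with the component functions passed in as callables; the stubs below are hypothetical stand-ins for the paper's prediction, space-induction, and selector-update modules:

```python
# Incremental learning loop: a correct prediction (SJI above threshold) only
# updates the selector; an incorrect one triggers hypothesis space induction
# from the human's action sequence before the selector update.

def learn_incrementally(instances, predict, sji, induce, update, threshold=0.5):
    for command, env, human_seq in instances:
        pred_seq = predict(command, env)
        if sji(pred_seq, human_seq) > threshold:
            update(pred_seq, command, env)      # learn from own correct prediction
        else:
            induce(human_seq, command, env)     # space induction from demonstration
            update(human_seq, command, env)

# Toy usage with logging stubs in place of the real components.
log = []
learn_incrementally(
    instances=[("grab cup", "E1", ["grasp(cup)"]),
               ("boil water", "E2", ["open(tap)", "heat(kettle)"])],
    predict=lambda c, e: ["grasp(cup)"] if c == "grab cup" else ["heat(kettle)"],
    sji=lambda a, b: 1.0 if a == b else 0.0,
    induce=lambda seq, c, e: log.append(("induce", c)),
    update=lambda seq, c, e: log.append(("update", c)),
)
```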
| { |
| "text": "The hypothesis spaces and regression-based selectors acquired in each run were evaluated on the remaining 20% (198) testing instances. Specifically, for each testing instance, the induced space and the hypothesis selector were applied to identify a desired goal state. Then a symbolic planner 3 was applied to predict an action sequence A^(p) based on this predicted goal state. We then compared A^(p) with the ground-truth action sequence A^(g) using the following two metrics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "\u2022 IED (Instruction Edit Distance) measures the similarity between the ground-truth action sequence A^(g) and the predicted sequence A^(p). Specifically, the edit distance d between the two action sequences is first calculated. Then d is rescaled as IED = 1 \u2212 d/max(|A^(g)|, |A^(p)|), such that IED ranges from 0 to 1 and a larger IED means the two sequences are more similar. [Figure 4: The overall performance on the testing set with different configurations in generating the base hypothesis and in hypothesis selection: (a) IED results; (b) SJI results. Each configuration is run five times by randomly shuffling the order of learning instances, and the averaged performance is reported. The result from Misra2015 is shown as a line. Results that are statistically significantly better than Misra2015 are marked with * (paired t-test, p < 0.05).]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
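The IED computation can be sketched directly from this definition: a plain Levenshtein distance between the two action sequences, followed by the rescaling given above.

```python
# IED: Levenshtein edit distance d between the two action sequences,
# rescaled to IED = 1 - d / max(|A_g|, |A_p|) so that 1 means identical.

def edit_distance(a, b):
    # Single-row dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # drop action from a
                                     dp[j - 1] + 1,    # drop action from b
                                     prev + (x != y))  # substitute
    return dp[-1]

def ied(gold, pred):
    if not gold and not pred:
        return 1.0
    return 1.0 - edit_distance(gold, pred) / max(len(gold), len(pred))
```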
| { |
| "text": "\u2022 SJI (State Jaccard Index). Because different action sequences can lead to the same goal state, we also use the Jaccard Index to check the overlap between the changed states. Specifically, executing the ground-truth action sequence A^(g) in the initial scene E_i results in a final environment E_i'. Suppose the set of changed states between E_i and E_i' is c^(g). For the predicted action sequence, we can calculate another set of changed states c^(p). The Jaccard Index between c^(g) and c^(p) is evaluated; it also ranges from 0 to 1, and a larger SJI means the predicted state changes are more similar to the ground truth.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
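SJI reduces to a set Jaccard over changed fluents; the fluent strings in this sketch are illustrative:

```python
# SJI as a set Jaccard: overlap between the changed fluents of the gold
# sequence, c_g, and those of the predicted sequence, c_p.

def state_jaccard(changed_gold, changed_pred):
    g, p = set(changed_gold), set(changed_pred)
    if not g and not p:
        return 1.0  # both sequences change nothing: perfect agreement
    return len(g & p) / len(g | p)

# Gold changes two fluents; the prediction reproduces only one of them.
sji = state_jaccard({"On(cup,table)", "Grasped(cup)"}, {"Grasped(cup)"})
```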
| { |
| "text": "Configurations. We also compared the results of using the regression-based selector (i.e., RegressionBased) with the following alternative strategies for selecting a hypothesis:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "\u2022 Misra2015: The state-of-the-art system reported in (Misra et al., 2015) on the command/utterance-level evaluation 4 .", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 73, |
| "text": "(Misra et al., 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "\u2022 MemoryBased: Given the induced space, only the base hypotheses h_k from the learning instances are used. Because these hypotheses have no relaxation, this setting represents pure learning by memorization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "\u2022 MostGeneral: Only the hypotheses at the top level of the hypothesis space, which contain the fewest fluents, are used. These nodes are the most relaxed hypotheses in the space.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "\u2022 MostFrequent: In this setting, the hypotheses that are most frequently observed in the learning instances are used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The results of the overall performance across different configurations are shown in Figure 4 . For both of the IED and SJI (i.e. Figure 4(a) and Figure 4(b) ), the hypothesis spaces with the regression-model-based hypothesis selector always achieve the best performance across different configurations, and outperform the previous approach (Misra et al., 2015). Across the base hypothesis induction strategies, H4 all, which considers all the changed states, achieves the best performance in all configurations, because it keeps all of the state-change information that the other heuristics discard. The performance of H2 manip is similar to H4 all: when all the manipulated objects are considered, the resulting set of changed states covers most of the fluents in H4 all. On the other dimension,", |
| "cite_spans": [ |
| { |
| "start": 340, |
| "end": 360, |
| "text": "(Misra et al., 2015)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 84, |
| "end": 92, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 129, |
| "end": 140, |
| "text": "Figure 4(a)", |
| "ref_id": null |
| }, |
| { |
| "start": 145, |
| "end": 156, |
| "text": "Figure 4(b)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Overall performance", |
| "sec_num": "8.1" |
| }, |
| { |
| "text": "Figure 5 presents the incremental learning results on the testing set. To better present the results, we show the performance based on each learning cycle of 40 instances. The averaged Jaccard Index (SJI) is reported. Specifically, Figure 5(a) shows the results of configurations comparing different base hypothesis induction heuristics using regression-model-based hypothesis selection. After using 200 out of 840 (23.8%) learning instances, all four curves achieve more than 80% of the overall performance. For example, for the heuristic H4 all , the final average Jaccard Index is 0.418. When 200 instances are used, the score is 0.340 (0.340/0.418\u224881%). The same number holds for the other heuristics. After 200 instances, H4 all and H2 manip consistently achieve better performance than H1 argonly and H3 argrelated . This result indicates that while changes of state mostly affect the arguments of the verbs, other state changes in the environment cannot be ignored. Modeling them actually leads to better performance. Using H4 all for base hypothesis induction, Figure 5(b) shows the results of comparing different hypothesis selection strategies. The regression-model-based selector always outperforms the other selection strategies.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 232, |
| "end": 243, |
| "text": "Figure 5(a)", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 1069, |
| "end": 1080, |
| "text": "Figure 5(b)", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incremental Learning Results", |
| "sec_num": "8.2" |
| }, |
| { |
| "text": "Besides the overall evaluation, we have also taken a closer look at individual verb frames. [Figure 6: Incremental evaluation for individual verb frames. Four frequently used verb frames are examined: place(x, y), put(x, y), take(x), and turn(x). The x-axis is the number of incremental learning instances, and the y-axis is the averaged SJI computed with H4 all base hypothesis induction and the regression-based hypothesis selector.] Most of the", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 93, |
| "end": 101, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on Frequently Used Verb Frames", |
| "sec_num": "8.3" |
| }, |
| { |
| "text": "verb frames in the data have a very low frequency, too low to produce statistically significant results, so this evaluation only includes verb frames with frequency larger than 40. For each verb frame, 60% of the data are used for incremental learning and 40% for testing. For each frame, a regression-based selector is trained separately. The resulting SJI curves are shown in Figure 6.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 378, |
| "end": 386, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on Frequently Used Verb Frames", |
| "sec_num": "8.3" |
| }, |
| { |
| "text": "As shown in Figure 6, all four curves become steady after 8 learning instances are used. However, while some verb frames reach final SJIs of more than 0.55 (i.e., take(x) and turn(x)), others have relatively lower results (e.g., results for put(x, y) are below 0.4). After examining the learning instances for put(x, y), we found that these data are noisier than the training data for the other frames. One source of error is incorrect object grounding. For example, a problematic training instance is \"put the pillow on the couch\", where the object grounding module cannot correctly ground \"couch\" to the target object. As a result, the changed states of the second argument (i.e., the \"couch\") are incorrectly identified, which leads to incorrect prediction of the desired states during inference. Another common source of error is the automated parsing of utterances. The action frames generated from the parsing results can be incorrect in the first place, which contributes to a hypothesis space for a wrong frame. These different types of errors are difficult for the system itself to recognize. This points to the future direction of involving humans in a dialogue to learn a more reliable hypothesis space for verb semantics.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 20, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on Frequently Used Verb Frames", |
| "sec_num": "8.3" |
| }, |
| { |
| "text": "This paper presents an incremental learning approach that represents and acquires the semantics of action verbs based on state changes of the environment. Specifically, we propose a hierarchical hypothesis space, where each node describes a possible effect of the verb on the world. Given a language command, the induced hypothesis space, together with a learned hypothesis selector, can be applied by the agent to plan for lower-level actions. Our empirical results have demonstrated a significant improvement in performance over a previous leading approach. More importantly, because our approach learns incrementally, it can potentially be integrated into a dialogue system to support lifelong learning from humans. Our future work will extend the current approach with dialogue modeling to learn more reliable hypothesis spaces of resulting states for verb semantics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "9" |
| }, |
| { |
| "text": "The SGD regressor in scikit-learn (Pedregosa et al., 2011) is used to perform linear regression with L2 regularization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Currently, a prediction is considered correct if the predicted result c^(p) is similar to the human-labeled action sequence c^(g), i.e., SJI(c^(g), c^(p)) > 0.5. 3 The symbolic planner implemented by (Rintanen, 2012) was used to generate action sequences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We applied the same system described in (Misra et al., 2015) to predict action sequences. The only difference is that here we report performance at the command level, not at the paragraph level.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by IIS-1208390 and IIS-1617682 from the National Science Foundation. The authors would like to thank Dipendra K. Misra and colleagues for providing the evaluation data, and the anonymous reviewers for valuable comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "1", |
| "issue": "", |
| "pages": "49--62", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Artzi and Luke Zettlemoyer. 2013. Weakly su- pervised learning of semantic parsers for mapping instructions to actions. Transactions of the Associa- tion for Computational Linguistics, Volume1(1):49- 62.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Tell me when and why to do it! run-time planner model updates via natural language instruction", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Cantrell", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Talamadupula", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Schermerhorn", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Benton", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kambhampati", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Scheutz", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "471--478", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Cantrell, K. Talamadupula, P. Schermerhorn, J. Ben- ton, S. Kambhampati, and M. Scheutz. 2012. Tell me when and why to do it! run-time planner model updates via natural language instruction. In Pro- ceedings of the Seventh Annual ACM/IEEE Inter- national Conference on Human-Robot Interaction (HRI'12), pages 471-478, Boston, Massachusetts, USA, March.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Learning to interpret natural language navigation instructions from observations", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "David", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond J", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-2011)", |
| "volume": "", |
| "issue": "", |
| "pages": "859--865", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David L Chen and Raymond J Mooney. 2011. Learn- ing to interpret natural language navigation instruc- tions from observations. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI- 2011), pages 859-865, San Francisco, California, USA, August.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Strips: A new approach to the application of theorem proving to problem solving", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [ |
| "E" |
| ], |
| "last": "Fikes", |
| "suffix": "" |
| }, |
| { |
| "first": "Nils", |
| "middle": [ |
| "J" |
| ], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 1971, |
| "venue": "Proceedings of the 2nd International Joint Conference on Artificial Intelligence (IJCAI'71)", |
| "volume": "", |
| "issue": "", |
| "pages": "608--620", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard E. Fikes and Nils J. Nilsson. 1971. Strips: A new approach to the application of theorem proving to problem solving. In Proceedings of the 2nd Inter- national Joint Conference on Artificial Intelligence (IJCAI'71), pages 608-620, London, England.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Physical causality of action verbs in grounded language understanding", |
| "authors": [ |
| { |
| "first": "Qiaozi", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Malcolm", |
| "middle": [], |
| "last": "Doering", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaohua", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qiaozi Gao, Malcolm Doering, Shaohua Yang, and Joyce Y. Chai. 2016. Physical causality of action verbs in grounded language understanding. In In Proceedings of the 54th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), Berlin, Germany.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Learning by experimentation: incremental refinement of incomplete planning domains", |
| "authors": [ |
| { |
| "first": "Yolanda", |
| "middle": [], |
| "last": "Gil", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Prococeedings of the Eleventh International Conference on Machine Learning (ICML'94)", |
| "volume": "", |
| "issue": "", |
| "pages": "87--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yolanda Gil. 1994. Learning by experimentation: incremental refinement of incomplete planning do- mains. In Prococeedings of the Eleventh Interna- tional Conference on Machine Learning (ICML'94), pages 87-95, New Brunswick, NJ, USA.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Situated language understanding as filtering perceived affordances. Cognitive", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Gorniak", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Science", |
| "volume": "31", |
| "issue": "2", |
| "pages": "197--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Gorniak and D. Roy. 2007. Situated language un- derstanding as filtering perceived affordances. Cog- nitive Science, Volume31(2):197-231.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Reflections on manner/result complementarity", |
| "authors": [ |
| { |
| "first": "Malka", |
| "middle": [ |
| "Rappaport" |
| ], |
| "last": "Hovav", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Lecture notes", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Malka Rappaport Hovav and Beth Levin. 2008. Re- flections on manner/result complementarity. Lecture notes.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Reflections on Manner / Result Complementarity. Lexical Semantics, Syntax, and Event Structure", |
| "authors": [ |
| { |
| "first": "Malka", |
| "middle": [ |
| "Rappaport" |
| ], |
| "last": "Hovav", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "21--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Malka Rappaport Hovav and Beth Levin. 2010. Re- flections on Manner / Result Complementarity. Lex- ical Semantics, Syntax, and Event Structure, pages 21-38.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Perceptive feedback for natural language control of robotic operations", |
| "authors": [ |
| { |
| "first": "Yunyi", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Ning", |
| "middle": [], |
| "last": "Xi", |
| "suffix": "" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "2014 IEEE International Conference on Robotics and Automation, ICRA 2014", |
| "volume": "", |
| "issue": "", |
| "pages": "6673--6678", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yunyi Jia, Ning Xi, Joyce Y. Chai, Yu Cheng, Rui Fang, and Lanbo She. 2014. Perceptive feedback for nat- ural language control of robotic operations. In 2014 IEEE International Conference on Robotics and Au- tomation, ICRA 2014, Hong Kong, China, May 31 - June 7, 2014, pages 6673-6678.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Unsupervised pcfg induction for grounded language learning with highly ambiguous supervision", |
| "authors": [ |
| { |
| "first": "Joohyun", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing and Natural Language Learning (EMNLP-CoNLL '12)", |
| "volume": "", |
| "issue": "", |
| "pages": "433--444", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joohyun Kim and Raymond J. Mooney. 2012. Un- supervised pcfg induction for grounded language learning with highly ambiguous supervision. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing and Natural Lan- guage Learning (EMNLP-CoNLL '12), pages 433- 444, Jeju Island, Korea.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "From structured english to robot motion. In Intelligent Robots and Systems", |
| "authors": [ |
| { |
| "first": "Hadas", |
| "middle": [], |
| "last": "Kress-Gazit", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Georgios", |
| "suffix": "" |
| }, |
| { |
| "first": "George J", |
| "middle": [], |
| "last": "Fainekos", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pappas", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "IEEE/RSJ International Conference on", |
| "volume": "", |
| "issue": "", |
| "pages": "2717--2722", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hadas Kress-Gazit, Georgios E Fainekos, and George J Pappas. 2007. From structured english to robot mo- tion. In Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on, pages 2717-2722.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Learning to mediate perceptual differences in situated humanrobot dialogue", |
| "authors": [ |
| { |
| "first": "Changsong", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI'15)", |
| "volume": "", |
| "issue": "", |
| "pages": "2288--2294", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Changsong Liu and Joyce Y. Chai. 2015. Learning to mediate perceptual differences in situated human- robot dialogue. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI'15), pages 2288-2294, Austin, Texas, USA.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Probabilistic labeling for efficient referential grounding based on collaborative discourse", |
| "authors": [ |
| { |
| "first": "Changsong", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lanbo", |
| "middle": [], |
| "last": "She", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Fang", |
| "suffix": "" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics ACL'14", |
| "volume": "2", |
| "issue": "", |
| "pages": "13--18", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Changsong Liu, Lanbo She, Rui Fang, and Joyce Y. Chai. 2014. Probabilistic labeling for efficient ref- erential grounding based on collaborative discourse. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics ACL'14 (Volume 2: Short Papers), pages 13-18, Baltimore, MD, USA.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Tell me dave: Contextsensitive grounding of natural language to mobile manipulation instructions", |
| "authors": [ |
| { |
| "first": "Dipendra", |
| "middle": [], |
| "last": "Misra", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaeyong", |
| "middle": [], |
| "last": "Sung", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Saxena", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of Robotics: Science and Systems (RSS'14)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dipendra Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. 2014. Tell me dave: Context- sensitive grounding of natural language to mo- bile manipulation instructions. In Proceedings of Robotics: Science and Systems (RSS'14), Berkeley, US.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Environment-driven lexicon induction for high-level instructions", |
| "authors": [ |
| { |
| "first": "Dipendra", |
| "middle": [ |
| "Kumar" |
| ], |
| "last": "Misra", |
| "suffix": "" |
| }, |
| { |
| "first": "Kejia", |
| "middle": [], |
| "last": "Tao", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Saxena", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing ACL-IJCNLP'15", |
| "volume": "1", |
| "issue": "", |
| "pages": "992--1002", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lex- icon induction for high-level instructions. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing of the Asian Federation of Natural Lan- guage Processing ACL-IJCNLP'15 (Volume 1: Long Papers), pages 992-1002, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "A computational model for situated task learning with interactive instruction", |
| "authors": [ |
| { |
| "first": "Shiwali", |
| "middle": [], |
| "last": "Mohan", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Kirk", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Laird", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the International conference on cognitive modeling (ICCM'13)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shiwali Mohan, James Kirk, and John Laird. 2013. A computational model for situated task learning with interactive instruction. In Proceedings of the International conference on cognitive modeling (ICCM'13).", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Learning strips operators from noisy and incomplete observations", |
| "authors": [ |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Mourão", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ronald", |
| "middle": [ |
| "P", |
| "A" |
| ], |
| "last": "Petrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "614--623", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kira Mouro, Luke S. Zettlemoyer, Ronald P. A. Pet- rick, and Mark Steedman. 2012. Learning strips operators from noisy and incomplete observations. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI'12), pages 614-623, Catalina Island, CA, USA.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Learning symbolic models of stochastic domains", |
| "authors": [ |
| { |
| "first": "Hanna", |
| "middle": [ |
| "M" |
| ], |
| "last": "Pasula", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Leslie", |
| "middle": [ |
| "Pack" |
| ], |
| "last": "Kaelbling", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "29", |
| "issue": "", |
| "pages": "309--352", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hanna M Pasula, Luke S Zettlemoyer, and Leslie Pack Kaelbling. 2007. Learning symbolic models of stochastic domains. Journal of Artificial Intelli- gence Research, Volume29:309-352.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Scikit-learn: Machine learning in Python", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Vanderplas", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cournapeau", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Brucher", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Perrot", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Duchesnay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learn- ing in Python. Journal of Machine Learning Re- search, Volume12:2825-2830.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Planning as satisfiability: Heuristics", |
| "authors": [ |
| { |
| "first": "Jussi", |
| "middle": [], |
| "last": "Rintanen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Artificial Intelligence", |
| "volume": "193", |
| "issue": "", |
| "pages": "45--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jussi Rintanen. 2012. Planning as satisfiability: Heuristics. Artificial Intelligence, Volume193:45- 86.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Teaching robots new actions through natural language instructions", |
| "authors": [ |
| { |
| "first": "Lanbo", |
| "middle": [], |
| "last": "She", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "" |
| }, |
| { |
| "first": "Yunyi", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaohua", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ning", |
| "middle": [], |
| "last": "Xi", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "The 23rd IEEE International Symposium on Robot and Human Interactive Communication", |
| "volume": "14", |
| "issue": "", |
| "pages": "868--873", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lanbo She, Yu Cheng, Joyce Y. Chai, Yunyi Jia, Shaohua Yang, and Ning Xi. 2014a. Teaching robots new actions through natural language instruc- tions. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, IEEE RO-MAN'14, pages 868-873, Edinburgh, UK.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Back to the blocks world: Learning new actions through situated human-robot dialogue", |
| "authors": [ |
| { |
| "first": "Lanbo", |
| "middle": [], |
| "last": "She", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaohua", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Yunyi", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "" |
| }, |
| { |
| "first": "Ning", |
| "middle": [], |
| "last": "Xi", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)", |
| "volume": "", |
| "issue": "", |
| "pages": "89--97", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lanbo She, Shaohua Yang, Yu Cheng, Yunyi Jia, Joyce Y. Chai, and Ning Xi. 2014b. Back to the blocks world: Learning new actions through situ- ated human-robot dialogue. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 89- 97, Philadelphia, PA, U.S.A., June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Grounding of actions based on verbalized physical effects and manipulation primitives", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Spangenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Henrich", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on", |
| "volume": "", |
| "issue": "", |
| "pages": "844--851", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Spangenberg and D. Henrich. 2015. Grounding of actions based on verbalized physical effects and manipulation primitives. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Con- ference on, pages 844-851, Hamburg, Germany.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Synthesizing manipulation sequences for under-specified tasks using unrolled markov random fields", |
| "authors": [ |
| { |
| "first": "Jaeyong", |
| "middle": [], |
| "last": "Sung", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Selman", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Saxena", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'14)", |
| "volume": "", |
| "issue": "", |
| "pages": "2970--2977", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jaeyong Sung, Bart Selman, and Ashutosh Saxena. 2014. Synthesizing manipulation sequences for under-specified tasks using unrolled markov random fields. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'14), pages 2970-2977, Chicago, IL, USA.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Learning perceptually grounded word meanings from unaligned parallel data", |
| "authors": [ |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Tellex", |
| "suffix": "" |
| }, |
| { |
| "first": "Pratiksha", |
| "middle": [], |
| "last": "Thaker", |
| "suffix": "" |
| }, |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Joseph", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Machine Learning", |
| "volume": "94", |
| "issue": "", |
| "pages": "151--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefanie Tellex, Pratiksha Thaker, Joshua Joseph, and Nicholas Roy. 2014. Learning perceptually grounded word meanings from unaligned parallel data. Machine Learning, Volume94(2):151-167.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Learning to interpret natural language commands through human-robot dialog", |
| "authors": [ |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Thomason", |
| "suffix": "" |
| }, |
| { |
| "first": "Shiqi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Stone", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 International Joint Conference on Artificial Intelligence (IJCAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "1923--1929", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. 2015. Learning to interpret natural lan- guage commands through human-robot dialog. In Proceedings of the 2015 International Joint Confer- ence on Artificial Intelligence (IJCAI), pages 1923- 1929, Buenos Aires, Argentina.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Learning by observation and practice: An incremental approach for planning operator acquisition", |
| "authors": [ |
| { |
| "first": "Xuemei", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the Twelfth International Conference on Machine Learning (ICML'95)", |
| "volume": "", |
| "issue": "", |
| "pages": "549--557", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xuemei Wang. 1995. Learning by observation and practice: An incremental approach for planning op- erator acquisition. In Proceedings of the Twelfth International Conference on Machine Learning (ICML'95), pages 549-557, Tahoe City, California, USA.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Learning action models from plan examples using weighted max-sat", |
| "authors": [ |
| { |
| "first": "Qiang", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kangheng", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yunfei", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Artificial Intelligence", |
| "volume": "171", |
| "issue": "23", |
| "pages": "107--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qiang Yang, Kangheng Wu, and Yunfei Jiang. 2007. Learning action models from plan examples us- ing weighted max-sat. Artificial Intelligence, Vol- ume171(23):107 -143.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Grounded semantic role labeling", |
| "authors": [ |
| { |
| "first": "Shaohua", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qiaozi", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Changsong", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| }, |
| { |
| "first": "Song-Chun", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Joyce", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chai", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL'16)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shaohua Yang, Qiaozi Gao, Changsong Liu, Caiming Xiong, Song-Chun Zhu, and Joyce Y. Chai. 2016. Grounded semantic role labeling. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies (NAACL'16), San Diego, California.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "A training instance {E i , v i , A i } for hypothesis space induction. E i is the resulting environment of executing A i in E i . The change of state in E i compared to E i is highlighted in bold. Different heuristics generate different Base Hypotheses as shown at the bottom.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "text": "(a) Use regression based selector to select hypothesis, and compare each base hypothesis induction heuristics.(b) Induce the base hypothesis with H4 all , and compare different hypothesis selection strategies.", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "num": null, |
| "text": "Incremental learning results. The spaces and regression models acquired at different incremental learning cycles are evaluated on testing set. The averaged Jaccard Index is reported.the regression based hypothesis selector achieves the best performance and the MemoryBased strategy has the lowest performance. Results for Most-General and MostFrequent are between the regression based selector and MemoryBased.", |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "content": "<table/>", |
| "html": null, |
| "text": "where x is a graspable object (e.g. a cup or bowl), o 1 is any type of sink, and o 2 is any table. However, during inference, when applied to a new situation that does not have any type of sink or table, this hypothesis will not", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "html": null, |
| "text": "Set initial space H : N, E with N:[h] and E:[ ], Set a set of temporary hypotheses T :[h] while T is not empty do Pop an element t from T Generate children [t", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>: Current features used for incremental</td></tr><tr><td>learning of the regression model. The first two</td></tr><tr><td>are binary features and the rest are real-valued fea-</td></tr><tr><td>tures.</td></tr></table>", |
| "html": null, |
| "text": "", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |