| { |
| "paper_id": "Q18-1004", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:10:00.479565Z" |
| }, |
| "title": "Representation Learning for Grounded Spatial Reasoning", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Janner", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Artificial Intelligence Laboratory", |
| "institution": "Massachusetts Institute of Technology", |
| "location": {} |
| }, |
| "email": "janner@csail.mit.edu" |
| }, |
| { |
| "first": "Karthik", |
| "middle": [], |
| "last": "Narasimhan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Artificial Intelligence Laboratory", |
| "institution": "Massachusetts Institute of Technology", |
| "location": {} |
| }, |
| "email": "karthikn@csail.mit.edu" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Artificial Intelligence Laboratory", |
| "institution": "Massachusetts Institute of Technology", |
| "location": {} |
| }, |
| "email": "regina@csail.mit.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "The interpretation of spatial references is highly contextual, requiring joint inference over both language and the environment. We consider the task of spatial reasoning in a simulated environment, where an agent can act and receive rewards. The proposed model learns a representation of the world steered by instruction text. This design allows for precise alignment of local neighborhoods with corresponding verbalizations, while also handling global references in the instructions. We train our model with reinforcement learning using a variant of generalized value iteration. The model outperforms state-of-the-art approaches on several metrics, yielding a 45% reduction in goal localization error.",
| "pdf_parse": { |
| "paper_id": "Q18-1004", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "The interpretation of spatial references is highly contextual, requiring joint inference over both language and the environment. We consider the task of spatial reasoning in a simulated environment, where an agent can act and receive rewards. The proposed model learns a representation of the world steered by instruction text. This design allows for precise alignment of local neighborhoods with corresponding verbalizations, while also handling global references in the instructions. We train our model with reinforcement learning using a variant of generalized value iteration. The model outperforms state-of-the-art approaches on several metrics, yielding a 45% reduction in goal localization error.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Understanding spatial references in natural language is essential for successful human-robot communication and autonomous navigation. This problem is challenging because interpretation of spatial references is highly context-dependent. For instance, the instruction \"Reach the cell above the westernmost rock\" translates into different goal locations in the two environments shown in Figure 1 . Therefore, to enable generalization to new, unseen worlds, the model must jointly reason over the instruction text and environment configuration. Moreover, the richness and flexibility in verbalizing spatial references further complicates interpretation of such instructions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 384, |
| "end": 392, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Figure 1: Sample 2D worlds and an instruction describing a goal location. The optimal path from a common start position, denoted by a white dashed line, varies considerably with changes in the map layout.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reach the cell above the westernmost rock", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we explore the problem of spatial reasoning in the context of interactive worlds. Specifically, we assume access to a simulated environment, in which an agent can take actions to interact with the world and is rewarded for reaching the location specified by the language instruction. This feedback is the only source of supervision the model uses for interpreting spatial references.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reach the cell above the westernmost rock", |
| "sec_num": null |
| }, |
| { |
"text": "The key modeling task here is to induce a representation that closely ties environment observations and linguistic expressions. In prior work, this issue was addressed by learning representations for each modality and then combining them, for instance, with concatenation (Misra et al., 2017) . While this approach captures high-level correspondences between instructions and maps, it does not encode detailed, lower-level mappings between specific positions on the map and their descriptions. As our experiments demonstrate, combining the language and environment representations in a spatially localized manner yields significant performance gains on the task.",
| "cite_spans": [ |
| { |
| "start": 272, |
| "end": 292, |
| "text": "(Misra et al., 2017)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reach the cell above the westernmost rock", |
| "sec_num": null |
| }, |
| { |
"text": "To this end, our model uses the instruction text to drive the learning of the environment representation. We start by converting the instruction text into a real-valued vector using a recurrent neural network with LSTM cells (Hochreiter and Schmidhuber, 1997). Using this vector as a kernel in a convolution operation, we obtain an instruction-conditioned representation of the state. This allows the model to reason about immediate local neighborhoods in references such as \"two cells to the left of the triangle\". We further augment this design to handle global references that involve information concerning the entire map (e.g. \"the westernmost rock\"). This is achieved by predicting a global value map using an additional component of the instruction representation. The entire model is trained with reinforcement learning using the environmental reward signal as feedback.",
"cite_spans": [
{
"start": 225,
"end": 259,
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reach the cell above the westernmost rock", |
| "sec_num": null |
| }, |
| { |
| "text": "We conducted our experiments using a 2D virtual world as shown in Figure 1 . Overall, we created over 3,300 tasks across 200 maps, with instructions sourced from Mechanical Turk. We compare our model against two state-of-the-art systems adapted for our task (Misra et al., 2017; Schaul et al., 2015) . The key findings of our experiments are threefold. First, our model can more precisely interpret instructions than baseline models and find the goal location, yielding a 45% reduction in Manhattan distance error over the closest competitor. Second, the model can robustly generalize across new, unseen map layouts. Finally, we demonstrate that factorizing the instruction representation enables the model to sustain high performance when handling both local and global references.", |
| "cite_spans": [ |
| { |
| "start": 258, |
| "end": 278, |
| "text": "(Misra et al., 2017;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 279, |
| "end": 299, |
| "text": "Schaul et al., 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 66, |
| "end": 74, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reach the cell above the westernmost rock", |
| "sec_num": null |
| }, |
| { |
| "text": "Spatial reasoning in text This topic has attracted both theoretical and practical interest. From the linguistic and cognitive perspectives, research has focused on the wide range of mechanisms that speakers use to express spatial relations (Tenbrink, 2007; Viethen and Dale, 2008; Byrne and Johnson-Laird, 1989; Li and Gleitman, 2002) . The practical implications of this research are related to autonomous navigation (Moratz and Tenbrink, 2006; Levit and Roy, 2007; Tellex et al., 2011) and human-robot interaction (Skubic et al., 2004) .", |
| "cite_spans": [ |
| { |
| "start": 240, |
| "end": 256, |
| "text": "(Tenbrink, 2007;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 257, |
| "end": 280, |
| "text": "Viethen and Dale, 2008;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 281, |
| "end": 311, |
| "text": "Byrne and Johnson-Laird, 1989;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 312, |
| "end": 334, |
| "text": "Li and Gleitman, 2002)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 418, |
| "end": 445, |
| "text": "(Moratz and Tenbrink, 2006;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 446, |
| "end": 466, |
| "text": "Levit and Roy, 2007;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 467, |
| "end": 487, |
| "text": "Tellex et al., 2011)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 516, |
| "end": 537, |
| "text": "(Skubic et al., 2004)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Previous computational approaches include techniques such as proximity fields (Kelleher et al., 2006) , spatial templates (Levit and Roy, 2007) and geometrically defined mappings (Moratz and Tenbrink, 2006; Kollar et al., 2010) . More recent work in robotics has integrated text containing position information with spatial models of the environment to obtain accurate maps for navigation (Walter et al., 2013; Hemachandra et al., 2014) . Most of these approaches assume access to detailed geometry or other forms of domain knowledge. In contrast to these knowledge-rich approaches, we learn spatial references through interaction with the environment, acquiring knowledge of the world in the process.",
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 101, |
| "text": "(Kelleher et al., 2006)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 122, |
| "end": 143, |
| "text": "(Levit and Roy, 2007)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 179, |
| "end": 206, |
| "text": "(Moratz and Tenbrink, 2006;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 207, |
| "end": 227, |
| "text": "Kollar et al., 2010)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 389, |
| "end": 410, |
| "text": "(Walter et al., 2013;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 411, |
| "end": 436, |
| "text": "Hemachandra et al., 2014)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Instruction following Spatial reasoning is a common element in many papers on instruction following (MacMahon et al., 2006; Vogel and Jurafsky, 2010; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Kim and Mooney, 2013; Andreas and Klein, 2015). As a source of supervision, these methods assume access to demonstrations, which specify the path corresponding to the provided instructions. In our setup, the agent is only driven by the final rewards when the goal is achieved. This weaker source of supervision motivates the development of new techniques not considered in prior work.",
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 123, |
| "text": "(MacMahon et al., 2006;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 124, |
| "end": 149, |
| "text": "Vogel and Jurafsky, 2010;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 150, |
| "end": 172, |
| "text": "Chen and Mooney, 2011;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 173, |
| "end": 201, |
| "text": "Artzi and Zettlemoyer, 2013;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 202, |
| "end": 223, |
| "text": "Kim and Mooney, 2013;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 224, |
| "end": 248, |
| "text": "Andreas and Klein, 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "More recently, Misra et al. (2017) proposed a neural architecture for jointly mapping instructions and visual observations (pixels) to actions in the environment. Their model separately induces text and environment representations, which are concatenated into a single vector that is used to output an action policy. While this representation captures coarse correspondences between the modalities, it doesn't encode mappings at the level of local neighborhoods, negatively impacting performance on our task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Universal value functions The idea of generalized value functions has been explored before in Schaul et al. (2015) . The technique, termed UVFA, presents a clever trick of factorizing the value function over states and goals using singular value decomposition (SVD) and then learning a regression model to predict the low-rank vectors. This results in quick and effective generalization to all goals in the same state space. However, their work stops short of exploring generalization over map layouts, which our model is designed to handle. Furthermore, our setup also involves specifying goals using natural language instructions, which is different from the coordinate-style specification used in that work.", |
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 114, |
| "text": "Schaul et al. (2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "Task setup We model our task as a Markov Decision Process (MDP), where an autonomous agent is placed in an interactive environment with the capability to choose actions that can affect the world. A goal is described in text, and rewards are available to the agent correspondingly. The MDP can be represented by the tuple ⟨S, A, X, T, R⟩, where S is the set of all possible state configurations, A is the set of actions available to the agent, X is the set of all goal specifications in natural language, T(s′|s, a, x) is the transition distribution, and R(s, x) is the reward function. A state s ∈ S includes information such as the locations of different entities along with the agent's own position. In this work, T is deterministic in the environments considered; however, our methods also apply in the stochastic case.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Text instructions Prior work has investigated human usage of different types of referring expressions to describe spatial relations (Levinson, 2003; Viethen and Dale, 2008) . In order to build a robust instruction following system, we examine several categories of spatial expressions that exhibit the wide range of natural language goal descriptions. Specifically, we consider instructions that utilize objects/entities present in the environment to describe a goal location. These instructions can be categorized into three groups: (a) Text referring to a specific entity (e.g., \"Go to the circle\").", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 148, |
| "text": "(Levinson, 2003;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 149, |
| "end": 172, |
| "text": "Viethen and Dale, 2008)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Framework", |
| "sec_num": "3" |
| }, |
| { |
"text": "(b) Text specifying a location using a single referent entity (e.g., \"Reach the cell above the westernmost rock\").",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "(c) Text specifying a location using multiple referent entities (e.g., \"Move to the goal two squares to the left of the heart and top right of house\").", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "These three categories exemplify an increasing level of complexity, with the last one having multiple levels of indirection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Framework", |
| "sec_num": "3" |
| }, |
| { |
"text": "In each category, we have both local and global references to objects. Local references require an understanding of spatial prepositional phrases such as 'above', 'in between' and 'next to' in order to determine the precise goal location. This comprehension is invariant to the global position of the object landmark(s) provided in the instruction. A global reference, on the other hand, contains superlatives such as 'easternmost' and 'topmost', which require reasoning over the entire map. For example, in the case of (a) above, a local reference would describe a unique object (e.g., \"Go to the circle\"), whereas a global reference might require comparing the positions of all objects of a specific type (e.g., \"Go to the northernmost tree\").",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Framework", |
| "sec_num": "3" |
| }, |
| { |
"text": "A point to note is that we do not assume any access to a mapping from instructions to objects or entities in the world, or to a spatial ontology; the system has to learn this entirely through feedback from the environment.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General Framework", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Learning to reach the goal while maximizing cumulative reward can be done by using a value function V (s) (Sutton and Barto, 1998) which represents the agent's notion of expected future reward from state s. A popular algorithm to learn an optimal value function is Value Iteration (VI) (Bellman, 1957) , which uses the technique of dynamic programming.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 130, |
| "text": "(Sutton and Barto, 1998)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 286, |
| "end": 301, |
| "text": "(Bellman, 1957)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generalized Value Iteration", |
| "sec_num": null |
| }, |
| { |
"text": "In the standard Bellman equation, the value function is dependent solely on state. Schaul et al. (2015) proposed a value function V(s, g) describing the expected reward from being in state s given goal g, capturing that state values are goal-dependent and that a single environment can offer many such goals. We also make use of such a generalized value function, although our goals are not observed directly as coordinate locations or states themselves but rather described in natural language. (Figure 2: A schematic depiction of our model. Text instructions are represented as a vector h(t) and states as embeddings φ(s). A portion of the text representation is used as a convolutional kernel on φ(s), giving a text-conditioned local state representation z_1. The remaining components are used as coefficients in a linear combination of gradient functions to give a global map-level representation z_2. z_1 and z_2 are concatenated and input to a convolutional neural network to predict the final value map.) With x denoting a textual description of a goal, our VI update equations are:",
"cite_spans": [
{
"start": 83,
"end": 103,
"text": "Schaul et al. (2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 497,
"end": 505,
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Generalized Value Iteration", |
| "sec_num": null |
| }, |
| { |
"text": "Q(s, a, x) = R(s, x) + γ Σ_{s′∈S} T(s′|s, a, x) V(s′, x); V(s, x) = max_a Q(s, a, x) (1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generalized Value Iteration", |
| "sec_num": null |
| }, |
| { |
| "text": "where Q is the action-value function, tracking the value of choosing action a in state s. Once an optimal value function is learned, a straightforward action policy is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generalized Value Iteration", |
| "sec_num": null |
| }, |
| { |
"text": "π(s, x) = arg max_a Q(s, a, x) (2)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Generalized Value Iteration", |
| "sec_num": null |
| }, |
| { |
| "text": "Generalization over both environment configurations and text instructions requires a model that meets two desiderata. First, it must have a flexible representation of goals, one which can encode both the local structure and global spatial attributes inherent to natural language instructions. Second, it must be compositional; the representation of language should be generalizable even though each unique instruction will only be observed with a single map during training. Namely, the learned representation for a given instruction should still be useful even if the objects on a map are rearranged or the layout is changed entirely.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To that end, our model combines the textual instructions with the map in a spatially localized manner, as opposed to prior work which joins goal representations and environment observations via simpler functions like an inner product (Schaul et al., 2015) . While our approach can more effectively learn local relations specified by language, it cannot naturally capture descriptions at the global environment level. To address this problem, we also use the language representation to predict coefficients for a basis set of gradient functions which can be combined to encode global spatial relations.", |
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 255, |
| "text": "(Schaul et al., 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "More formally, inputs to our model (see Figure 2 ) consist of an environment observation s and textual description of a goal x. For simplicity, we will assume s to be a 2D matrix, although the model can easily be extended to other input representations. We first convert s to a 3D tensor by projecting each cell to a low-dimensional embedding (\u03c6) as a function of the objects contained in that cell. In parallel, the text instruction x is passed through an LSTM recurrent neural network (Hochreiter and Schmidhuber, 1997):",
| "cite_spans": [ |
| { |
| "start": 487, |
| "end": 521, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 40, |
| "end": 48, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "z_1 = ψ_1(φ(s); h_2(x)) (3)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "Meanwhile, the three-element global component",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "h_1(x) is used to form the coefficients for a vertical and a horizontal gradient, along with a corresponding bias term. The gradients, denoted G_1 and G_2 in Figure 2, are matrices of the same dimensionality as the state observation, with values increasing down the rows and along the columns, respectively. The axis-aligned gradients are weighted by the elements of h_1(x) and summed to give a final global gradient spanning the entire 2D space, analogous to how steerable filters can be constructed for any orientation using a small set of basis filters (Freeman and Adelson, 1991):",
"cite_spans": [],
| "ref_spans": [ |
| { |
| "start": 158, |
| "end": 166, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "z_2 = h_1(x)[1]·G_1 + h_1(x)[2]·G_2 + h_1(x)[3]·J (4)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "in which J is the all-ones matrix also of the same dimensionality as the observed map.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Finally, the local and global information maps are concatenated into a single tensor, which is then processed by a convolutional neural network (CNN) with parameters \u03b8 to approximate the generalized value function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "V̂(s, x) = ψ_2([z_1; z_2]; θ) (5)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "for every state s in the map.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "Reinforcement Learning Given our model's V(s, x) predictions, the resulting policy (Equation 2) can be enacted, giving a continuous trajectory of states {s_t, s_{t+1}, ...} on a single map and their associated rewards {r_t, r_{t+1}, ...} at each timestep t. We store entire trajectories (as opposed to state transition pairs) in a replay memory D as described in Mnih et al. (2015). The model is trained to produce an accurate value estimate by minimizing the following objective:",
"cite_spans": [
{
"start": 363,
"end": 381,
"text": "Mnih et al. (2015)",
"ref_id": "BIBREF22"
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "L(Θ) = E_{s∼D}[(V(s, x; Θ) − (R(s, x) + γ max_a Σ_{s′} T(s′|s, a) V(s′, x; Θ⁻)))²] (6)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
"text": "where s is a state sampled from D, γ is the discount factor, Θ is the set of parameters of the entire model, and Θ⁻ is the set of parameters of a target network copied periodically from our model. The complete training procedure is shown in Algorithm 1.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Puddle world navigation data In order to study generalization across a wide variety of environmental conditions and linguistic inputs, we develop an extension of the puddle world reinforcement learning benchmark (Sutton, 1996; Mankowitz et al., 2016) . States in a 10 \u00d7 10 grid are first filled with either grass or water cells, such that the grass forms one connected component. We then populate the grass region with six unique objects which appear only once per map (triangle, star, diamond, circle, heart, and spade) and four non-unique objects (rock, tree, horse, and house) which can appear any number of times on a given map. See Figure 1 for an example visualization.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 226, |
| "text": "(Sutton, 1996;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 227, |
| "end": 250, |
| "text": "Mankowitz et al., 2016)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 637, |
| "end": 645, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
"text": "Train: 1566 local, 1071 global; Test: 399 local, 272 global. Table 1: Overall statistics of our dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
"text": "Goal positions are chosen uniformly at random from the set of grass cells, encouraging the use of spatial references to describe goal locations which do not themselves contain a unique object. We used the Mechanical Turk crowdsourcing platform (Buhrmester et al., 2011) to collect natural language descriptions of these goals. Human annotators were asked to describe the positions of these goals using surrounding objects.",
| "cite_spans": [ |
| { |
| "start": 244, |
| "end": 269, |
| "text": "(Buhrmester et al., 2011)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Reach the horse below the rock and to the left of the green diamond", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 Move to the square two below and one left of the star", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
"text": "\u2022 Go to the cell above the bottommost horse (Figure 3: Example goal annotations collected with Mechanical Turk.) At the end of each trial, we asked the same participants to provide goal locations given their own text instructions. This helped filter out a majority of instructions that were ambiguous or ill-specified. Table 1 provides some statistics on the data, and Figure 3 shows example annotations. In total, we collected 3308 instructions, ranging from 2 to 43 words in length, describing over 200 maps. There are 361 unique words in the annotated instructions. We do not perform any preprocessing on the raw annotations.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 53,
"text": "Figure 3",
"ref_id": null
},
{
"start": 319,
"end": 326,
"text": "Table 1",
"ref_id": null
},
{
"start": 369,
"end": 377,
"text": "Figure 3",
"ref_id": null
}
| ], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
"text": "It is plausible that a model designed to handle only local references could not handle global ones (consider our own model without the global gradient maps). For clearer interpretation of results, we evaluate our model in two modes: trained and tested on local and global data separately, or as a combined dataset. While local instructions were obtained easily, the global instructions were collected by designing a task in which only non-unique objects were presented to the annotators. This precluded simple instructions like \"go left of the object \" because there would always be more than one of each object type. Therefore, we obtained text with global properties (e.g. middle rock, leftmost tree) to sufficiently pinpoint an object. On average, we collected 31 unique local instructions and 10 unique global instructions per map.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "To quantify the diversity of our dataset, we find the five nearest instructions in the training set for every instruction in the test set, as measured by edit distance (using the word as a unit) normalized by test instruction length. For each of these pairs, we also measure the Manhattan distance between their corresponding goal locations. Figure 4 , which visu- Figure 4 : A heatmap showing the normalized instruction edit distance and goal Manhattan distance corresponding to the most similar instructions between the train and test set. For each instruction in the test set, we find the five most similar instructions in the training set. Even for those instructions which are similar, the goal locations they describe can be far apart.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 342, |
| "end": 350, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 365, |
| "end": 373, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "alizes this analysis, underscores the difficulty of this task; even when two instructions are highly similar, they might correspond to entirely different target locations. This is the case in the example in Figure 1 , which has a distance of four between the references goals.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 207, |
| "end": 215, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
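The nearest-neighbor diversity analysis above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's released code; the function names and toy instruction/goal pairs are hypothetical.

```python
# Sketch of the train/test diversity analysis: for each test instruction,
# find the k nearest training instructions by word-level edit distance
# (normalized by test instruction length) and record the Manhattan distance
# between the corresponding goal locations.

def edit_distance(a, b):
    # Standard Levenshtein distance over word tokens.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[m][n]

def nearest_pairs(test_set, train_set, k=5):
    # Each entry is (instruction string, (row, col) goal location).
    pairs = []
    for text_t, goal_t in test_set:
        words_t = text_t.split()
        scored = sorted(
            train_set,
            key=lambda tr: edit_distance(words_t, tr[0].split()) / len(words_t),
        )
        for text_n, goal_n in scored[:k]:
            norm_edit = edit_distance(words_t, text_n.split()) / len(words_t)
            manhattan = abs(goal_t[0] - goal_n[0]) + abs(goal_t[1] - goal_n[1])
            pairs.append((norm_edit, manhattan))
    return pairs
```

Plotting the resulting (edit distance, goal distance) pairs as a 2-D histogram would produce a heatmap of the kind shown in Figure 4.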
| { |
| "text": "Baselines We compare our model to the following baselines:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "UVFA (text) is a variant of the model described in (Schaul et al., 2015) adapted for our task. The original model made use of two MLPs to learn low dimensional embeddings of states and goals which were then combined via dot product to give value estimates. Goals were represented either as (x, y) coordinates or as states themselves. As our goals are not observed directly but described in text, we replace the goal MLP with the same LSTM as in our model. The state MLP has an identical architecture to that of the UVFA: two hidden layers of dimension 128 and ReLU activations. For consistency with the UVFA, we represent states as binary vectors denoting the presence of each type of object at every position.", |
| "cite_spans": [ |
| { |
| "start": 51, |
| "end": 72, |
| "text": "(Schaul et al., 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "CNN + LSTM is a variant of the model described in Misra et al. (2017), who developed it for a language-grounded block manipulation task. It first convolves the map layout to a low-dimensional rep- Figure 5 : Reward achieved by our model and the two baselines on the training environments during reinforcement learning on both local and global instructions. Each epoch corresponds to simulation on 500 goals, with a goal simulation terminating either when the agent reaches the goal state or has taken 75 actions. resentation (as opposed to the MLP of the UVFA) and concatenates this to the LSTM's instruction embedding (as opposed to a dot product). These concatenated representations are then input to a twolayer MLP.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 197, |
| "end": 205, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "We also perform analysis to study the representational power of our model, introducing two more comparison models:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "UVFA (pos) is the original UVFA model from (Schaul et al., 2015 ), which we evaluate on our modified puddle worlds to determine the difficulty of environment generalization independently from instruction interpretation.", |
| "cite_spans": [ |
| { |
| "start": 43, |
| "end": 63, |
| "text": "(Schaul et al., 2015", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "Our model (w/o gradient) is an ablation of our model without the global gradient maps, which allows us to determine the gradients' role in representation-building.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
| { |
| "text": "In additional to our reinforcement learning experiments, we train these models in a supervised setting to isolate the effects of architecture choices from other concerns inherent to reinforcement learning algorithms. For this purpose, we constructed a dataset of ground-truth value maps for all humanannotated goals using value iteration. We use the models to predict value maps for the entire grid and minimize the mean squared error (MSE) compared to the ground truth values:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
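Ground-truth value maps of the kind described above can be generated by tabular value iteration on the grid. The sketch below uses the reward scheme from Section 5.1 (+3 for reaching the goal, \u22121 for entering a puddle, discount 0.95); the function and variable names are illustrative, not from the paper's code.

```python
# Tabular value iteration on a grid world: the goal cell is terminal with
# reward +3, puddle cells cost -1 to enter, and all other transitions have
# reward 0. Moves off the grid are clipped to the border.
GAMMA = 0.95

def value_iteration(height, width, goal, puddles, iters=200):
    V = [[0.0] * width for _ in range(height)]
    for _ in range(iters):
        new_V = [[0.0] * width for _ in range(height)]
        for r in range(height):
            for c in range(width):
                if (r, c) == goal:
                    continue  # terminal state: value stays 0
                best = float("-inf")
                for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    nr = min(max(r + dr, 0), height - 1)
                    nc = min(max(c + dc, 0), width - 1)
                    if (nr, nc) == goal:
                        q = 3.0  # reaching the goal ends the episode
                    elif (nr, nc) in puddles:
                        q = -1.0 + GAMMA * V[nr][nc]
                    else:
                        q = GAMMA * V[nr][nc]
                    best = max(best, q)
                new_V[r][c] = best
        V = new_V
    return V
```

Running this per (map, annotated goal) pair yields the supervised targets V\u0302(s, x) used in the loss below.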
| { |
| "text": "(7) L (\u0398) = (s,x) [V (s, x; \u0398) \u2212V (s, x)] 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Split Local Global", |
| "sec_num": null |
| }, |
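The objective in Eq. (7) is a squared error between predicted and ground-truth value maps, summed over all (state, instruction) pairs. A minimal sketch, assuming value maps are stored as nested lists (the function name is illustrative):

```python
def value_map_loss(pred, target):
    # Squared error summed over all grid positions, as in Eq. (7); in practice
    # this would be accumulated over every (map, instruction) pair as well.
    return sum(
        (p - t) ** 2
        for row_p, row_t in zip(pred, target)
        for p, t in zip(row_p, row_t)
    )
```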
| { |
| "text": "Our model implementation uses an LSTM with a learnable 15-dimensional embedding layer, 30 hidden units, 8-dimensional embeddings \u03c6(s), and a 3x3 kernel applied to the embeddings, giving a dimension of 72 for h 2 (t). The final CNN has layers of {3, 6, 12, 6, 3, 1} channels, all with 3x3 kernels and padding of length 1 such that the output value map prediction is equal in size to the input observation. For each map, a reward of 3 is given for reaching the correct goal specified by human annotation and a reward of \u22121 is given for falling in a puddle cell. The only terminal state is when the agent is at the goal. Rewards are discounted by a factor of 0.95. We use Adam optimization (Kingma and Ba, 2015) for training all models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation details", |
| "sec_num": "5.1" |
| }, |
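As a sanity check on the final CNN described above, 3x3 kernels with padding 1 and stride 1 preserve spatial dimensions at every layer, so the predicted value map matches the observation grid in size. The shape-only sketch below assumes the first entry of {3, 6, 12, 6, 3, 1} is the input channel count, which is our reading of the text rather than something the paper states explicitly.

```python
# Shape-only sketch of the final CNN stack from Section 5.1: every layer uses
# a 3x3 kernel with padding 1 and stride 1, so spatial size is preserved while
# the channel counts follow {3, 6, 12, 6, 3, 1}.
CHANNELS = [3, 6, 12, 6, 3, 1]
KERNEL, PADDING, STRIDE = 3, 1, 1

def conv_out(size, kernel=KERNEL, padding=PADDING, stride=STRIDE):
    # Standard convolution output-size formula.
    return (size + 2 * padding - kernel) // stride + 1

def stack_shapes(height, width, in_channels=CHANNELS[0]):
    shapes = [(in_channels, height, width)]
    h, w = height, width
    for out_channels in CHANNELS[1:]:
        h, w = conv_out(h), conv_out(w)
        shapes.append((out_channels, h, w))
    return shapes
```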
| { |
| "text": "We present empirical results on two different datasets -our annotated puddle world and an existing block navigation task (Bisk et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 140, |
| "text": "(Bisk et al., 2016)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Comparison with the state-of-the-art We first investigate the ability of our model to learn solely from environment simulation. Figure 5 shows the discounted reward achieved by our model as well as the two baselines for both instruction types. In both experiments, our model is the only one of the Table 3 : Performance on a test set of environments and instructions after supervised training. Lower is better for MSE and Manhattan distance; higher is better for policy quality. The gradient basis significantly improves the reconstruction error and goal localization of our model on global instructions, and expectedly does not affect its performance on local instructions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 128, |
| "end": 136, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 298, |
| "end": 305, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "three to achieve an average nonnegative reward after convergence (0.88 for local instructions and 0.49 for global instructions), signifying that the baselines do not fully learn how to navigate through these environments. Following Schaul et al. (2015) , we also evaluated our model using the metric of policy quality. This is defined as the expected discounted reward achieved by following a softmax policy of the value predictions. Policy quality is normalized such that an optimal policy has a score of 1 and a uniform random policy has a score of 0. Intuitively, policy quality is the true normalized expectation of score over all maps in the dataset, instructions per map, and start states per map-instruction pair. Our model outperforms both baselines on this metric as well on the test maps (Table 2 ). We also note that the perfor-mance of the baselines flip with respect to each other as compared to their performance on the training maps ( Figure 5 ). While the UVFA variant learned a better policy on the train set, it did not generalize to new environments as well as the CNN + LSTM.", |
| "cite_spans": [ |
| { |
| "start": 232, |
| "end": 252, |
| "text": "Schaul et al. (2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 798, |
| "end": 806, |
| "text": "(Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 950, |
| "end": 958, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
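The policy quality metric described above might be computed roughly as follows. The softmax conversion and the normalization are from the text; the function names, the temperature parameter, and the assumption that the optimal and random returns come from separate evaluations are ours.

```python
import math

def softmax_probs(q_values, temperature=1.0):
    # Convert predicted values for each action into a softmax policy;
    # subtracting the max is for numerical stability only.
    m = max(q_values)
    weights = [math.exp((q - m) / temperature) for q in q_values]
    total = sum(weights)
    return [w / total for w in weights]

def policy_quality(policy_return, optimal_return, random_return):
    # Normalized so that an optimal policy scores 1 and a uniform random
    # policy scores 0; returns are expected discounted rewards averaged over
    # maps, instructions, and start states.
    return (policy_return - random_return) / (optimal_return - random_return)
```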
| { |
| "text": "Finally, given the nature of our environments, we can use the predicted value maps to infer a goal location by taking the position of the maximum value. We use the Manhattan distance from this predicted position to the actual goal location as a third metric. The accuracy of our model's goal predictions is more than twice that of the baselines on local references and roughly 45% better on global references.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
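Goal localization from a predicted value map, as described above, reduces to an argmax over grid positions followed by a Manhattan distance; a short sketch with illustrative names:

```python
def predicted_goal(value_map):
    # Goal estimate: the (row, col) position of the maximum predicted value.
    best, best_pos = float("-inf"), (0, 0)
    for r, row in enumerate(value_map):
        for c, v in enumerate(row):
            if v > best:
                best, best_pos = v, (r, c)
    return best_pos

def goal_error(value_map, true_goal):
    # Manhattan distance from the predicted goal to the annotated goal.
    pr, pc = predicted_goal(value_map)
    return abs(pr - true_goal[0]) + abs(pc - true_goal[1])
```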
| { |
| "text": "Analysis of learned representations For the representation analysis in a supervised setting, we compared the predicted value maps of all models against Figure 6 : Value map predictions for two environments paired with two instructions each. Despite the difference in instructions, with one being global and the other local in nature and sharing no objects in their descriptions, they refer to the same goal location in the environment in (a). However, in (b), the descriptions correspond to different locations on the map. The vertical axis considers variance in goal location for the same instruction, depending on the map configuration.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 152, |
| "end": 160, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "the unseen test split of maps. Table 3 shows the results of this study. As expected, our model without the global gradient performs no differently from the full model on local references, but has higher MSE and average distances to true goal than the full model on global references. We also note that UVFA (pos) performs much worse than both CNN+LSTM and our model, showing the difficulty of environment generalization even when the goals are observed directly. (The original UVFA paper (Schaul et al., 2015) demonstrated effective generalization over goal states within a single environment.) Surprisingly, our model trained via reinforcement learning has more precise goal location predictions (as measured via Manhattan distance) than when trained on true state values in a supervised manner. However, the MSE of the value predictions are much higher in the RL setting (e.g., 0.80 vs 0.25 for supervised on local instructions). This shows that despite the comparative stability of the supervised setting, minimization of value prediction error does not nec-essarily lend itself to the best policy or goal localization. Conversely, having a higher MSE does not always imply a worse policy, as seen also in the performance of the two UVFA variants in Table 3 .", |
| "cite_spans": [ |
| { |
| "start": 488, |
| "end": 509, |
| "text": "(Schaul et al., 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 38, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1253, |
| "end": 1260, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Generalization One of the criteria laid out for our model was its ability to construct language representations and produce accurate value maps, independent of layouts and linguistic variation. Figure 6 provides examples of two layouts, each with two different instructions. In the first map (top), we have both instructions referring to the same location. Our model is able to mimic the optimal value map accurately, while the other baselines are not as precise, either producing a large field of possible goal locations (CNN+LSTM) or completely missing the goal (UVFA-text).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 194, |
| "end": 202, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "On the vertical axis, we observe generalization across different maps with the same instructions. Our model is able to precisely identify the goals in each scenario in spite of significant variation in their locations. This proves harder for the other representations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Although our model is compositional in the sense that it transfers knowledge of spatial references between different environments, some types of instructions do prove challenging. We identify two of the poorest predictions in Figure 7 . We see that multiple levels of indirection (as in 7a, which references a location relative to an object relative to another object) or unnecessarily long instructions (as in 7b, which uniquely identifies a position by the eighth token but then proceeds with redundant information) are still a challenge.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 226, |
| "end": 234, |
| "text": "Figure 7", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Learning curve Due to the manual effort that comes with constructing a dataset of human annotations, it is also important to consider the sampleefficiency of a model. Figure 8 shows the quality policy and prediction error on local instructions as a function of training set size. Our model reaches 0.90 policy quality with only 400 samples, demonstrating efficient generalization capability. Table 4 : The performance of our model and two baselines on the ISI Language Grounding dataset (Bisk et al., 2016) . Our model once again outperforms the baselines, although all models have a lower policy quality on this dataset than on our own.", |
| "cite_spans": [ |
| { |
| "start": 487, |
| "end": 506, |
| "text": "(Bisk et al., 2016)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 167, |
| "end": 175, |
| "text": "Figure 8", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 392, |
| "end": 399, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Puddle world navigation", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We also evaluate our model on the ISI Language Grounding dataset (Bisk et al., 2016) , which contains human-annotated instructions describing how to arrange blocks identified by numbers and logos. Although it does not contain variable environment maps as in our dataset, it has a larger action space and vocabulary. The caveat is that the task as posed in the original dataset is not compatible with our model. For a policy to be derived from a value map with the same dimension as the state observation, it is implicitly assumed that there is a single controllable agent, whereas the ISI set allows multiple blocks to be moved. We therefore modify the ISI setup using an oracle to determine which block is given agency during each step. This allows us to Figure 9 : (a-c) Visualizations of tasks from the ISI Language Grounding dataset (Bisk et al., 2016) and our model's value map predictions. The agentive block and goal location are outlined in red for visibility. (d) The MSE of the value map prediction as a function of a subgoal's ordering in an overall task. The model performs better on subgoals later in a task despite the subgoals being treated completely independently during both training and testing.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 84, |
| "text": "(Bisk et al., 2016)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 837, |
| "end": 856, |
| "text": "(Bisk et al., 2016)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 756, |
| "end": 764, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "ISI Grounding Dataset", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "retain the linguistic variability of the dataset while overcoming the mismatch in task setup. The states are discretized to a 13 \u00d7 13 map and the instructions are lemmatized. Performance on the modified ISI dataset is reported in Table 4 and representative visualizations are shown in Figure 9 . Our model outperforms both baselines by a greater margin in policy quality than on our own dataset. Misra et al. (2017) also use this dataset and report results in part by determining the minimum distance between an agent and a goal during an evaluation lasting N steps. This evaluation metric is therefore dependent on this timeout parameter N . Because we discretized the state space so as to be able to represent it as a grid of embeddings, the notion of a single step has been changed and direct comparison limited to N steps is ill-defined. 6 Hence, due to modifica-6 When a model is available and the states are not overwhelmingly high-dimensional, policy quality is a useful metric that is independent of this type of parameter. As such, it is our tions in the task setup, we cannot compare directly to the results in Misra et al. (2017).", |
| "cite_spans": [ |
| { |
| "start": 842, |
| "end": 843, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 230, |
| "end": 237, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 285, |
| "end": 293, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "ISI Grounding Dataset", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Understanding grounding evaluation An interesting finding in our analysis was that the difficulty of the language interpretation task is a function of the stage in task execution (Figure 9(d) ). In the ISI Language Grounding set (Bisk et al., 2016) , each individual instruction (describing where to move a particular block) is a subgoal in a larger task (such as constructing a circle with all of the blocks). The value maps predicted for subgoals occurring later in a task are more accurate than those occurring early in the task. It is likely that the language plays a less crucial role in specifying the subgoal position in the final steps of a task. As shown in Figure 9 (a), it may be possible to narrow down candidate subgoal positions just by looking at a nearly-constructed highdefault metric here. However, estimating policy quality for environments substantially larger than those investigated here is a challenge in itself. level shape. In contrast, this would not be possible early in a task because most of the blocks will be randomly positioned. This finding is consistent with a result from Branavan et al. (2011) , who reported that strategy game manuals were useful early in the game but became less essential further into play. It appears to be part of a larger trend that the marginal benefit of language in such grounding tasks can vary predictably between individual instructions.", |
| "cite_spans": [ |
| { |
| "start": 229, |
| "end": 248, |
| "text": "(Bisk et al., 2016)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1107, |
| "end": 1129, |
| "text": "Branavan et al. (2011)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 179, |
| "end": 191, |
| "text": "(Figure 9(d)", |
| "ref_id": null |
| }, |
| { |
| "start": 667, |
| "end": 675, |
| "text": "Figure 9", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "ISI Grounding Dataset", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We have described a novel approach for grounded spatial reasoning. Combining the language representation in a spatially localized manner allows for increased precision of goal identification a nd improved performance on unseen environment configurations. Alongside our models, we present Puddle World Navigation, a new grounding dataset for testing the generalization capacity of instructionfollowing algorithms in varied environments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Code and dataset are available at https://github. com/JannerM/spatial-reasoning", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We will use the terms goal specifications and instructions interchangeably.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A local reference for a non-unique object would be ambiguous, of course.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that we are referring to gradient filters here, not the gradient calculated during backpropagation in deep learning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The other objects were added back into the map after collecting the instruction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the members of the MIT NLP group, the TACL reviewers and action editor for helpful feedback. We gratefully acknowledge support from the MIT Lincoln Laboratory and the MIT Super-UROP program.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Alignment-based compositional semantics for instruction following", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Andreas", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Andreas and Dan Klein. 2015. Alignment-based compositional semantics for instruction following. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Weakly supervised learning of semantic parsers for mapping instructions to actions", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "TACL", |
| "issue": "", |
| "pages": "49--62", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Artzi and Luke Zettlemoyer. 2013. Weakly su- pervised learning of semantic parsers for mapping in- structions to actions. TACL, 1(1):49-62.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Dynamic Programming", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Bellman", |
| "suffix": "" |
| } |
| ], |
| "year": 1957, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Bellman. 1957. Dynamic Programming.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Natural language communication with robots", |
| "authors": [ |
| { |
| "first": "Yonatan", |
| "middle": [], |
| "last": "Bisk", |
| "suffix": "" |
| }, |
| { |
| "first": "Deniz", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "NAACL HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "751--761", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yonatan Bisk, Deniz Yuret, and Daniel Marcu. 2016. Natural language communication with robots. In NAACL HLT, pages 751-761.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Learning to win by reading manuals in a Monte-Carlo framework", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "R K" |
| ], |
| "last": "Branavan", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Silver", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ACL HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "268--277", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S.R.K. Branavan, David Silver, and Regina Barzilay. 2011. Learning to win by reading manuals in a Monte- Carlo framework. In ACL HLT, pages 268-277.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Amazon's Mechanical Turk. Perspectives on", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Buhrmester", |
| "suffix": "" |
| }, |
| { |
| "first": "Tracy", |
| "middle": [], |
| "last": "Kwang", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [ |
| "D" |
| ], |
| "last": "Gosling", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Psychological Science", |
| "volume": "6", |
| "issue": "1", |
| "pages": "3--5", |
| "other_ids": { |
| "PMID": [ |
| "26162106" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling. 2011. Amazon's Mechanical Turk. Per- spectives on Psychological Science, 6(1):3-5. PMID: 26162106.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Spatial reasoning", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "J" |
| ], |
| "last": "Ruth", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [ |
| "N" |
| ], |
| "last": "Byrne", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Johnson-Laird", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Journal of memory and language", |
| "volume": "28", |
| "issue": "5", |
| "pages": "564--575", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruth M.J. Byrne and Philip N. Johnson-Laird. 1989. Spatial reasoning. Journal of memory and language, 28(5):564-575.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Learning to interpret natural language navigation instructions fro mobservations", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "David", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-2011)", |
| "volume": "", |
| "issue": "", |
| "pages": "859--865", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David L. Chen and Raymond J. Mooney. 2011. Learn- ing to interpret natural language navigation instruc- tions fro mobservations. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI- 2011), pages 859-865, San Francisco, CA, USA, Au- gust.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "ABC-CNN: An attention based convolutional neural network for visual question answering", |
| "authors": [ |
| { |
| "first": "Kan", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang-Chieh", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Haoyuan", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ram", |
| "middle": [], |
| "last": "Nevatia", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1511.05960" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, and Ram Nevatia. 2015. ABC- CNN: An attention based convolutional neural net- work for visual question answering. arXiv preprint arXiv:1511.05960.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The design and use of steerable filters", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "William", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [ |
| "H" |
| ], |
| "last": "Freeman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Adelson", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "IEEE TPAMI", |
| "volume": "13", |
| "issue": "9", |
| "pages": "891--906", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William T. Freeman and Edward H. Adelson. 1991. The design and use of steerable filters. IEEE TPAMI, 13(9):891-906.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Learning spatialsemantic representations from natural language descriptions and scene classifications", |
| "authors": [ |
| { |
| "first": "Sachithra", |
| "middle": [], |
| "last": "Hemachandra", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [ |
| "R" |
| ], |
| "last": "Walter", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Tellex", |
| "suffix": "" |
| }, |
| { |
| "first": "Seth", |
| "middle": [], |
| "last": "Teller", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ICRA", |
| "volume": "", |
| "issue": "", |
| "pages": "2623--2630", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sachithra Hemachandra, Matthew R. Walter, Stefanie Tellex, and Seth Teller. 2014. Learning spatial- semantic representations from natural language de- scriptions and scene classifications. In ICRA, pages 2623-2630. IEEE.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Proximity in context: an empirically grounded computational model of proximity for processing topological spatial expressions", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "D" |
| ], |
| "last": "Kelleher", |
| "suffix": "" |
| }, |
| {
| "first": "Geert-Jan",
| "middle": [
| "M"
| ],
| "last": "Kruijff",
| "suffix": ""
| },
| {
| "first": "Fintan",
| "middle": [
| "J"
| ],
| "last": "Costello",
| "suffix": ""
| }
| ], |
| "year": 2006, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "745--752", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John D. Kelleher, Geert-Jan M. Kruijff, and Fintan J. Costello. 2006. Proximity in context: an empirically grounded computational model of proximity for processing topological spatial expressions. In ACL, pages 745-752.",
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Adapting discriminative reranking to grounded language learning", |
| "authors": [ |
| { |
| "first": "Joohyun", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACL (1)", |
| "volume": "", |
| "issue": "", |
| "pages": "218--227", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joohyun Kim and Raymond J. Mooney. 2013. Adapting discriminative reranking to grounded language learning. In ACL (1), pages 218-227.",
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| {
| "first": "Diederik",
| "middle": [
| "P"
| ],
| "last": "Kingma",
| "suffix": ""
| },
| {
| "first": "Jimmy",
| "middle": [],
| "last": "Ba",
| "suffix": ""
| }
| ], |
| "year": 2015, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Toward understanding natural language directions", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Kollar", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Tellex", |
| "suffix": "" |
| }, |
| { |
| "first": "Deb", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicholas", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human-Robot Interaction", |
| "volume": "", |
| "issue": "", |
| "pages": "259--266", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Kollar, Stefanie Tellex, Deb Roy, and Nicholas Roy. 2010. Toward understanding natural language directions. In Human-Robot Interaction, pages 259-266. IEEE.",
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Space in language and cognition: Explorations in cognitive diversity", |
| "authors": [ |
| { |
| "first": "Stephen",
| "middle": [
| "C"
| ],
| "last": "Levinson",
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "5", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen C Levinson. 2003. Space in language and cognition: Explorations in cognitive diversity, volume 5. Cambridge University Press.",
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Interpretation of spatial language in a map navigation task", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Levit", |
| "suffix": "" |
| }, |
| { |
| "first": "Deb", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)", |
| "volume": "37", |
| "issue": "3",
| "pages": "667--679", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Levit and Deb Roy. 2007. Interpretation of spatial language in a map navigation task. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37(3):667-679.",
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Turning the tables: Language and spatial reasoning", |
| "authors": [ |
| { |
| "first": "Peggy", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Lila", |
| "middle": [], |
| "last": "Gleitman", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Cognition", |
| "volume": "83", |
| "issue": "3", |
| "pages": "265--294", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peggy Li and Lila Gleitman. 2002. Turning the tables: Language and spatial reasoning. Cognition, 83(3):265-294.",
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Walk the talk: Connecting language, knowledge, and action in route instructions", |
| "authors": [ |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Macmahon", |
| "suffix": "" |
| }, |
| { |
| "first": "Brian", |
| "middle": [], |
| "last": "Stankiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Kuipers", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "AAAI",
| "volume": "2",
| "issue": "6",
| "pages": "4",
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, and action in route instructions. AAAI, 2(6):4.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Iterative hierarchical optimization for misspecified problems (IHOMP)",
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mankowitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [ |
| "Arthur" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| }, |
| { |
| "first": "Shie", |
| "middle": [], |
| "last": "Mannor", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "CoRR",
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel J. Mankowitz, Timothy Arthur Mann, and Shie Mannor. 2016. Iterative hierarchical optimization for misspecified problems (IHOMP). CoRR, abs/1602.03348.",
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Mapping instructions and visual observations to actions with reinforcement learning", |
| "authors": [ |
| {
| "first": "Dipendra",
| "middle": [
| "K"
| ],
| "last": "Misra",
| "suffix": ""
| },
| {
| "first": "John",
| "middle": [],
| "last": "Langford",
| "suffix": ""
| },
| {
| "first": "Yoav",
| "middle": [],
| "last": "Artzi",
| "suffix": ""
| }
| ], |
| "year": 2017, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dipendra K. Misra, John Langford, and Yoav Artzi. 2017. Mapping instructions and visual observations to actions with reinforcement learning. EMNLP.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Human-level control through deep reinforcement learning", |
| "authors": [ |
| { |
| "first": "Volodymyr", |
| "middle": [], |
| "last": "Mnih", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Silver", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrei", |
| "middle": [ |
| "A" |
| ], |
| "last": "Rusu", |
| "suffix": "" |
| }, |
| { |
| "first": "Joel", |
| "middle": [], |
| "last": "Veness", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [ |
| "G" |
| ], |
| "last": "Bellemare", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Riedmiller", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [ |
| "K" |
| ], |
| "last": "Fidjeland", |
| "suffix": "" |
| }, |
| { |
| "first": "Georg", |
| "middle": [], |
| "last": "Ostrovski", |
| "suffix": "" |
| }, |
| { |
| "first": "Stig", |
| "middle": [], |
| "last": "Petersen", |
| "suffix": "" |
| },
| {
| "first": "Charles",
| "middle": [],
| "last": "Beattie",
| "suffix": ""
| },
| {
| "first": "Amir",
| "middle": [],
| "last": "Sadik",
| "suffix": ""
| },
| {
| "first": "Ioannis",
| "middle": [],
| "last": "Antonoglou",
| "suffix": ""
| },
| {
| "first": "Helen",
| "middle": [],
| "last": "King",
| "suffix": ""
| },
| {
| "first": "Dharshan",
| "middle": [],
| "last": "Kumaran",
| "suffix": ""
| },
| {
| "first": "Daan",
| "middle": [],
| "last": "Wierstra",
| "suffix": ""
| },
| {
| "first": "Shane",
| "middle": [],
| "last": "Legg",
| "suffix": ""
| },
| {
| "first": "Demis",
| "middle": [],
| "last": "Hassabis",
| "suffix": ""
| }
| ], |
| "year": 2015, |
| "venue": "Nature",
| "volume": "518", |
| "issue": "7540", |
| "pages": "529--533",
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533.",
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Spatial reference in linguistic human-robot interaction: Iterative, empirically supported development of a model of projective relations", |
| "authors": [ |
| { |
| "first": "Reinhard", |
| "middle": [], |
| "last": "Moratz", |
| "suffix": "" |
| }, |
| { |
| "first": "Thora", |
| "middle": [], |
| "last": "Tenbrink", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Spatial cognition and computation", |
| "volume": "6", |
| "issue": "1", |
| "pages": "63--107", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reinhard Moratz and Thora Tenbrink. 2006. Spatial reference in linguistic human-robot interaction: Iterative, empirically supported development of a model of projective relations. Spatial cognition and computation, 6(1):63-107.",
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Universal value function approximators", |
| "authors": [ |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Schaul", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Horgan", |
| "suffix": "" |
| }, |
| { |
| "first": "Karol", |
| "middle": [], |
| "last": "Gregor", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Silver", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "1312--1320", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. 2015. Universal value function approximators. In ICML, pages 1312-1320.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Spatial language for human-robot dialogs", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Skubic", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Perzanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Blisard", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Adams", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Bugajska", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Brock", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)", |
| "volume": "34", |
| "issue": "2", |
| "pages": "154--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Skubic, D. Perzanowski, S. Blisard, A. Schultz, W. Adams, M. Bugajska, and D. Brock. 2004. Spatial language for human-robot dialogs. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 34(2):154-167, May.",
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Introduction to reinforcement learning", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [ |
| "S" |
| ], |
| "last": "Sutton", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "G" |
| ], |
| "last": "Barto", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard S. Sutton and Andrew G. Barto. 1998. Introduction to reinforcement learning. MIT Press.",
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Generalization in reinforcement learning: Successful examples using sparse coarse coding", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [ |
| "S" |
| ], |
| "last": "Sutton", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard S. Sutton. 1996. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In NIPS.",
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Understanding natural language commands for robotic navigation and mobile manipulation", |
| "authors": [ |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Tellex", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Kollar", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Dickerson", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [ |
| "R" |
| ], |
| "last": "Walter", |
| "suffix": "" |
| }, |
| {
| "first": "Ashis",
| "middle": [
| "Gopal"
| ],
| "last": "Banerjee",
| "suffix": ""
| },
| {
| "first": "Seth",
| "middle": [
| "J"
| ],
| "last": "Teller",
| "suffix": ""
| },
| {
| "first": "Nicholas",
| "middle": [],
| "last": "Roy",
| "suffix": ""
| }
| ], |
| "year": 2011, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth J. Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In AAAI.",
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Space, Time, and the Use of Language: An Investigation of Relationships", |
| "authors": [ |
| { |
| "first": "Thora", |
| "middle": [], |
| "last": "Tenbrink", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "36", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thora Tenbrink. 2007. Space, Time, and the Use of Language: An Investigation of Relationships, volume 36. Walter de Gruyter.",
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "The use of spatial relations in referring expression generation", |
| "authors": [ |
| { |
| "first": "Jette", |
| "middle": [], |
| "last": "Viethen", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Dale", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Fifth International Natural Language Generation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "59--67", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jette Viethen and Robert Dale. 2008. The use of spatial relations in referring expression generation. In Proceedings of the Fifth International Natural Language Generation Conference, pages 59-67. ACL.",
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Learning to follow navigational directions", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "806--814", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam Vogel and Dan Jurafsky. 2010. Learning to follow navigational directions. In ACL, pages 806-814.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Learning semantic maps from natural language descriptions", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [ |
| "R" |
| ], |
| "last": "Walter", |
| "suffix": "" |
| }, |
| { |
| "first": "Sachithra", |
| "middle": [], |
| "last": "Hemachandra", |
| "suffix": "" |
| }, |
| { |
| "first": "Bianca", |
| "middle": [], |
| "last": "Homberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Tellex", |
| "suffix": "" |
| }, |
| { |
| "first": "Seth", |
| "middle": [], |
| "last": "Teller", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Robotics: Science and Systems IX Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew R. Walter, Sachithra Hemachandra, Bianca Homberg, Stefanie Tellex, and Seth Teller. 2013. Learning semantic maps from natural language descriptions. Proceedings of the 2013 Robotics: Science and Systems IX Conference.",
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "text": "to obtain a continuous vector representation h(x). This vector is then split into local and global components h(x) = [h_1(x); h_2(x)]. The local component, h_2(x), is reshaped into a kernel to perform a convolution operation on the state embedding \u03c6(s) (similar to Chen et al. (2015)):",
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "text": "Examples of failure cases for our model. Multiple levels of indirection in (a) and a long instruction filled with redundant information in (b) make the instruction difficult to interpret. Intended goal locations are outlined in red for clarity.", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "num": null, |
| "uris": null, |
| "text": "Effect of training set size on held-out predictions. The curves show the mean of ten training runs and the shaded regions show standard deviation. Our model's policy quality is greater than 0.90 with as few as 400 training goal annotations.", |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>4:</td><td>Sample instruction x \u2208 X and associated environment E</td></tr><tr><td>5:</td><td>Predict value map V(s, x; \u0398) for all s \u2208 E</td></tr><tr><td>6:</td><td>Choose start state s_0 randomly</td></tr><tr><td>7:</td><td>for t = 1, N do</td></tr><tr><td>8:</td><td>Select a_t = argmax_a \u2211_s T(s | s_{t-1}, a) V(s, x; \u0398)</td></tr><tr><td>9:</td><td>Observe next state s_t and reward r_t</td></tr><tr><td>10:</td><td>Store trajectory (s = s_0, s_1, ..., r = r_0, r_1, ...) in D</td></tr><tr><td>11:</td><td>for j = 1, J do</td></tr><tr><td>12:</td><td>Sample random trajectory (s, r) from D</td></tr><tr><td>13:</td><td></td></tr></table>",
| "text": "Algorithm 1 Training Procedure. 1: Initialize experience memory D. 2: Initialize model parameters \u0398. 3: for epoch = 1, M do.",
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td/><td>Local</td><td/><td/><td>Global</td><td/></tr><tr><td/><td>MSE</td><td>Policy Quality</td><td>Distance</td><td>MSE</td><td>Policy Quality</td><td>Distance</td></tr><tr><td>UVFA (pos)</td><td>1.30</td><td>0.48</td><td>4.96</td><td>1.04</td><td>0.52</td><td>5.39</td></tr><tr><td>UVFA (text)</td><td>3.23</td><td>0.57</td><td>4.97</td><td>1.9</td><td>0.62</td><td>5.31</td></tr><tr><td>CNN + LSTM</td><td>0.42</td><td>0.86</td><td>4.08</td><td>0.43</td><td>0.83</td><td>4.18</td></tr><tr><td>Our model (w/o gradient)</td><td>0.25</td><td>0.94</td><td>2.39</td><td>0.61</td><td>0.87</td><td>5.15</td></tr><tr><td>Our model</td><td>0.25</td><td>0.94</td><td>2.34</td><td>0.41</td><td>0.89</td><td>3.81</td></tr></table>", |
| "text": "Performance of models trained via reinforcement learning on a held-out set of environments and instructions. Policy quality is the true expected normalized reward and distance denotes the Manhattan distance from goal location prediction to true goal position. We show results from training on the local and global instructions both separately and jointly.", |
| "type_str": "table" |
| } |
| } |
| } |
| } |