| { |
| "title": "Decentralized Structural-RNN for Robot Crowd Navigation with Deep Reinforcement Learning", |
| "abstract": "Safe and efficient navigation through human crowds is an essential capability for mobile robots.\nPrevious work on robot crowd navigation assumes that the dynamics of all agents are known and well-defined. In addition, the performance of previous methods deteriorates in partially observable environments and environments with dense crowds.\nTo tackle these problems, we propose the decentralized structural Recurrent Neural Network (DS-RNN), a novel network that reasons about spatial and temporal relationships for robot decision making in crowd navigation.\nWe train our network with model-free deep reinforcement learning without any expert supervision.\nWe demonstrate that our model outperforms previous methods in challenging crowd navigation scenarios. We successfully transfer the policy learned in simulation to a real-world TurtleBot 2i.\nFor more information, please visit the project website at https://sites.google.com/view/crowdnav-ds-rnn/home.", |
| "sections": [ |
| { |
| "section_id": "1", |
| "parent_section_id": null, |
| "section_name": "Introduction", |
| "text": "As mobile robots become prevalent in people’s daily lives, autonomous navigation in crowded places with other dynamic agents is an important yet challenging problem [1, 2].\nInspired by the recent applications of deep learning in robot control [3, 4, 5, 6] and in graph modeling [7], we seek to build a learning-based graphical model for mobile robot navigation in pedestrian-rich environments.\nRobot crowd navigation is challenging for two key reasons. First, the problem is decentralized: each agent runs its own policy individually, which makes the environment only partially observable to the robot. For example, other agents’ preferred walking styles and intended goals are not known in advance and are difficult to infer online [8]. Second, the crowded environment contains both dynamic and static agents, who implicitly interact with each other during navigation.\nThe ways agents influence each other are often difficult to model [9], making the dynamic environment harder to navigate and likely to produce emergent phenomena [10].\nDespite these challenges, robot crowd navigation is well studied and has had many successful demonstrations [11, 12, 13]. Reaction-based methods such as Optimal Reciprocal Collision Avoidance (ORCA) and Social Force (SF) use one-step interaction rules to determine the robot’s optimal action [14, 12, 15]. Another line of work first predicts other agents’ future trajectories and then plans a path for the robot [16, 17, 18, 19]. However, both types of methods suffer from the freezing robot problem: in dense crowds, the planner decides that all paths are unsafe and the robot freezes, which is suboptimal because a feasible path usually exists [20].\nMore recently, learning-based methods model robot crowd navigation as a Markov Decision Process (MDP) and use Deep V-Learning to solve the MDP [21, 22, 23, 13, 24]. In Deep V-Learning, the agent chooses an action based on the state values approximated by a neural network.\nHowever, Deep V-Learning is typically initialized with ORCA demonstrations via supervised learning, so the final policy inherits ORCA’s aforementioned problems. Moreover, to choose actions from the value network, the humans’ dynamics are assumed to be known to the robot and deterministic, which can be unrealistic in real applications.\nIn this paper, we create a learning framework for robot crowd navigation that performs spatio-temporal reasoning and is trained with model-free deep reinforcement learning (RL).\nWe model the crowd navigation scenario as a decentralized spatio-temporal graph (st-graph) to capture the interactions between the robot and multiple humans through both space and time.\nWe then convert the decentralized st-graph into a novel end-to-end decentralized structural-RNN (DS-RNN) network.\nUsing model-free RL, our method directly learns a navigation policy without prior knowledge of any agent’s dynamics or any expert policy. Since the robot learns entirely from its own experience, the resulting navigation policy adapts easily to dense human crowds and partial observability, and outperforms previous methods in these scenarios.\nWe present the following contributions: (1) We propose a novel deep neural network architecture called DS-RNN, which enables the robot to perform efficient spatio-temporal reasoning in crowd navigation; (2) We train the network using model-free RL without any supervision, which both simplifies the learning pipeline and prevents the network from converging to a suboptimal policy too early; (3) Our method demonstrates better performance in challenging navigation settings compared with previous methods. Our code is available at https://github.com/Shuijing725/CrowdNav_DSRNN.\nThis paper is organized as follows: We review related works in Section II. We formalize the problem and propose our network architecture in Section III. Experiments and results in simulation and in the real world are discussed in Section IV and Section V, respectively. We discuss limitations in Section VI and conclude the paper in Section VII." |
| }, |
| { |
| "section_id": "2", |
| "parent_section_id": null, |
| "section_name": "II Related Works", |
| "text": "" |
| }, |
| { |
| "section_id": "2.1", |
| "parent_section_id": "2", |
| "section_name": "II-A Reaction-based methods", |
| "text": "Robot navigation in dynamic environments has been studied for over two decades [11, 25, 26, 27]. A subset of these works specifically focuses on robot navigation in pedestrian-rich environments, or crowd navigation [1, 28].\nReaction-based methods such as Reciprocal Velocity Obstacle (RVO) and ORCA model other agents as velocity obstacles and find optimal collision-free velocities under a reciprocity assumption [29, 14, 12].\nAnother method, Social Force (SF), models the interactions in crowds using attractive and repulsive forces [15]. However, these algorithms suffer from the freezing robot problem [20]. In addition, since the robot uses only the current states as input, the generated paths are often short-sighted and unnatural.\nIn contrast, we train our network with model-free RL to mitigate the freezing robot problem. Also, our network contains RNNs that take a sequence of trajectories as input to encourage long-sighted behaviors." |
| }, |
| { |
| "section_id": "2.2", |
| "parent_section_id": "2", |
| "section_name": "II-B Trajectory-based methods", |
| "text": "Trajectory-based methods predict other agents’ intended trajectories and then plan a feasible path for the robot [16, 17, 18, 19, 30, 31, 32]. Trajectory predictions allow the robot planner to look into the future and make long-sighted decisions. However, these methods have the following disadvantages. First, predicting trajectory sequences and searching for a path in a large state space online are computationally expensive and can be too slow for real-time operation [33].\nSecond, the predicted trajectories can render a large portion of the space untraversable, which may make the robot overly conservative [34]." |
| }, |
| { |
| "section_id": "2.3", |
| "parent_section_id": "2", |
| "section_name": "II-C Learning-based methods", |
| "text": "With the recent advancement of deep learning, imitation learning has been used to learn policies from demonstrations of desired behaviors [35, 36].\nAnother line of work uses Deep V-Learning, which combines supervised learning and RL [21, 22, 23, 13, 30, 24].\nGiven the state transitions of all agents, the planner first computes the values of all possible next states with a value network, and then chooses the action that leads to the state with the highest value.\nTo train the value network, Deep V-Learning first initializes the network with supervised learning on trajectories generated by ORCA, and then fine-tunes the network with RL. The ground-truth values for both supervised learning and RL are obtained by Monte-Carlo estimation over a single rollout of the policy.\nDeep V-Learning has demonstrated success in simulation and/or in the real world, but it still has the following drawbacks: (1) it assumes that the state transitions of all surrounding humans are known and well-defined, when in fact they are highly stochastic and difficult to model; (2) since the networks are pre-trained with supervised learning, they inherit the disadvantages of the demonstration policy, which are hard for RL to correct; (3) Monte-Carlo value estimation does not scale with an increasing time horizon; and (4) to achieve the best performance, Deep V-Learning needs state information of all humans. If applied to real robots, a real-time human detector with a full 360° field of view is required, which can be expensive or impractical.\nTo tackle these problems, we introduce a policy network trained with model-free RL, which needs no state transitions, Monte-Carlo value estimation, or expert supervision.\nFurther, we show that incorporating both spatial and temporal reasoning in our network improves performance over prior methods in challenging navigation environments." |
| }, |
| { |
| "section_id": "2.4", |
| "parent_section_id": "2", |
| "section_name": "II-D Spatio-temporal graphs and structural-RNN", |
| "text": "An st-graph is a type of conditional random field [37] with wide applications [7, 38, 39, 40]. St-graphs use nodes to represent the problem components and edges to capture the spatio-temporal interactions [7]. With each node or edge governed by a factor function, an st-graph decomposes a complex problem into many smaller and simpler factors.\nJain et al. propose a general method called structural-RNN (S-RNN) that transforms any st-graph into a mixture of RNNs that learn the parameters of the factor functions end-to-end [7]. S-RNNs have been applied to research areas such as human tracking and human trajectory prediction [41, 42]. However, the scope of these works is restricted to learning from static datasets. Applying st-graphs to crowd navigation poses extra challenges in data collection and decision making under uncertainty.\nAlthough some works in crowd navigation have used graph convolutional networks [43] to model the robot-crowd interactions [30, 24], to the best of our knowledge, our work is the first to combine S-RNN with model-free RL for robot crowd navigation." |
| }, |
| { |
| "section_id": "3", |
| "parent_section_id": null, |
| "section_name": "III Methodology", |
| "text": "In this section, we first formulate robot decision making in crowd navigation as an RL problem. Then, we present our approach to model the crowd navigation scenario as an st-graph, which leads to the derivation of our DS-RNN network architecture." |
| }, |
| { |
| "section_id": "3.1", |
| "parent_section_id": "3", |
| "section_name": "III-A Problem formulation", |
| "text": "Consider a robot interacting with an episodic environment containing humans. We model this interaction as an MDP, defined by the tuple ⟨S, A, P, R, γ⟩.\nSuppose that all agents move in a 2D Euclidean space. The MDP state s_t is the joint state of the robot and all n humans observable by the robot at timestep t. The robot state consists of the robot’s position, velocity, goal position, maximum speed, heading angle, and radius.\nEach human state consists only of the human’s position. In contrast to previous works [14, 21, 13, 30], the human state does not include the human’s velocity and radius, because they are hard to measure accurately in the real world.\nIn each episode, the robot begins at an initial state s_0. At each timestep t, the robot takes an action a_t according to its policy π(a_t | s_t). In return, the robot receives a reward r_t and transitions to the next state s_{t+1} according to an unknown state transition probability P(s_{t+1} | s_t, a_t). Meanwhile, all other humans also take actions according to their own policies and move to their next states with unknown state transition probabilities.\nThe process continues until t exceeds the maximum episode length T, the robot reaches its goal, or the robot collides with any human.\nLet γ ∈ (0, 1) be the discount factor. Then, R_t = Σ_{k=0}^∞ γ^k r_{t+k} is the total accumulated return from timestep t. The goal of the robot is to maximize the expected return from each state. The value of state s under policy π, defined as V^π(s) = E[R_t | s_t = s], is the expected return for following policy π from state s." |
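The return defined above can be sketched in a few lines. This is an illustrative snippet, not code from the paper; the function name and arguments are our own.

```python
# Illustrative sketch of the discounted return R_t from Sec. III-A
# (names are ours, not the paper's): R_t = sum_k gamma^k * r_{t+k}.

def discounted_return(rewards, gamma=0.99, t=0):
    """Total accumulated return from timestep t for a finite episode."""
    return sum(gamma ** k * r for k, r in enumerate(rewards[t:]))
```

For example, with per-step rewards [1, 0, 1] and gamma = 0.5, the return from t = 0 is 1 + 0.5·0 + 0.25·1 = 1.25.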
| }, |
| { |
| "section_id": "3.2", |
| "parent_section_id": "3", |
| "section_name": "III-B Spatio-Temporal Graph Representation", |
| "text": "We formulate the crowd navigation scenario as a decentralized st-graph. Our graph consists of a set of nodes, a set of spatial edges, and a set of temporal edges. As shown in Fig. 2a, the nodes in the st-graph represent the agents, the spatial edges connect two different agents at the same timestep, and the temporal edges connect the same node at adjacent timesteps. We prune the edges and nodes not shown in Fig. 2a, as they have little effect on the robot’s decisions (from experiments, we find that a network derived from a full st-graph, as in [7], performs very similarly to DS-RNN). The corresponding unrolled st-graph is shown in Fig. 2b.\nThe factor graph representation of the st-graph factorizes the robot policy function into the robot node factor, the spatial edge factors, and the robot temporal edge factor. At each timestep, the factors take the node or edge features as inputs and collectively determine the robot’s action. In Fig. 2c, factors are denoted by black boxes and have parameters that need to be learned.\nWe choose each spatial edge feature to be the vector pointing from the human’s position to the robot’s position, the robot temporal edge feature to be the robot’s velocity, and the robot node feature to be the robot state. To reduce the number of parameters, all spatial edges share the same factor. This parameter sharing is important for the scalability of our st-graph, because the number of parameters stays constant as the number of humans increases [7]." |
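As a concrete illustration of the shared spatial edge factor, the snippet below computes one feature per human-robot pair; the function name and tuple layout are our assumptions, not the paper's code.

```python
# Hypothetical sketch of the spatial edge features in Sec. III-B:
# each feature is the vector from a human's position to the robot's
# position, and the same (shared) factor consumes every pair's feature.

def spatial_edge_features(robot_pos, human_positions):
    """Return one 2D feature (p_robot - p_human) per human."""
    rx, ry = robot_pos
    return [(rx - hx, ry - hy) for (hx, hy) in human_positions]
```

Because every pair produces a feature of the same form, a single factor (one RNN) can process all of them, which is what keeps the parameter count independent of the crowd size.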
| }, |
| { |
| "section_id": "3.3", |
| "parent_section_id": "3", |
| "section_name": "III-C Network Architecture", |
| "text": "As shown in Fig. 3, we derive our network architecture from the factor graph representation of the st-graph, motivated by [7]. In our network, we represent each factor with an RNN, referred to as the spatial edgeRNN, the temporal edgeRNN, and the nodeRNN, respectively. Throughout this section, the trainable weights are those of fully connected layers.\nThe spatial edgeRNN captures the spatial interactions between humans and the robot. It first applies a nonlinear transformation to each spatial edge feature and then feeds the transformed result to its RNN cell, yielding a hidden state at time t for each human-robot pair. Due to the parameter sharing mentioned in Section III-B, the spatial edge features of all human-robot pairs are fed into the same spatial edgeRNN.\nThe temporal edgeRNN captures the dynamics of the robot’s own trajectory. Similarly, it applies a linear transformation to the temporal edge feature and processes the result with its RNN cell to obtain a hidden state at time t.\nThe outputs of the two edgeRNNs are fed into an attention module that assigns an attention weight to each spatial edge. The attention mechanism is similar to the scaled dot-product attention in [44]: the spatial edgeRNN hidden states (one per human) and the temporal edgeRNN hidden state are first put through linear transformations to obtain the keys and the query, whose dimension is a hyperparameter called the attention size; the attention weights at time t are computed as the scaled dot products of the query with the keys, normalized by a softmax; and the output of the attention module at time t is the attention-weighted sum of the spatial edge hidden states.\nThe nodeRNN uses the robot state, the weighted spatial edgeRNN hidden states, and the temporal edgeRNN hidden state to determine the robot action and state value at each time t. The nodeRNN first concatenates the two sets of hidden states, then embeds both the concatenated result and the robot state with linear transformations. The two embeddings are concatenated and fed into the nodeRNN cell to obtain the nodeRNN hidden state.\nFinally, the nodeRNN hidden state is input to a fully connected layer to obtain the value V(s_t) and the policy π(a_t | s_t). We use Proximal Policy Optimization (PPO), a model-free policy gradient algorithm, for policy and value function learning [45], and we adopt the PPO implementation from [46]. To accelerate and stabilize training, we run twelve instances of the environment in parallel to collect the robot’s experiences. At each policy update, 30 steps from six episodes are used.\nBy identifying the independent components of robot crowd navigation, we split the complex problem into smaller factors and use three RNNs to efficiently learn the parameters of the corresponding factors. By combining all of the components above, the end-to-end trainable DS-RNN network performs spatial and temporal reasoning to determine the robot action." |
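The attention step described above can be sketched as follows. This is a minimal NumPy version under assumed shapes, with projection matrices W_q and W_k standing in for the paper's trainable layers; it is not the authors' implementation.

```python
import numpy as np

# Scaled dot-product attention over spatial edges (cf. Sec. III-C).
# h_spatial: (n, d) spatial edgeRNN hidden states, one row per human.
# h_temporal: (d,) temporal edgeRNN hidden state (the query source).
# W_q, W_k: (d, d_a) assumed linear maps to the attention size d_a.

def spatial_attention(h_spatial, h_temporal, W_q, W_k):
    d_a = W_q.shape[1]
    q = h_temporal @ W_q               # query from the robot's temporal edge
    k = h_spatial @ W_k                # one key per spatial edge
    scores = k @ q / np.sqrt(d_a)      # scaled dot products, shape (n,)
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return w @ h_spatial               # weighted sum of spatial edges, (d,)
```

The output collapses the n per-human hidden states into a single fixed-size crowd summary, which is what lets the downstream nodeRNN handle a varying number of humans.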
| }, |
| { |
| "section_id": "4", |
| "parent_section_id": null, |
| "section_name": "IV Simulation Experiments", |
| "text": "In this section, we describe the simulation environment for training and present our experimental results in simulation." |
| }, |
| { |
| "section_id": "4.1", |
| "parent_section_id": "4", |
| "section_name": "IV-A Simulation environment", |
| "text": "Fig. 4 shows our 2D simulation environments, adapted from [13]. We use holonomic kinematics for each agent, whose action at time t consists of the desired velocities along the x and y axes. All humans are controlled by ORCA with randomized maximum speeds and radii.\nWe assume that humans react only to other humans but not to the robot. This invisible setting prevents our model from learning an extremely aggressive policy in which the robot forces all humans to yield while achieving a high reward.\nWe also assume that all agents achieve their desired velocities immediately and keep moving with these velocities for Δt seconds.\nThe update rule for an agent’s position is then p_{t+Δt} = p_t + v_t Δt along each axis." |
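A minimal sketch of this holonomic update; the timestep value below is an assumption, since the paper's Δt was not recoverable from the text.

```python
# Holonomic kinematics from Sec. IV-A: the agent instantly attains the
# desired velocity and holds it for dt seconds. dt = 0.25 is an assumed
# value, not taken from the paper.

def step_position(pos, vel, dt=0.25):
    """p_{t+dt} = p_t + v_t * dt, applied per axis."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
```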
| }, |
| { |
| "section_id": "4.1.1", |
| "parent_section_id": "4.1", |
| "section_name": "IV-A1 Environment configurations", |
| "text": "Fig. 4(a) shows the FoV Environment, where the robot’s field of view (FoV) is limited (90°, 180°, or 360° across experiments) and remains unchanged within each episode. The robot assumes that humans out of its view proceed in straight lines with their last observed velocities. There are always five humans, whose starting and goal positions are randomly placed on a circle of fixed radius. The FoV Environment simulates the limited sensor range of a robot, since deploying several sensors to obtain a full 360° FoV is usually expensive and unrealistic in the real world.\nFig. 4(b) shows the Group Environment, where the robot’s FoV is a full 360° but the number of humans is large and remains the same within each episode. Among these humans, some form circular groups at random positions and do not move, while the rest move freely. The Group Environment simulates a dense crowd with both static and dynamic obstacles. We use this environment to evaluate whether the robot policies suffer from the freezing robot problem.\nTo simulate the variety of complexities in real-world crowd navigation, we add the following randomness and features not included in the original simulator from [13]. When an episode begins, the robot’s initial position and goal are chosen randomly. In addition, all humans occasionally change their goal positions within an episode. Finally, to simulate a continuous human flow, immediately after humans arrive at their goal positions, they move to new random goals instead of remaining stationary at their initial destinations." |
| }, |
| { |
| "section_id": "4.1.2", |
| "parent_section_id": "4.1", |
| "section_name": "IV-A2 Reward function", |
| "text": "The reward function rewards the robot for reaching its goal and penalizes it for colliding with humans or getting too close to them. In addition, we add a potential-based reward-shaping term, proportional to the decrease in the robot’s distance to the goal, to guide the robot toward the goal. The reward depends on the minimum separation distance between the robot and any human at time t and on the distance between the robot’s position and its goal position at time t.\nIntuitively, the robot receives a high reward when it approaches the goal while maintaining a safe distance from all humans." |
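The reward structure can be sketched as below. The exact constants were not recoverable from the text, so every number here (success bonus, collision penalty, discomfort distance, shaping weight) is an illustrative assumption.

```python
# Hypothetical sketch of the reward in Sec. IV-A2: goal bonus, collision
# penalty, a discomfort penalty near humans, and potential-based shaping.
# All constants are assumed, not the paper's values.

def reward(d_min, d_goal, prev_d_goal, reached=False, collided=False,
           d_discomfort=0.25, w_shaping=2.0):
    if collided:
        return -20.0                          # collision penalty
    if reached:
        return 10.0                           # goal-reaching bonus
    r = w_shaping * (prev_d_goal - d_goal)    # shaping: reward goal progress
    if d_min < d_discomfort:
        r -= 0.1 * (d_discomfort - d_min)     # penalize getting too close
    return r
```

The shaping term is potential-based (a difference of goal distances between consecutive steps), which guides exploration without changing the optimal policy.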
| }, |
| { |
| "section_id": "4.2", |
| "parent_section_id": "4", |
| "section_name": "IV-B Experiment setup", |
| "text": "" |
| }, |
| { |
| "section_id": "4.2.1", |
| "parent_section_id": "4.2", |
| "section_name": "IV-B1 Baselines and Ablation Models", |
| "text": "We compare the performance of our model with representatives of the three types of methods in Section II. We use ORCA and SF as baselines for reaction-based methods; Relational Graph Learning (RGL) [30] as a baseline for both trajectory-based methods and Deep V-Learning; and CADRL [21] and OM-SARL [13] as baselines for Deep V-Learning.\nTo remove performance gains caused by other factors, such as model-free RL and RNNs, we also implement an ablation model, called RNN+Attn, by adding an RNN to the end of the OM-SARL network. In RNN+Attn, the attention module assigns attention weights to the state features of the humans. The weighted human features are then concatenated with the robot state features to form joint state features, which are passed to an RNN with the same size and sequence length as the nodeRNN in our model.\nBoth networks are trained using PPO with the same hyperparameters, so the results serve as a clean comparison that highlights the benefits of our DS-RNN." |
| }, |
| { |
| "section_id": "4.2.2", |
| "parent_section_id": "4.2", |
| "section_name": "IV-B2 Training", |
| "text": "We use the same reward, as defined in Section IV-A2, for CADRL, OM-SARL, RGL, RNN+Attn, and DS-RNN.\nThe network architectures of all methods are kept the same in all experiments.\nWe train DS-RNN and RNN+Attn for the same number of timesteps with the same learning rate. We train all baselines as stated in their original papers." |
| }, |
| { |
| "section_id": "4.2.3", |
| "parent_section_id": "4.2", |
| "section_name": "IV-B3 Evaluation", |
| "text": "We evaluate the performance of all models in six experiments: in the FoV Environment, the FoV of the robot is 90°, 180°, or 360°; in the Group Environment, the number of humans is 10, 15, or 20. For each of the six experiments, we test all models on a set of random unseen test cases. We measure the percentages of success, collision, and timeout episodes, as well as the average navigation time of the successful episodes." |
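A small helper like the following (our own sketch, not the authors' evaluation code) computes these metrics from a list of episode outcomes:

```python
# Aggregate the evaluation metrics of Sec. IV-B3: success / collision /
# timeout rates, plus the mean navigation time of successful episodes.

def summarize(episodes):
    """episodes: list of (outcome, nav_time) pairs, with outcome in
    {'success', 'collision', 'timeout'} and nav_time in seconds."""
    n = len(episodes)
    metrics = {o: sum(out == o for out, _ in episodes) / n
               for o in ("success", "collision", "timeout")}
    times = [t for out, t in episodes if out == "success"]
    metrics["avg_nav_time"] = sum(times) / len(times) if times else None
    return metrics
```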
| }, |
| { |
| "section_id": "4.3", |
| "parent_section_id": "4", |
| "section_name": "IV-C Results", |
| "text": "(Figure panels: results in the Group Environment with 20 humans and in the FoV Environment.)" |
| }, |
| { |
| "section_id": "4.3.1", |
| "parent_section_id": "4.3", |
| "section_name": "IV-C1 Spatio-temporal reasoning", |
| "text": "We show the effectiveness of our DS-RNN architecture by comparing it with RNN+Attn.\nIn Fig. 5 and Fig. 6, compared with RNN+Attn, our model exhibits higher success rates and lower collision and timeout rates in all settings. In Table I, our model has shorter navigation times because DS-RNN often finds a better path (Fig. 7e and 7f).\nWe believe the main reason is that, by formulating crowd navigation as an st-graph, we decompose robot decision making into smaller factors and feed each RNN only the relevant edge or node features. In this way, the three RNNs are able to learn their corresponding factors more effectively. By combining all factors (RNNs), the robot can explicitly reason about its spatial relationships with humans and its own dynamics when taking actions. In contrast, RNN+Attn lacks such spatio-temporal reasoning and learns all factors with a single RNN, which explains its lower performance." |
| }, |
| { |
| "section_id": "4.3.2", |
| "parent_section_id": "4.3", |
| "section_name": "IV-C2 Comparison with traditional methods", |
| "text": "We compare the performance of our model with those of ORCA and SF. As shown in Fig. 6, in the Group Environment, ORCA and SF exhibit high timeout rates, which increase significantly as the number of humans increases. This observation indicates that the freezing robot problem is prevalent in these mixed static and dynamic settings (Fig. 7a).\nAlso, as Table I suggests, the long navigation times show that both methods are overly conservative in dense crowds.\nIn the FoV Environment, our method also outperforms ORCA and SF in most metrics, as shown in Fig. 5 and Table I, because our method explores the environment and learns from past experience during RL training. Combined with spatio-temporal reasoning, our method is thus able to better adapt to dense and partially observable environments. In addition, with our method the robot is long-sighted, because RL optimizes the policy over the cumulative reward and the RNNs take a sequence of trajectories to make decisions, while ORCA and SF consider only the current state (Fig. 7c)." |
| }, |
| { |
| "section_id": "4.3.3", |
| "parent_section_id": "4.3", |
| "section_name": "IV-C3 Comparison with Deep V-Learning", |
| "text": "We compare model-free RL training with the Deep V-Learning used by CADRL, OM-SARL, and RGL. In Fig. 6, all three baselines exhibit large timeout rates. The reason is that the value networks are initialized by a suboptimal expert (ORCA) in supervised learning and are insufficient to provide good state value estimates, resulting in policies that inherit ORCA’s drawbacks.\nIn contrast, model-free RL enables RNN+Attn and DS-RNN to learn from scratch and prevents the networks from converging to a suboptimal policy too early.\nIn addition, despite the unknown state transitions of all agents, RNN+Attn and our method still perform better in all metrics compared with Deep V-Learning.\nAs shown in Fig. 7d and 7f, RGL is competitive with our method in some cases, because the relational graph can perform spatial reasoning and the human trajectory predictions make RGL long-sighted.\nHowever, in RGL the relational graph and the robot planner are separate modules, while our network is trained end-to-end and jointly learns the robot-human interactions and decision making." |
| }, |
| { |
| "section_id": "5", |
| "parent_section_id": null, |
| "section_name": "Real-world Experiments", |
| "text": "We evaluate our trained model’s performance on a TurtleBot 2i mobile platform, as shown in Fig. 1.\nAn Intel RealSense depth camera D435 with a limited FoV is used to obtain human positions. We use YOLOv3 [47] for human detection and Deep SORT [48] for human tracking (our implementation is adopted from [49]). The human detections and tracks are combined with the camera’s depth information to calculate human positions.\nAn Intel RealSense tracking camera T265 is used to localize the robot and obtain its orientation.\nWe run the above perception algorithms and our decision-making model on a remote host computer. The communication between the robot and the host computer is established through ROS. A video demonstration is available at https://youtu.be/bYO-1IAjzgY, in which the robot successfully reaches its goals while maneuvering to maintain a safe distance from humans in various scenarios." |
| }, |
| { |
| "section_id": "6", |
| "parent_section_id": null, |
| "section_name": "VI Limitations", |
| "text": "Our work has the following limitations. First, the invisible setting of our simulation environment differs from reality, where the motions of pedestrians and the robot mutually affect each other. Since the robot does not affect human behaviors, it is difficult to quantify the social awareness of the robot and incorporate it into our design. Second, in the real-world experiments, due to sensor limitations, the detected human positions are noisy, which causes differences in robot behaviors between simulation and the real world. Third, the total number of humans is fixed for each network model, which poses challenges to generalizing our model to navigation scenarios with real human flows." |
| }, |
| { |
| "section_id": "7", |
| "parent_section_id": null, |
| "section_name": "VII Conclusion and future work", |
| "text": "We propose a novel DS-RNN network that incorporates spatial and temporal reasoning into robot decision making for crowd navigation.\nWe train our DS-RNN with model-free deep RL, without any supervised learning or assumptions about agents’ dynamics.\nOur experiments show that our model outperforms various baselines in challenging simulation environments and shows promising results in the real world.\nPossible directions for future work include (1) utilizing the mutual interactions between the robot and humans to improve our model, and (2) enabling our network to take raw camera images as input to simplify detection and localization in the real world." |
| } |
| ], |
| "appendix": [], |
| "tables": { |
| "1": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.11.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S4.T1.12.2\" style=\"font-size:90%;\">Navigation time (second) in two environments.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.9.9\">\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T1.9.9.10.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.10.1.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T1.9.9.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.10.2.1\">FoV</span></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T1.9.9.10.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T1.9.9.10.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.10.4.1\">Number of Humans</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.2.2.2\"> 90<sup class=\"ltx_sup\" id=\"S4.T1.2.2.2.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.2.2.2.2.1.1\">∘</span></sup>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.4.4.4.4\"> 180<sup class=\"ltx_sup\" id=\"S4.T1.4.4.4.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.4.4.4.4.1.1\">∘</span></sup>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.6.6.6.6\"> 360<sup class=\"ltx_sup\" id=\"S4.T1.6.6.6.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.6.6.6.6.1.1\">∘</span></sup>\n</td>\n<td class=\"ltx_td\" id=\"S4.T1.9.9.9.10\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.7.7.7.7\"> 10</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.8.8.8.8\"> 15</td>\n<td class=\"ltx_td ltx_nopad_r 
ltx_align_left ltx_border_t\" id=\"S4.T1.9.9.9.9\"> 20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.9.9.11.1\">ORCA</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.9.9.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.11.2.1\">9.12</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.9.9.11.3\">9.96</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.9.9.11.4\">10.40</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T1.9.9.11.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.9.9.11.6\">15.94</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.9.9.11.7\">19.09</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S4.T1.9.9.11.8\">19.66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.12.1\">SF</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.12.2\">20.86</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.12.3\">24.28</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.12.4\">23.96</td>\n<td class=\"ltx_td\" id=\"S4.T1.9.9.12.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.12.6\">24.60</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.12.7\">30.26</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.9.9.12.8\">31.12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.13.1\">CADRL</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.13.2\">30.84</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.13.3\">27.54</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.13.4\">33.46</td>\n<td class=\"ltx_td\" id=\"S4.T1.9.9.13.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.13.6\">32.62</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.13.7\">36.81</td>\n<td class=\"ltx_td ltx_nopad_r 
ltx_align_left\" id=\"S4.T1.9.9.13.8\">41.75</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.14.1\">OM-SARL</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.14.2\">18.32</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.14.3\">13.70</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.14.4\">21.04</td>\n<td class=\"ltx_td\" id=\"S4.T1.9.9.14.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.14.6\">27.25</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.14.7\">23.79</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.9.9.14.8\">29.24</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.15.1\">RGL</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.15.2\">9.54</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.15.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.15.3.1\">9.48</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.15.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.15.4.1\">9.59</span></td>\n<td class=\"ltx_td\" id=\"S4.T1.9.9.15.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.15.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.15.6.1\">13.22</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.15.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.15.7.1\">14.75</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.9.9.15.8\">16.44</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.16.1\">RNN+Attn</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.16.2\">16.57</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.16.3\">14.00</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.16.4\">10.96</td>\n<td class=\"ltx_td\" id=\"S4.T1.9.9.16.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.16.6\">16.01</td>\n<td 
class=\"ltx_td ltx_align_left\" id=\"S4.T1.9.9.16.7\">21.31</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.9.9.16.8\">25.55</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.9.9.17.1\">DS-RNN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.9.9.17.2\">11.83</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.9.9.17.3\">10.99</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.9.9.17.4\">11.79</td>\n<td class=\"ltx_td ltx_border_bb\" id=\"S4.T1.9.9.17.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.9.9.17.6\">13.51</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.9.9.17.7\">15.64</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S4.T1.9.9.17.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.9.9.17.8.1\">15.52</span></td>\n</tr>\n</table>\n</figure>", |
| "capture": "TABLE I: Navigation time (second) in two environments." |
| } |
| }, |
| "image_paths": { |
| "1": { |
| "figure_path": "2011.04820v4_figure_1.png", |
| "caption": "Figure 1: Real-world crowd navigation with a TurtleBot 2i. The orange cone on the floor denotes the robot goal. The TurtleBot is equipped with cameras for localization and human tracking.", |
| "url": "http://arxiv.org/html/2011.04820v4/extracted/6159954/Figures/open2.jpg" |
| }, |
| "2": { |
| "figure_path": "2011.04820v4_figure_2.png", |
| "caption": "Figure 2: Conversion from the st-graph to the factor graph. (a) St-graph representation of the crowd navigation scenario. We use ww\\mathrm{w}roman_w to denote the robot node and uisubscriptu𝑖\\mathrm{u}_{i}roman_u start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT to denote the i𝑖iitalic_i-th human node. (b) Unrolled st-graph for two timsteps. At timestep t𝑡titalic_t, the node feature for the robot is xwtsubscriptsuperscript𝑥𝑡𝑤x^{t}_{w}italic_x start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT. The spatial edge feature between the i𝑖iitalic_i-th human and the robot is xuiwtsuperscriptsubscript𝑥subscript𝑢𝑖𝑤𝑡x_{u_{i}w}^{t}italic_x start_POSTSUBSCRIPT italic_u start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT. The temporal edge feature for the robot is xwwtsuperscriptsubscript𝑥𝑤𝑤𝑡x_{ww}^{t}italic_x start_POSTSUBSCRIPT italic_w italic_w end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT. (c) The corresponding factor graph. Factors are denoted by black boxes.", |
| "url": "http://arxiv.org/html/2011.04820v4/extracted/6159954/Figures/SRNN_partial.png" |
| }, |
| "3": { |
| "figure_path": "2011.04820v4_figure_3.png", |
| "caption": "Figure 3: DS-RNN network architecture. The components for processing spatial edge features, temporal edge features, and node features are in blue, green, and yellow, respectively. Fully connected layers are denoted as FC𝐹𝐶FCitalic_F italic_C.", |
| "url": "http://arxiv.org/html/2011.04820v4/extracted/6159954/Figures/network.png" |
| }, |
| "4(a)": { |
| "figure_path": "2011.04820v4_figure_4(a).png", |
| "caption": "(a)\nFigure 4: Illustration of our simulation environment.\nIn a 12m×12m12𝑚12𝑚12m\\times 12m12 italic_m × 12 italic_m 2D2𝐷2D2 italic_D plane, the humans are represented as circles, the orientation of an agent is indicated by a red arrow, the robot is the yellow disk, and the robot’s goal is the red star. We outline the borders of the robot FoV with dashed lines. The humans in the robot’s FoV are blue and the humans outside are red.", |
| "url": "http://arxiv.org/html/2011.04820v4/extracted/6159954/Figures/FOV_env.png" |
| }, |
| "4(b)": { |
| "figure_path": "2011.04820v4_figure_4(b).png", |
| "caption": "(b)\nFigure 4: Illustration of our simulation environment.\nIn a 12m×12m12𝑚12𝑚12m\\times 12m12 italic_m × 12 italic_m 2D2𝐷2D2 italic_D plane, the humans are represented as circles, the orientation of an agent is indicated by a red arrow, the robot is the yellow disk, and the robot’s goal is the red star. We outline the borders of the robot FoV with dashed lines. The humans in the robot’s FoV are blue and the humans outside are red.", |
| "url": "http://arxiv.org/html/2011.04820v4/extracted/6159954/Figures/group_env.png" |
| }, |
| "5(a)": { |
| "figure_path": "2011.04820v4_figure_5(a).png", |
| "caption": "Figure 5: Success, timeout, and collision rates w.r.t. different FoV. The numbers on the bars indicate the percentages of the corresponding bars.", |
| "url": "http://arxiv.org/html/2011.04820v4/extracted/6159954/Figures/bar_graphs/FOV/FOV_ORCA.png" |
| }, |
| "5(b)": { |
| "figure_path": "2011.04820v4_figure_5(b).png", |
| "caption": "Figure 5: Success, timeout, and collision rates w.r.t. different FoV. The numbers on the bars indicate the percentages of the corresponding bars.", |
| "url": "http://arxiv.org/html/2011.04820v4/x1.png" |
| }, |
| "5(c)": { |
| "figure_path": "2011.04820v4_figure_5(c).png", |
| "caption": "Figure 5: Success, timeout, and collision rates w.r.t. different FoV. The numbers on the bars indicate the percentages of the corresponding bars.", |
| "url": "http://arxiv.org/html/2011.04820v4/x2.png" |
| }, |
| "5(d)": { |
| "figure_path": "2011.04820v4_figure_5(d).png", |
| "caption": "Figure 5: Success, timeout, and collision rates w.r.t. different FoV. The numbers on the bars indicate the percentages of the corresponding bars.", |
| "url": "http://arxiv.org/html/2011.04820v4/x3.png" |
| }, |
| "5(e)": { |
| "figure_path": "2011.04820v4_figure_5(e).png", |
| "caption": "Figure 5: Success, timeout, and collision rates w.r.t. different FoV. The numbers on the bars indicate the percentages of the corresponding bars.", |
| "url": "http://arxiv.org/html/2011.04820v4/x4.png" |
| }, |
| "5(f)": { |
| "figure_path": "2011.04820v4_figure_5(f).png", |
| "caption": "Figure 5: Success, timeout, and collision rates w.r.t. different FoV. The numbers on the bars indicate the percentages of the corresponding bars.", |
| "url": "http://arxiv.org/html/2011.04820v4/x5.png" |
| }, |
| "5(g)": { |
| "figure_path": "2011.04820v4_figure_5(g).png", |
| "caption": "Figure 5: Success, timeout, and collision rates w.r.t. different FoV. The numbers on the bars indicate the percentages of the corresponding bars.", |
| "url": "http://arxiv.org/html/2011.04820v4/x6.png" |
| }, |
| "6(a)": { |
| "figure_path": "2011.04820v4_figure_6(a).png", |
| "caption": "Figure 6: Success, timeout, and collision rates w.r.t. different number of humans.", |
| "url": "http://arxiv.org/html/2011.04820v4/extracted/6159954/Figures/bar_graphs/group/Group_ORCA.png" |
| }, |
| "6(b)": { |
| "figure_path": "2011.04820v4_figure_6(b).png", |
| "caption": "Figure 6: Success, timeout, and collision rates w.r.t. different number of humans.", |
| "url": "http://arxiv.org/html/2011.04820v4/x7.png" |
| }, |
| "6(c)": { |
| "figure_path": "2011.04820v4_figure_6(c).png", |
| "caption": "Figure 6: Success, timeout, and collision rates w.r.t. different number of humans.", |
| "url": "http://arxiv.org/html/2011.04820v4/x8.png" |
| }, |
| "6(d)": { |
| "figure_path": "2011.04820v4_figure_6(d).png", |
| "caption": "Figure 6: Success, timeout, and collision rates w.r.t. different number of humans.", |
| "url": "http://arxiv.org/html/2011.04820v4/x9.png" |
| }, |
| "6(e)": { |
| "figure_path": "2011.04820v4_figure_6(e).png", |
| "caption": "Figure 6: Success, timeout, and collision rates w.r.t. different number of humans.", |
| "url": "http://arxiv.org/html/2011.04820v4/x10.png" |
| }, |
| "6(f)": { |
| "figure_path": "2011.04820v4_figure_6(f).png", |
| "caption": "Figure 6: Success, timeout, and collision rates w.r.t. different number of humans.", |
| "url": "http://arxiv.org/html/2011.04820v4/x11.png" |
| }, |
| "6(g)": { |
| "figure_path": "2011.04820v4_figure_6(g).png", |
| "caption": "Figure 6: Success, timeout, and collision rates w.r.t. different number of humans.", |
| "url": "http://arxiv.org/html/2011.04820v4/x12.png" |
| }, |
| "7(a)": { |
| "figure_path": "2011.04820v4_figure_7(a).png", |
| "caption": "(a) ORCA\nFigure 7: Trajectory comparisons of different methods with the same test cases. Letter “S” denotes moving agents’ starting positions, and stars denote moving agents’ goals. The yellow filled circle denotes the robot. For the Group Environment (top), static humans are grouped in three circles.", |
| "url": "http://arxiv.org/html/2011.04820v4/x13.png" |
| }, |
| "7(b)": { |
| "figure_path": "2011.04820v4_figure_7(b).png", |
| "caption": "(b) OM-SARL\nFigure 7: Trajectory comparisons of different methods with the same test cases. Letter “S” denotes moving agents’ starting positions, and stars denote moving agents’ goals. The yellow filled circle denotes the robot. For the Group Environment (top), static humans are grouped in three circles.", |
| "url": "http://arxiv.org/html/2011.04820v4/x14.png" |
| }, |
| "7(c)": { |
| "figure_path": "2011.04820v4_figure_7(c).png", |
| "caption": "(c) DS-RNN\nFigure 7: Trajectory comparisons of different methods with the same test cases. Letter “S” denotes moving agents’ starting positions, and stars denote moving agents’ goals. The yellow filled circle denotes the robot. For the Group Environment (top), static humans are grouped in three circles.", |
| "url": "http://arxiv.org/html/2011.04820v4/x18.png" |
| }, |
| "7(d)": { |
| "figure_path": "2011.04820v4_figure_7(d).png", |
| "caption": "(d) RGL\nFigure 7: Trajectory comparisons of different methods with the same test cases. Letter “S” denotes moving agents’ starting positions, and stars denote moving agents’ goals. The yellow filled circle denotes the robot. For the Group Environment (top), static humans are grouped in three circles.", |
| "url": "http://arxiv.org/html/2011.04820v4/x16.png" |
| }, |
| "7(e)": { |
| "figure_path": "2011.04820v4_figure_7(e).png", |
| "caption": "(e) RNN+Attn\nFigure 7: Trajectory comparisons of different methods with the same test cases. Letter “S” denotes moving agents’ starting positions, and stars denote moving agents’ goals. The yellow filled circle denotes the robot. For the Group Environment (top), static humans are grouped in three circles.", |
| "url": "http://arxiv.org/html/2011.04820v4/x17.png" |
| }, |
| "7(f)": { |
| "figure_path": "2011.04820v4_figure_7(f).png", |
| "caption": "(f) DS-RNN\nFigure 7: Trajectory comparisons of different methods with the same test cases. Letter “S” denotes moving agents’ starting positions, and stars denote moving agents’ goals. The yellow filled circle denotes the robot. For the Group Environment (top), static humans are grouped in three circles.", |
| "url": "http://arxiv.org/html/2011.04820v4/x18.png" |
| } |
| }, |
| "validation": true, |
| "references": [], |
| "url": "http://arxiv.org/html/2011.04820v4", |
| "section_res": { |
| "Introduction_RelatedWorks": [ |
| "1", |
| "2" |
| ], |
| "Method": [ |
| "3" |
| ], |
| "Experiment": [ |
| "4", |
| "5" |
| ], |
| "Conclusion": [ |
| "6", |
| "7" |
| ] |
| }, |
| "new_table": { |
| "I": { |
| "renderDpi": 300, |
| "name": "I", |
| "page": 4, |
| "figType": "Table", |
| "regionBoundary": { |
| "x1": 55.919999999999995, |
| "y1": 351.84, |
| "x2": 297.12, |
| "y2": 450.24 |
| }, |
| "caption": "TABLE I: Navigation time (second) in two environments.", |
| "renderURL": "2011.04820v4-TableI-1.png", |
| "captionBoundary": { |
| "x1": 58.0849609375, |
| "y1": 336.87054443359375, |
| "x2": 294.71661376953125, |
| "y2": 342.8730163574219 |
| } |
| } |
| } |
| } |