Daoze committed
Commit d6cab57 · verified · 1 Parent(s): 49ae35d

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. papers/CoRL/CoRL 2022/CoRL 2022 Conference/4g3PwAp5nsX/Initial_manuscript_md/Initial_manuscript.md +245 -0
  2. papers/CoRL/CoRL 2022/CoRL 2022 Conference/4nt6RUGmILw/Initial_manuscript_md/Initial_manuscript.md +291 -0
  3. papers/CoRL/CoRL 2022/CoRL 2022 Conference/4nt6RUGmILw/Initial_manuscript_tex/Initial_manuscript.tex +145 -0
  4. papers/CoRL/CoRL 2022/CoRL 2022 Conference/52c5e73SlS2/Initial_manuscript_md/Initial_manuscript.md +223 -0
  5. papers/CoRL/CoRL 2022/CoRL 2022 Conference/52c5e73SlS2/Initial_manuscript_tex/Initial_manuscript.tex +263 -0
  6. papers/CoRL/CoRL 2022/CoRL 2022 Conference/52uzsIGV32_/Initial_manuscript_md/Initial_manuscript.md +249 -0
  7. papers/CoRL/CoRL 2022/CoRL 2022 Conference/52uzsIGV32_/Initial_manuscript_tex/Initial_manuscript.tex +275 -0
  8. papers/CoRL/CoRL 2022/CoRL 2022 Conference/5GJ-_KMLASa/Initial_manuscript_md/Initial_manuscript.md +249 -0
  9. papers/CoRL/CoRL 2022/CoRL 2022 Conference/5GJ-_KMLASa/Initial_manuscript_tex/Initial_manuscript.tex +181 -0
  10. papers/CoRL/CoRL 2022/CoRL 2022 Conference/6BIffCl6gsM/Initial_manuscript_md/Initial_manuscript.md +255 -0
  11. papers/CoRL/CoRL 2022/CoRL 2022 Conference/6BIffCl6gsM/Initial_manuscript_tex/Initial_manuscript.tex +200 -0
  12. papers/CoRL/CoRL 2022/CoRL 2022 Conference/6gEyD5zg0dt/Initial_manuscript_md/Initial_manuscript.md +229 -0
  13. papers/CoRL/CoRL 2022/CoRL 2022 Conference/6gEyD5zg0dt/Initial_manuscript_tex/Initial_manuscript.tex +193 -0
  14. papers/CoRL/CoRL 2022/CoRL 2022 Conference/7CrXRhmzVVR/Initial_manuscript_md/Initial_manuscript.md +333 -0
  15. papers/CoRL/CoRL 2022/CoRL 2022 Conference/7CrXRhmzVVR/Initial_manuscript_tex/Initial_manuscript.tex +249 -0
  16. papers/CoRL/CoRL 2022/CoRL 2022 Conference/7JVNhaMbZUu/Initial_manuscript_md/Initial_manuscript.md +285 -0
  17. papers/CoRL/CoRL 2022/CoRL 2022 Conference/7JVNhaMbZUu/Initial_manuscript_tex/Initial_manuscript.tex +256 -0
  18. papers/CoRL/CoRL 2022/CoRL 2022 Conference/7RyzGWLk79H/Initial_manuscript_md/Initial_manuscript.md +267 -0
  19. papers/CoRL/CoRL 2022/CoRL 2022 Conference/7RyzGWLk79H/Initial_manuscript_tex/Initial_manuscript.tex +229 -0
  20. papers/CoRL/CoRL 2022/CoRL 2022 Conference/7ZcePvChS7u/Initial_manuscript_md/Initial_manuscript.md +285 -0
  21. papers/CoRL/CoRL 2022/CoRL 2022 Conference/7ZcePvChS7u/Initial_manuscript_tex/Initial_manuscript.tex +178 -0
  22. papers/CoRL/CoRL 2022/CoRL 2022 Conference/8-8e18idYLD/Initial_manuscript_md/Initial_manuscript.md +289 -0
  23. papers/CoRL/CoRL 2022/CoRL 2022 Conference/8-8e18idYLD/Initial_manuscript_tex/Initial_manuscript.tex +231 -0
  24. papers/CoRL/CoRL 2022/CoRL 2022 Conference/80vpxjt3vq/Initial_manuscript_md/Initial_manuscript.md +271 -0
  25. papers/CoRL/CoRL 2022/CoRL 2022 Conference/80vpxjt3vq/Initial_manuscript_tex/Initial_manuscript.tex +197 -0
  26. papers/CoRL/CoRL 2022/CoRL 2022 Conference/8ktEdb5NHEh/Initial_manuscript_md/Initial_manuscript.md +253 -0
  27. papers/CoRL/CoRL 2022/CoRL 2022 Conference/8ktEdb5NHEh/Initial_manuscript_tex/Initial_manuscript.tex +189 -0
  28. papers/CoRL/CoRL 2022/CoRL 2022 Conference/8tmKW-NG2bH/Initial_manuscript_md/Initial_manuscript.md +239 -0
  29. papers/CoRL/CoRL 2022/CoRL 2022 Conference/8tmKW-NG2bH/Initial_manuscript_tex/Initial_manuscript.tex +169 -0
  30. papers/CoRL/CoRL 2022/CoRL 2022 Conference/A5l7wE2uqtM/Initial_manuscript_md/Initial_manuscript.md +249 -0
  31. papers/CoRL/CoRL 2022/CoRL 2022 Conference/A5l7wE2uqtM/Initial_manuscript_tex/Initial_manuscript.tex +244 -0
  32. papers/CoRL/CoRL 2022/CoRL 2022 Conference/AdFROt9BoqE/Initial_manuscript_md/Initial_manuscript.md +259 -0
  33. papers/CoRL/CoRL 2022/CoRL 2022 Conference/AdFROt9BoqE/Initial_manuscript_tex/Initial_manuscript.tex +177 -0
  34. papers/CoRL/CoRL 2022/CoRL 2022 Conference/Ag-vOezQ0Gw/Initial_manuscript_md/Initial_manuscript.md +301 -0
  35. papers/CoRL/CoRL 2022/CoRL 2022 Conference/Ag-vOezQ0Gw/Initial_manuscript_tex/Initial_manuscript.tex +195 -0
  36. papers/CoRL/CoRL 2022/CoRL 2022 Conference/AmPeAFzU3a4/Initial_manuscript_md/Initial_manuscript.md +321 -0
  37. papers/CoRL/CoRL 2022/CoRL 2022 Conference/AmPeAFzU3a4/Initial_manuscript_tex/Initial_manuscript.tex +248 -0
  38. papers/CoRL/CoRL 2022/CoRL 2022 Conference/Bf6on28H0Jv/Initial_manuscript_md/Initial_manuscript.md +295 -0
  39. papers/CoRL/CoRL 2022/CoRL 2022 Conference/Bf6on28H0Jv/Initial_manuscript_tex/Initial_manuscript.tex +183 -0
  40. papers/CoRL/CoRL 2022/CoRL 2022 Conference/BxHcg_Zlpxj/Initial_manuscript_md/Initial_manuscript.md +245 -0
  41. papers/CoRL/CoRL 2022/CoRL 2022 Conference/BxHcg_Zlpxj/Initial_manuscript_tex/Initial_manuscript.tex +154 -0
  42. papers/CoRL/CoRL 2022/CoRL 2022 Conference/Bxr45keYrf/Initial_manuscript_md/Initial_manuscript.md +235 -0
  43. papers/CoRL/CoRL 2022/CoRL 2022 Conference/Bxr45keYrf/Initial_manuscript_tex/Initial_manuscript.tex +165 -0
  44. papers/CoRL/CoRL 2022/CoRL 2022 Conference/CC4JMO4dzg/Initial_manuscript_md/Initial_manuscript.md +215 -0
  45. papers/CoRL/CoRL 2022/CoRL 2022 Conference/CC4JMO4dzg/Initial_manuscript_tex/Initial_manuscript.tex +177 -0
  46. papers/CoRL/CoRL 2022/CoRL 2022 Conference/DE8rdNuGj_7/Initial_manuscript_md/Initial_manuscript.md +217 -0
  47. papers/CoRL/CoRL 2022/CoRL 2022 Conference/DE8rdNuGj_7/Initial_manuscript_tex/Initial_manuscript.tex +180 -0
  48. papers/CoRL/CoRL 2022/CoRL 2022 Conference/DLkubm-dq-y/Initial_manuscript_md/Initial_manuscript.md +257 -0
  49. papers/CoRL/CoRL 2022/CoRL 2022 Conference/DLkubm-dq-y/Initial_manuscript_tex/Initial_manuscript.tex +193 -0
  50. papers/CoRL/CoRL 2022/CoRL 2022 Conference/ED0G14V3WeH/Initial_manuscript_md/Initial_manuscript.md +291 -0
papers/CoRL/CoRL 2022/CoRL 2022 Conference/4g3PwAp5nsX/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,245 @@
1
+ # SE(2)-Equivariant Pushing Dynamics Models for Tabletop Object Manipulations
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: For tabletop object manipulation tasks, learning an accurate pushing dynamics model, which predicts the objects' motions when a robot pushes an object, is very important. In this work, we claim that an ideal pushing dynamics model should have the $\mathrm{{SE}}\left( 2\right)$ -equivariance property, i.e., if the tabletop objects' poses and the pushing action are transformed by the same planar rigid-body transformation, then the resulting motion should also be the result of that transformation. Existing state-of-the-art data-driven approaches do not have this equivariance property, resulting in less-than-desirable learning performance. In this paper, we propose a new neural network architecture that by construction has the above equivariance property. Through extensive empirical validations, we show that the proposed model achieves significantly improved learning performance over the existing methods. We also verify that our pushing dynamics model can be used for various downstream pushing manipulation tasks such as object moving, singulation, and grasping in both simulation and real-robot experiments.
12
+
13
+ Keywords: Pushing dynamics learning, Pushing manipulation, Symmetry and Equivariance
14
+
15
+ ## 1 Introduction
16
+
17
+ Robotic visual pushing manipulation - by visual manipulation, we mean that only visual observations (e.g., from a depth camera) are available - in cluttered environments including unseen objects is an important yet challenging manipulation skill that allows a robot to interact with and change its environment to be suitable for performing downstream tasks. For example, pushing manipulation techniques have been used to make tabletop objects graspable $\left\lbrack {1,2,3,4}\right\rbrack$ , rearrange multiple objects for sorting $\left\lbrack {5,6,7,8}\right\rbrack$ , and find a target object occluded by other objects $\left\lbrack {9,{10}}\right\rbrack$ .
18
+
19
+ We consider model-based approaches to pushing manipulation that consist of the following two components: (i) constructing a pushing dynamics model, which predicts the motions of the objects after the robot applies a pushing action to the environment, and (ii) finding an optimal sequence of pushing actions that achieves the goal given predesigned task criteria [11, 12]. Our primary focus is the first component, i.e., developing an accurate visual pushing dynamics model that takes a visual observation as an input. Analytic approaches that precisely model the physical interactions [13, 14, 15, 16] cannot be used since we are given unseen objects with only vision data.
20
+
21
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_0_896_1617_583_224_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_0_896_1617_583_224_0.jpg)
22
+
23
+ Figure 1: The box object and pushing vector in Scene 1 are transformed by some same planar rigid-body transformation to those in Scene 2. An ideal pushing dynamics model should be $\mathrm{{SE}}\left( 2\right)$ - equivariant, i.e., the resulting motion in Scene 2 is a transformation of that in Scene 1.
24
+
25
+ Recently, there has been considerable interest in data-driven methods for learning pushing dynamics models [17, 18, 19, 20, 21, 22], but their generalization performance is still far from satisfying. We claim that one important reason is that the neural network models used in existing approaches fail to account for the symmetry of the physical system, and more precisely, for equivariance. For example, suppose a model is trained with an experience where a robot pushes a box object in the direction of the red arrow as shown in Figure 1 (Scene 1). Now consider a new situation where the same box object is located at a different pose and the robot pushes the object in the same relative direction as shown in Figure 1 (Scene 2). At an intuitive level, a good model should easily generalize to this type of new situation, where tabletop objects are only translated or rotated about the $z$ -axis. In more technical terms, the pushing dynamics model needs to be equivariant to $\mathrm{{SE}}\left( 2\right)$ transformations.
26
+
27
+ In this paper, we define the SE(2)-equivariant pushing dynamics model and deliberately design a neural network architecture that by construction has the equivariance property. The core idea to make the model equivariant is to properly transform the coordinates of the pushing action and the objects' poses as needed; details are elaborated in Section 2. This construction naturally captures the symmetry of the physical systems and significantly improves the generalization performances.
28
+
29
+ To employ the proposed equivariant pushing dynamics model in environments that have unseen objects, we need an additional module that can recognize the objects' shapes and poses. In this work, we represent 3-d objects' shapes by using the shape class called the superquadrics, which can express diverse shapes ranging from boxes, cylinders, ellipsoids to other complex symmetric shapes. We train the recognition network that predicts the objects' shapes with superquadrics by adopting an idea from [23]. We call our superquadric object representation-based pushing dynamics model a SuperQuadric Pushing Dynamics Network (SQPD-Net).
30
+
31
+ Experiments and benchmark comparisons against the existing state-of-the-art methods confirm that our model achieves the highest motion prediction accuracy. In addition, we validate the effectiveness of our model by using it for model-based optimal controls for various pushing manipulation tasks in both simulation and real-world experiments.
32
+
33
+ ## 2 SE(2)-Equivariant Pushing Dynamics Models
34
+
35
+ In this section, we develop a neural network architecture specialized to learn an SE(2)-equivariant pushing dynamics model. We assume that multiple rigid-body objects are placed on the table, whose surface is assumed to be flat and orthogonal to the gravity direction, and that the robot interacts with the objects through pushing manipulations. Each object is represented by a pose parameter $\mathbf{T} \in \mathrm{{SE}}\left( 3\right)$ ( $4 \times 4$ matrix representation) and a shape parameter $\mathbf{q}$ , where the pose parameter is described with respect to some global fixed frame and the shape parameter is a vector. The pushing action is defined as a tuple $\left( {\mathbf{p},\mathbf{v}}\right)$ in which the tip of the end-effector moves from the position $\mathbf{p} \in {\mathbb{R}}^{3}$ to $\mathbf{p} + \mathbf{v} \in {\mathbb{R}}^{3}$ . As the tip of the end-effector moves, the robot can make contact with the environment, push objects, and change the poses of the objects.
36
+
37
+ Further, we assume there are at most $M$ rigid-body objects on the table, with parameters ${\left\{ \left( {\mathbf{T}}_{i},{\mathbf{q}}_{i}\right) \right\} }_{i = 1}^{N}$ for $N \leq M$ . We consider a discrete-time pushing dynamics model $f$ that outputs the objects’ transformed poses ${\left\{ {\mathbf{T}}_{i}^{\prime }\right\} }_{i = 1}^{N}$ when a pushing action $\left( {\mathbf{p},\mathbf{v}}\right)$ is applied, i.e., ${\left\{ {\mathbf{T}}_{i}^{\prime }\right\} }_{i = 1}^{N} = f\left( {{\left\{ \left( {\mathbf{T}}_{i},{\mathbf{q}}_{i}\right) \right\} }_{i = 1}^{N},\left( {\mathbf{p},\mathbf{v}}\right) }\right)$ , where $N$ can vary as long as $N \leq M$ . Assuming the gravity direction is the $z$ -axis, we first give a precise definition of the $\mathrm{{SE}}\left( 2\right)$ -equivariant pushing dynamics model:
38
+
39
+ Definition 1 A pushing dynamics model $f$ is $\mathrm{{SE}}\left( 2\right)$ -equivariant if
40
+
41
+ $$
42
+ {\left\{ {\mathbf{{TT}}}_{i}^{\prime }\right\} }_{i = 1}^{N} = f\left( {{\left\{ \left( {\mathbf{{TT}}}_{i},{\mathbf{q}}_{i}\right) \right\} }_{i = 1}^{N},\left( {\mathbf{{Rp}} + \mathbf{t},\mathbf{{Rv}}}\right) }\right) \tag{1}
43
+ $$
44
+
45
+ for all object numbers $N \leq M$ and rigid-body transformations that have the following form
46
+
47
+ $$
48
+ \mathbf{T} = \left\lbrack \begin{matrix} \operatorname{Rot}\left( {\widehat{\mathbf{z}},\theta }\right) & {\mathbf{t}}_{\mathbf{{xy}}} \\ 0 & 1 \end{matrix}\right\rbrack \tag{2}
49
+ $$
50
+
51
+ where $\operatorname{Rot}\left( {\widehat{\mathbf{z}},\theta }\right)$ is a $3 \times 3$ rotation matrix for rotations around the $z$ -axis and ${\mathbf{t}}_{\mathbf{{xy}}} = \left( {{t}_{x},{t}_{y},0}\right) \in {\mathbb{R}}^{3}$ .
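To make Definition 1 concrete, a numerical check of property (1) could look like the following sketch; the dynamics function `f` and its call signature are hypothetical stand-ins introduced only for illustration.

```python
import numpy as np

def planar_transform(theta, tx, ty):
    """Rigid-body transform of the form in Eq. (2): rotation about z plus an xy-translation."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:2, 3] = tx, ty
    return T

def check_se2_equivariance(f, poses, shapes, p, v, theta=0.7, tx=0.1, ty=-0.2, tol=1e-4):
    """Numerically test Eq. (1) on one scene.

    `f` is assumed to map (list of 4x4 poses, list of shape vectors, (p, v)) to the
    list of next 4x4 poses -- an illustrative interface, not the paper's API.
    """
    T = planar_transform(theta, tx, ty)
    R, t = T[:3, :3], T[:3, 3]
    next_poses = f(poses, shapes, (p, v))
    next_poses_tf = f([T @ Ti for Ti in poses], shapes, (R @ p + t, R @ v))
    return all(np.allclose(T @ Ta, Tb, atol=tol) for Ta, Tb in zip(next_poses, next_poses_tf))
```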
52
+
53
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_2_315_211_1174_350_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_2_315_211_1174_350_0.jpg)
54
+
55
+ Figure 3: $\mathrm{{SE}}\left( 2\right)$ -equivariant pushing dynamics neural network architecture for an $i$ -th object, ${f}_{i}$ .
56
+
57
+ To build a $\mathrm{{SE}}\left( 2\right)$ -equivariant neural network architecture, we first introduce an object pose decomposition method that decomposes an object pose ${\mathbf{T}}_{i} \in \mathrm{{SE}}\left( 3\right)$ to a pose projected to the table surface denoted by ${\mathbf{C}}_{i} \in \mathrm{{SE}}\left( 3\right)$ and the relative rigid-body transformation ${\mathbf{U}}_{i} \in \mathrm{{SE}}\left( 3\right)$ such that ${\mathbf{T}}_{i} = {\mathbf{C}}_{i}{\mathbf{U}}_{i}.$
58
+
59
+ Object Pose Decomposition. Given an object pose $\mathbf{T} \in \mathrm{{SE}}\left( 3\right)$ , we decompose it into two $4 \times 4$ matrices $\mathbf{C},\mathbf{U} \in \mathrm{{SE}}\left( 3\right)$ as visualized in Figure 2. First, $\mathbf{C}$ is defined by projecting $\mathbf{T}$ onto the table surface, so that it has the form in equation (2). Second, $\mathbf{U}$ is defined as ${\mathbf{C}}^{-1}\mathbf{T}$ . More details are in the Appendix.
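Since the paper defers the exact projection to the Appendix, the sketch below shows one plausible instantiation, assuming the planar pose takes the yaw of the object's $x$-axis and the $xy$-part of its position.

```python
import numpy as np

def decompose_pose(T):
    """Split T in SE(3) into C (a planar pose of the form in Eq. (2)) and U = C^{-1} T.

    The projection used here (yaw of the body x-axis, xy-translation only) is an
    assumption consistent with Figure 2, not necessarily the paper's exact choice.
    """
    R, t = T[:3, :3], T[:3, 3]
    theta = np.arctan2(R[1, 0], R[0, 0])        # yaw of the body x-axis
    c, s = np.cos(theta), np.sin(theta)
    C = np.eye(4)
    C[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    C[:2, 3] = t[:2]                            # keep only the tabletop translation
    U = np.linalg.inv(C) @ T                    # relative transform, so that T = C U
    return C, U
```

By construction, $\mathbf{C}$ has the form of equation (2), and $\mathbf{U} = \mathbf{C}^{-1}\mathbf{T}$ carries the remaining out-of-plane part of the pose.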
60
+
61
+ Now, we explain our network architecture for the pushing dynamics model $f$ ; overall architecture is described in Figure 3. The model $f$ is defined as ${\left\{ {f}_{i}\right\} }_{i = 1}^{N}$ where each ${f}_{i}$ outputs the $i$ -th object’s transformed pose, i.e., ${\mathbf{T}}_{i}^{\prime } =$ ${f}_{i}\left( {{\left\{ \left( {\mathbf{T}}_{i},{\mathbf{q}}_{i}\right) \right\} }_{i = 1}^{N},\left( {\mathbf{p},\mathbf{v}}\right) }\right)$ . For ${f}_{i}$ , we first decompose the $i$ -th object pose ${\mathbf{T}}_{i} = {\mathbf{C}}_{i}{\mathbf{U}}_{i}$ and transform the other objects' poses (including itself) and pushing action as follows: (i) ${\mathbf{T}}_{j} \mapsto {\mathbf{C}}_{i}^{-1}{\mathbf{T}}_{j}$ for $j = 1,\cdots , N$ and (ii) $\left( {\mathbf{p},\mathbf{v}}\right) \mapsto {\mathbf{C}}_{i}^{-1}\left( {\mathbf{p},\mathbf{v}}\right) \mathrel{\text{:=}} \left( {{\mathbf{R}}_{i}^{T}\mathbf{p} - {\mathbf{R}}_{i}^{T}{\mathbf{t}}_{i},{\mathbf{R}}_{i}^{T}\mathbf{v}}\right)$ where ${\mathbf{R}}_{i}$ and ${\mathbf{t}}_{i}$ are rotation matrix and translation vector parts of ${\mathbf{C}}_{i}$ . Then, three different multilayer perceptron (MLP) networks are used to extract $\mathrm{{SE}}\left( 2\right)$ -invariant feature vectors: (i) the ${\mathrm{{MLP}}}_{1}$ takes the transformed action and outputs a feature vector ${\mathbf{a}}_{i}$ ,(ii) the ${\mathrm{{MLP}}}_{2}$ takes the $i$ -th object’s parameter $\left( {{\mathbf{U}}_{i},{\mathbf{q}}_{i}}\right)$ and outputs a feature vector ${\mathbf{b}}_{i}$ , and (iii) the ${\mathrm{{MLP}}}_{3}$ takes the transformed object’s parameters $\left( {{\mathbf{C}}_{i}^{-1}{\mathbf{T}}_{j},{\mathbf{q}}_{j}}\right)$ and outputs a feature vector ${\mathbf{c}}_{i}^{j}$ for all $j = 1,\cdots , N$ and then these output vectors pass through some permutation invariant function $h$ as ${\mathbf{c}}_{i} = h\left( {{\mathbf{c}}_{i}^{1},\cdots ,{\mathbf{c}}_{i}^{N}}\right)$ such as the element-wise max pooling. These feature vectors are concatenated as ${\mathbf{y}}_{i} = \left( {{\mathbf{a}}_{\mathbf{i}},{\mathbf{b}}_{\mathbf{i}},{\mathbf{c}}_{\mathbf{i}}}\right)$ , and we have the last MLP layer that takes ${\mathbf{y}}_{i}$ and outputs $\delta {\mathbf{T}}_{i} \in \mathrm{{SE}}\left( 3\right)$ . We note that these MLP layers are shared across all $i = 1,\cdots , N$ . Then, the transformed poses are defined as ${\mathbf{T}}_{i}^{\prime } = {\mathbf{T}}_{i}\delta {\mathbf{T}}_{i}$ for all $i = 1,\cdots , N$ . As a result, this dynamics model is $\mathrm{{SE}}\left( 2\right)$ -equivariant by construction; the proof is in Appendix.
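As an illustration of this forward pass, the sketch below mirrors the three MLP branches and the permutation-invariant pooling; the feature widths, the flattened 12-dimensional pose encoding, and the final step of mapping the raw output back onto SE(3) are assumptions made for this sketch, not the paper's exact design.

```python
import torch
import torch.nn as nn

class PushDynamicsFi(nn.Module):
    """Sketch of f_i; all inputs are assumed to already be expressed in the i-th
    object's projected frame C_i, as described in the text."""

    def __init__(self, q_dim=5, feat=128):
        super().__init__()
        pose_dim = 12  # flattened 3x3 rotation (9) + translation (3)
        self.mlp_action = nn.Sequential(nn.Linear(6, feat), nn.ReLU(), nn.Linear(feat, feat))
        self.mlp_self = nn.Sequential(nn.Linear(pose_dim + q_dim, feat), nn.ReLU(), nn.Linear(feat, feat))
        self.mlp_others = nn.Sequential(nn.Linear(pose_dim + q_dim, feat), nn.ReLU(), nn.Linear(feat, feat))
        self.head = nn.Sequential(nn.Linear(3 * feat, feat), nn.ReLU(), nn.Linear(feat, pose_dim))

    def forward(self, action_i, U_i, q_i, poses_in_Ci, q_all):
        # action_i: (6,) transformed (p, v); U_i: (12,) flattened U_i; q_i: (q_dim,)
        # poses_in_Ci: (N, 12) all object poses expressed in C_i; q_all: (N, q_dim)
        a = self.mlp_action(action_i)
        b = self.mlp_self(torch.cat([U_i, q_i]))
        c = self.mlp_others(torch.cat([poses_in_Ci, q_all], dim=-1)).max(dim=0).values
        y = torch.cat([a, b, c])
        # raw 12-dim output; mapping it back onto a valid delta-pose in SE(3)
        # (e.g., orthonormalizing the rotation part) is an assumed post-processing step
        return self.head(y)
```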
62
+
63
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_2_896_981_589_299_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_2_896_981_589_299_0.jpg)
64
+
65
+ Figure 2: Object Pose Decomposition.
66
+
67
+ Training. Denote by $\mathbf{s} = \left( {\left\{ \left( {\mathbf{T}}_{i},{\mathbf{q}}_{i}\right) \right\} }_{i = 1}^{N}\right)$ for some $N \leq M$ and $\mathbf{a} = \left( {\mathbf{p},\mathbf{v}}\right)$ . In this paper, we train the pushing dynamics model given a set of 3-tuples ${\left\{ {\left( \mathbf{s},\mathbf{a},{\left\{ {\mathbf{T}}_{i}^{\prime }\right\} }_{i = 1}^{N}\right) }_{k}\right\} }_{k = 1}^{K}$ where ${\mathbf{T}}_{i}^{\prime }$ is the next pose of the $i$ -th object. The loss function $\mathcal{L}$ is defined by comparing the ground-truth next poses ${\left\{ {\mathbf{T}}_{i}^{\prime }\right\} }_{i = 1}^{N}$ and the predicted poses ${\left\{ {\widehat{\mathbf{T}}}_{i}^{\prime }\right\} }_{i = 1}^{N} = f\left( {\mathbf{s},\mathbf{a}}\right)$ as follows:
68
+
69
+ $$
70
+ \mathcal{L}\left( f\right) = \mathop{\sum }\limits_{{k = 1}}^{K}\mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\begin{Vmatrix}{\mathbf{t}}_{i} - {\widehat{\mathbf{t}}}_{i}\end{Vmatrix}}_{2}^{2} + \alpha {\begin{Vmatrix}{\mathbf{I}}_{3} - {\mathbf{R}}_{i}^{-1}{\widehat{\mathbf{R}}}_{i}\end{Vmatrix}}_{F}^{2}}\right) , \tag{3}
71
+ $$
72
+
73
+ where ${\mathbf{R}}_{i},{\widehat{\mathbf{R}}}_{i}$ and ${\mathbf{t}}_{i},{\widehat{\mathbf{t}}}_{i}$ are the rotation matrix and translation vector parts of ${\mathbf{T}}_{i}$ and ${\widehat{\mathbf{T}}}_{i}$ , respectively, and $\alpha$ is a weighting parameter (for our later experiments we set $\alpha$ to 0.1).
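A direct implementation of this loss is straightforward; the sketch below assumes batched rotation and translation tensors and uses $\mathbf{R}^{T}$ in place of $\mathbf{R}^{-1}$, which is equivalent for rotation matrices.

```python
import torch

def pushing_loss(t_true, R_true, t_pred, R_pred, alpha=0.1):
    """Eq. (3) for one batch: t_* are (..., 3) translations, R_* are (..., 3, 3) rotations."""
    trans_err = ((t_true - t_pred) ** 2).sum(dim=-1)
    eye = torch.eye(3, dtype=R_true.dtype, device=R_true.device)
    rot_err = ((eye - R_true.transpose(-1, -2) @ R_pred) ** 2).sum(dim=(-1, -2))
    return (trans_err + alpha * rot_err).sum()
```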
74
+
75
+ ## 3 Object Recognition-based Pushing Manipulations
76
+
77
+ If we have known objects and can easily estimate their poses, then it is straightforward to use the learned pushing dynamics model for pushing manipulations. However, for unseen objects, we first need to recognize the objects' shapes and poses. Therefore, our overall framework consists of the following two steps: (i) recognizing the objects' shapes and poses and (ii) pushing the objects by using the learned pushing dynamics model and pre-designed task criteria; details are explained in the following subsections.
78
+
79
+ ### 3.1 Object Shape and Pose Recognition via Superquadrics
80
+
81
+ We propose to use implicit functions to represent 3-d objects' shapes. In general, an implicit object surface representation is defined by a level set of a function $S\left( {x, y, z;\mathbf{q},\mathbf{T}}\right) = 0$ , where $\mathbf{q} \in \mathcal{Q}$ is a shape parameter and $\mathbf{T} \in \mathrm{{SE}}\left( 3\right)$ is a pose parameter. In our framework, any implicit function approximation model $S\left( {x, y, z;\mathbf{q},\mathbf{T}}\right)$ can be used.
82
+
83
+ In this work, we employ the shape class called the superquadrics, a family of geometric shapes that resemble ellipsoids and other quadrics, which can be used to represent diverse shapes ranging from boxes, cylinders, and ellipsoids to bi-cones, octahedra, and other complex symmetric shapes. The implicit equation for a superquadric surface at $\mathbf{T} = {\mathbf{I}}_{4}$ has the following form:
84
+
85
+ $$
86
+ S\left( {x, y, z;\mathbf{q},{\mathbf{I}}_{4}}\right) = {\left( {\left| \frac{x}{{a}_{1}}\right| }^{\frac{2}{{e}_{2}}} + {\left| \frac{y}{{a}_{2}}\right| }^{\frac{2}{{e}_{2}}}\right) }^{\frac{{e}_{2}}{{e}_{1}}} + {\left| \frac{z}{{a}_{3}}\right| }^{\frac{2}{{e}_{1}}} - 1 = 0, \tag{4}
87
+ $$
88
+
89
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_3_901_760_578_239_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_3_901_760_578_239_0.jpg)
90
+
91
+ Figure 4: Examples of superquadrics.
92
+
93
+ where $\mathbf{q} = \left( {{a}_{1},{a}_{2},{a}_{3},{e}_{1},{e}_{2}}\right) \in {\mathbb{R}}^{5}$ is the shape parameter. In particular, ${a}_{1},{a}_{2},{a}_{3}$ control the sizes and ${e}_{1},{e}_{2}$ control the geometric shape. Some examples are shown in Figure 4. At $\mathbf{T} \neq {\mathbf{I}}_{4}$ , the equation $S\left( {x, y, z;\mathbf{q},\mathbf{T}}\right)$ is obtained by the passive coordinate transformation of $(x, y, z)$ by $\mathbf{T}$ ; see the Appendix for details.
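For reference, the superquadric function in equation (4), including the passive coordinate transformation for $\mathbf{T} \neq \mathbf{I}_4$, can be evaluated as in the sketch below; the point-array layout is an illustrative choice.

```python
import numpy as np

def superquadric_value(points, q, T=np.eye(4)):
    """Evaluate S(x, y, z; q, T) from Eq. (4); q = (a1, a2, a3, e1, e2), points is (N, 3)."""
    a1, a2, a3, e1, e2 = q
    R, t = T[:3, :3], T[:3, 3]
    # passive transform: express the query points in the superquadric's body frame
    x, y, z = R.T @ (np.asarray(points, dtype=float) - t).T
    f = (np.abs(x / a1) ** (2 / e2) + np.abs(y / a2) ** (2 / e2)) ** (e2 / e1) \
        + np.abs(z / a3) ** (2 / e1)
    return f - 1.0  # = 0 on the surface, < 0 inside, > 0 outside

# example: q = (0.03, 0.03, 0.05, 1.0, 1.0) is an ellipsoid with semi-axes 3, 3, and 5 cm
```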
94
+
95
+ The object recognition problem that we address in this paper can then be posed as follows: given a visual input obtained from a depth camera, which typically contains only partial views of the objects, we need to predict the superquadric parameters $(\mathbf{q},\mathbf{T})$ for each object. The predicted object represented by $(\mathbf{q},\mathbf{T})$ should fit the full object, although only a partial view of the object is given as an input. This problem has recently been tackled by [23], where two neural network models that take point cloud data as inputs are employed: (i) an object segmentation network [24] and (ii) an object full shape and pose recognition network [23]. We include details about the network architectures and training methods of these networks in the Appendix.
96
+
97
+ We call our SE(2)-equivariant pushing dynamics model that uses the superquadric representation a SuperQuadric Pushing Dynamics Network (SQPD-Net).
98
+
99
+ ### 3.2 Model-based Pushing Manipulations
100
+
101
+ Given a visual observation of tabletop objects as a point cloud which we denote by $\mathbf{o}$ , our goal is to find a sequence of robot actions $\left( {{\mathbf{a}}_{1},{\mathbf{a}}_{2},\cdots ,{\mathbf{a}}_{T}}\right)$ that changes the environment for some given task. In this section, we assume that we are given (i) a recognition module $R$ that outputs the objects’ poses and shapes, i.e. $R\left( {\mathbf{o}}_{t}\right) = {\mathbf{s}}_{t}$ (throughout, we denote by ${\mathbf{s}}_{t} = {\left\{ \left( {\mathbf{T}}_{t, i},{\mathbf{q}}_{t, i}\right) \right\} }_{i = 1}^{N}$ ), and (ii) a pushing dynamics model ${\mathbf{s}}_{t + 1} = f\left( {{\mathbf{s}}_{t},{\mathbf{a}}_{t}}\right)$ . Given a task-specific objective function $\mathcal{J}$ , we solve the following optimal control problem:
102
+
103
+ $$
104
+ \mathop{\min }\limits_{{{\mathbf{a}}_{1},\cdots ,{\mathbf{a}}_{\mathbf{T}}}}\mathcal{J}\left( {{\mathbf{o}}_{1},{\mathbf{a}}_{1},\cdots ,{\mathbf{a}}_{T}}\right) = \mathop{\sum }\limits_{{t = 1}}^{T}r\left( {{\mathbf{s}}_{t},{\mathbf{a}}_{t}}\right) + q\left( {\mathbf{s}}_{T + 1}\right) \text{ s.t. }{\mathbf{s}}_{1} = R\left( {\mathbf{o}}_{1}\right) ,{\mathbf{s}}_{t + 1} = f\left( {{\mathbf{s}}_{t},{\mathbf{a}}_{t}}\right) . \tag{5}
105
+ $$
106
+
107
+ For the tasks we focus on in this paper, we set $r\left( {{\mathbf{s}}_{t},{\mathbf{a}}_{t}}\right) = 0$ and only use a terminal cost function $q\left( {\mathbf{s}}_{T + 1}\right)$ . We use sampling-based MPC [12] (implementation details are in the Appendix). Below, we introduce three terminal cost functions for the following pushing manipulation tasks: (i) moving, (ii) singulation, and (iii) grasping. We denote the translation vector and rotation matrix parts of $\mathbf{T}$ by $\mathbf{t}$ and $\mathbf{R}$ , respectively.
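A minimal random-shooting instance of this sampling-based MPC (with $r = 0$ and a terminal cost only) is sketched below; the callables `recognize`, `dynamics`, `terminal_cost`, and `sample_action` stand in for $R$, $f$, $q$, and an action sampler, and the horizon and sample counts are arbitrary illustrative values since the actual implementation details are in the Appendix.

```python
import numpy as np

def random_shooting_mpc(recognize, dynamics, terminal_cost, sample_action, obs,
                        horizon=3, num_samples=256, rng=None):
    """Minimal random-shooting MPC for Eq. (5) with r = 0 and a terminal cost only."""
    rng = rng or np.random.default_rng()
    state = recognize(obs)                        # s_1 = R(o_1)
    best_cost, best_plan = np.inf, None
    for _ in range(num_samples):
        s = state
        plan = [sample_action(s, rng) for _ in range(horizon)]
        for a in plan:                            # roll the learned dynamics forward
            s = dynamics(s, a)
        cost = terminal_cost(s)                   # q(s_{T+1})
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_plan[0]                           # receding horizon: execute only the first action
```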
108
+
109
+ Moving is a task to move objects to their desired poses. Given desired poses ${\left\{ {\mathbf{T}}_{d, i}\right\} }_{i = 1}^{N}$ , we define the terminal cost function as
110
+
111
+ $$
112
+ q\left( {\mathbf{s}}_{T + 1}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}\left( {{\begin{Vmatrix}{\mathbf{t}}_{T + 1, i} - {\mathbf{t}}_{d, i}\end{Vmatrix}}_{2}^{2} + \beta {\begin{Vmatrix}{\mathbf{I}}_{3} - {\mathbf{R}}_{d, i}^{-1}{\mathbf{R}}_{T + 1, i}\end{Vmatrix}}_{F}^{2}}\right) . \tag{6}
113
+ $$
114
+
115
+ Singulation is a task to separate objects by more than a certain distance $\tau$ . We define a terminal cost function as
116
+
117
+ $$
118
+ q\left( {\mathbf{s}}_{T + 1}\right) = - \mathop{\min }\limits_{\left\{ \left( i, j\right) \in \{ 1,\cdots , N\} \mid i > j\right\} }\left( {\min \left( {\begin{Vmatrix}{{\mathbf{t}}_{T + 1, i} - {\mathbf{t}}_{T + 1, j}}\end{Vmatrix} - \tau ,0}\right) }\right) . \tag{7}
119
+ $$
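Both terminal costs translate directly into code. The sketch below assumes poses are given as $4 \times 4$ matrices; the default $\tau$ of 0.2 m matches the value used later in the experiments, and $\beta = 0$ reproduces the position-only moving cost used there.

```python
import numpy as np

def moving_cost(poses, desired, beta=0.0):
    """Eq. (6): position error plus an optional rotation term."""
    cost = 0.0
    for T, Td in zip(poses, desired):
        cost += np.sum((T[:3, 3] - Td[:3, 3]) ** 2)
        cost += beta * np.sum((np.eye(3) - Td[:3, :3].T @ T[:3, :3]) ** 2)
    return cost

def singulation_cost(poses, tau=0.20):
    """Eq. (7): negative of the most-violated pairwise separation margin."""
    centers = [T[:3, 3] for T in poses]
    margins = [min(np.linalg.norm(ci - cj) - tau, 0.0)
               for k, ci in enumerate(centers) for cj in centers[:k]]
    return -min(margins) if margins else 0.0
```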
120
+
121
+ Grasping is a task to make a target object graspable. Given a target object index $i$ , we generate candidate grasp poses for the recognized target object as shown in Figure 5 and check collisions with the environment and the other recognized objects; green grasp poses are collision-free and red poses are not. The terminal cost $q\left( {\mathbf{s}}_{T + 1}\right)$ is defined to be 0 if at least one collision-free grasp pose exists and 1 otherwise. Further details are provided in Appendix.
122
+
123
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_4_904_772_578_301_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_4_904_772_578_301_0.jpg)
124
+
125
+ Figure 5: Sampling-based grasping criteria.
126
+
127
+ ## 4 Experiments
128
+
129
+ In this section, we empirically show that (i) our proposed pushing dynamics model, the SQPD-Net, outperforms the existing state-of-the-art data-driven pushing dynamics models, and (ii) our SQPD-Net can be used for various downstream pushing manipulation tasks, e.g., object moving, singulation, and grasping.
130
+
131
+ Environment. We use the 7-dof Franka Emika Panda robot with a parallel-jaw gripper and an Azure Kinect DK camera sensor mounted on the gripper. The raw input visual observation is a depth image, which is then pre-processed to other 3-d representations (e.g., point cloud) as needed.
132
+
133
+ Pushing Manipulation Dataset. To train pushing dynamics models, we generate a pushing manipulation dataset in simulation (PyBullet). Throughout our experiments, we use cylinder- and cube-shaped objects of various sizes, and one scene contains fewer than four objects. To execute an action $\left( {\mathbf{p},\mathbf{v}}\right) \in {\mathbb{R}}^{6}$ , (i) the robot first moves so that the gripper’s tip is placed at $\mathbf{p}$ and its orientation is set as visualized in Figure 6, and then (ii) the robot moves so that the gripper’s tip reaches $\mathbf{p} + \mathbf{v}$ with fixed orientation. We generate the pushing manipulation dataset as follows: (i) we place random objects at random poses in the workspace, (ii) we sample an action $\left( {\mathbf{p},\mathbf{v}}\right) \in {\mathbb{R}}^{6}$ , where $\mathbf{p}$ is sampled near one randomly selected object and $\mathbf{v}$ points toward the center of that object, and (iii) we execute the pushing action on the robot. In this process, we note that parts of the gripper other than the tip can also make contact with the environment. More details are included in the Appendix.
134
+
135
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_4_1090_1516_387_297_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_4_1090_1516_387_297_0.jpg)
136
+
137
+ Figure 6: Execution of a pushing action.
138
+
139
+ Baseline Methods. We compare our SQPD-Net with the following baseline methods: 2DFlow and SE3-Net adopted from [17], SE3Pose-Net adopted from [18], and 3DFlow and DSR-Net adopted from [20]. 2DFlow, SE3-Net, and SE3Pose-Net take an organized point cloud as a visual input and predict the flow vectors of the points. 3DFlow and DSR-Net take a voxelized truncated signed distance field (TSDF) as a visual input and predict the voxel flow. Our SQPD-Net takes the estimated objects' poses and superquadric shape parameters as input and predicts the objects' next poses. While, in the existing approaches, the models directly predict motions from the pre-processed raw visual observations, our model consists of two modules: (i) a pre-trained recognition network that predicts objects' poses and shape parameters (training details are in the Appendix) and (ii) the SQPD-Net that predicts the objects’ next poses. We denote these two networks together by R-SQPD-Net. For comparison, we also test the case where the ground-truth objects' poses and shape parameters are used as the input to the SQPD-Net, and denote it by GT-SQPD-Net.
140
+
141
+ Evaluation Metrics. Throughout, we use two types of the evaluation metrics for the learned pushing dynamics models: (i) flow error (the lower the better) and (ii) mask intersection over union (mask IoU, the higher the better). First of all, we consider the visible and full flow error. The visible flow error is the root mean squared error (RMSE) between the ground-truth flows and predicted flows of the points on the visible surface of the objects, while the full flow error is the RMSE computed with all points from the objects' surfaces. Second, we consider the 2D and 3D mask IoUs. The 2D mask IoU is computed by using the depth images and thus only visible surfaces are taken into consideration. On the other hand, the 3D mask IoU is computed with the complete 3D occupancy grid. The full flow error and 3D mask IoU cannot be computed in 2DFlow, SE3-Net, and SE3Pose-Net, because they do not estimate the complete objects' shapes as an intermediate step of the pushing dynamics prediction. Details for the metric computations for each method are in Appendix.
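As a simplified illustration of the two metric families, the flow RMSE for a single rigid object and a 3D mask IoU between occupancy grids could be computed as below; the per-method details (e.g., how flows are extracted from each model's output) differ and are described in the Appendix, so this is only a sketch under the stated assumptions.

```python
import numpy as np

def flow_rmse(points, T_rel_true, T_rel_pred):
    """RMSE between ground-truth and predicted point flows for one rigid object.

    `points` (N, 3) are surface points before the push; T_rel_* are the relative
    rigid motions of the object (a simplification made for this sketch).
    """
    flow_true = (T_rel_true[:3, :3] @ points.T).T + T_rel_true[:3, 3] - points
    flow_pred = (T_rel_pred[:3, :3] @ points.T).T + T_rel_pred[:3, 3] - points
    return np.sqrt(np.mean(np.sum((flow_true - flow_pred) ** 2, axis=1)))

def mask_iou(occ_true, occ_pred):
    """Mask IoU between two boolean occupancy grids of the same shape (2D or 3D)."""
    inter = np.logical_and(occ_true, occ_pred).sum()
    union = np.logical_or(occ_true, occ_pred).sum()
    return inter / union if union > 0 else 1.0
```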
142
+
143
+ ### 4.1 Pushing Dynamics Learning
144
+
145
+ In this section, we first empirically verify the equivariance property of our method and show the performance advantages of ours over the existing methods.
146
+
147
+ Equivariance Study. To test the equivariance of the models, we design the following experiment. First, we train the models with only one pushing manipulation datum, i.e., a 3-tuple (past observation, pushing action, next observation), so that the models overfit the training data. Then, we compare the models' generalization capabilities on test data generated by applying random $\mathrm{{SE}}\left( 2\right)$ -transformations to the training data. An ideal equivariant model should produce almost zero error on the test data.
148
+
149
+ <table><tr><td>METHOD</td><td>visible flow (↓)</td></tr><tr><td>2DFlow [17]</td><td>4.73</td></tr><tr><td>SE3-Net [17]</td><td>4.73</td></tr><tr><td>SE3Pose-Net [18]</td><td>4.72</td></tr><tr><td>R-SQPD-Net (ours)</td><td>0.73</td></tr><tr><td>GT-SQPD-Net (ours)</td><td>0.02</td></tr></table>
150
+
151
+ Table 1: Test visible flow error (cm).
152
+
153
+ Table 1 shows the average visible flow errors of the baseline methods and SQPD-Nets, obtained by running the above experiment multiple times with different training data (details are in the Appendix). 3DFlow and DSR-Net are omitted in this experiment because they cannot make estimations if the transformed actions do not belong to the pre-defined discrete set of actions. The GT-SQPD-Net produces almost zero error as expected, while the R-SQPD-Net produces a small error originating from the recognition error. Our SQPD-Nets are much closer to being $\mathrm{{SE}}\left( 2\right)$ -equivariant than the existing works. Figure 7 shows an example prediction result from SE3Pose-Net and R-SQPD-Net; the blue bounding box represents the ground-truth next pose of the object. For the test data, SE3Pose-Net predicts a completely wrong motion.
154
+
155
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_5_898_1599_586_319_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_5_898_1599_586_319_0.jpg)
156
+
157
+ Figure 7: Depth images of prediction results. For SE3Pose-Net, after the point cloud moves, the space occupied before is colored black.
158
+
159
+ <table><tr><td rowspan="3">METHOD</td><td colspan="4">Known</td><td colspan="4">Unknown</td></tr><tr><td colspan="2">Flow error $\left( \downarrow \right)$</td><td colspan="2">Mask IoU (↑)</td><td colspan="2">Flow error $\left( \downarrow \right)$</td><td colspan="2">Mask IoU (↑)</td></tr><tr><td>visible</td><td>full</td><td>2D</td><td>3D</td><td>visible</td><td>full</td><td>2D</td><td>3D</td></tr><tr><td>2DFlow [17]</td><td>3.081</td><td>-</td><td>-</td><td>-</td><td>2.763</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SE3-Net [17]</td><td>1.910</td><td>-</td><td>-</td><td>-</td><td>1.925</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SE3Pose-Net [18]</td><td>1.829</td><td>-</td><td>-</td><td>-</td><td>1.794</td><td>-</td><td>-</td><td>-</td></tr><tr><td>3DFlow [20]</td><td>1.702</td><td>1.699</td><td>0.752</td><td>0.703</td><td>1.661</td><td>1.672</td><td>0.754</td><td>0.704</td></tr><tr><td>DSR-Net [20]</td><td>1.536</td><td>1.540</td><td>0.731</td><td>0.703</td><td>1.584</td><td>1.577</td><td>0.687</td><td>0.658</td></tr><tr><td>R-SQPD-Net (ours)</td><td>0.939</td><td>0.917</td><td>0.808</td><td>0.764</td><td>0.988</td><td>0.967</td><td>0.804</td><td>0.761</td></tr><tr><td>GT-SQPD-Net (ours)</td><td>0.883</td><td>0.599</td><td>0.859</td><td>0.841</td><td>0.936</td><td>0.658</td><td>0.850</td><td>0.830</td></tr></table>
160
+
161
+ Table 2: Evaluation metrics computed with test data (the unit of flow error is cm).
162
+
163
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_6_311_648_1174_278_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_6_311_648_1174_278_0.jpg)
164
+
165
+ Figure 8: Depth images and 3D masks of the ground-truth next scene and predicted scenes. Upper: Depth images where the blue bounding boxes represent the ground-truth next poses of the green and gray objects. Lower: (i) (incomplete) 3D masks converted from the depth images for 2DFlow, SE3-Net, and SE3Pose-Net and (ii) predicted complete 3D masks for 3DFlow, DSR-Net, and R-SQPD-Net.
166
+
167
+ Pushing Dynamics Learning. We compare the learning performances of the SQPD-Nets and the baseline methods on a large-scale pushing manipulation dataset where the training/validation/test sets consist of 12,000, 1,200, and 1,200 3-tuples, respectively. Table 2 reports the evaluation metrics computed on the test data and the models' predictions, and shows that our SQPD-Nets outperform the other baseline methods by significant margins. Figure 8 shows predicted depth images and 3D masks for an example test datum. As shown in Figure 8 (left), two objects (green and gray) are in contact, and pushing the gray object moves both of them. For this example, only our R-SQPD-Net successfully predicts the complex contact motions of the moving objects. Further experimental results with more example figures are provided in the Appendix.
168
+
169
+ ### 4.2 Pushing Manipulation using R-SQPD-Net
170
+
171
+ In this section, we use the R-SQPD-Net trained in Section 4.1 to conduct the pushing manipulation tasks introduced in Section 3.2 (moving, singulation, and grasping) in both simulation and the real world. For the real-world experimental setup, we use various box- or cylinder-like objects as shown in Figure 9; the same objects are used in the simulation experiments. Since we directly apply the R-SQPD-Net trained in simulation to the real physical environment, it is reasonable to ask about the sim-to-real transfer issue. In our experiments, we use slow pushing motions to generate quasi-static movements of the objects and thus minimize the sim-to-real gap (for quasi-static object movements, the dynamical properties of the objects and environment, e.g., mass and friction coefficients, have less effect [25]). Figure 10 shows some real-world manipulation results for the various tasks. For the moving task (first row), we set the desired positions ${\mathbf{t}}_{d, i}$ to $\left( {{0.3},{\mathbf{t}}_{0, i, y},{\mathbf{t}}_{0, i, z}}\right)$ and $\beta = 0$ in equation (6). For the singulation task (second row), we set $\tau = {20}\left( \mathrm{\;{cm}}\right)$ in equation (7). For the grasping tasks (third row), we sample about 15 to 30 candidate grasp poses for the recognized target objects. For all three examples, our approach finds a series of pushing actions that successfully performs the desired tasks. Notably, for the grasping tasks, without using ad hoc objective functions, the robot realizes how to re-configure the objects so that feasible grasp poses can be found for the target objects: (i) (grasping, upper) the robot pushes the large, flat object to the edge of the table and (ii) (grasping, lower) the robot pushes the surrounding objects away to make the isolated target object graspable.
172
+
173
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_6_896_1645_586_303_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_6_896_1645_586_303_0.jpg)
174
+
175
+ Figure 9: Real-world experimental setting.
176
+
177
+ ![01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_7_311_203_1171_471_0.jpg](images/01963fb2-06ee-7952-8d73-bdfd6eb0eaf0_7_311_203_1171_471_0.jpg)
178
+
179
+ Figure 10: Real-world manipulation results using R-SQPD-Net for moving, singulation, and grasping tasks (for the fourth row case, the target object is the cylinder surrounded by the three cubes). The red arrow at each recognition step means the optimal pushing action.
180
+
181
+ Failure Cases. Table 3 shows the manipulation success rates in the simulation and real-world experiments. We design 10 test scenarios for each task, whose object configurations are given in the Appendix. A few failure cases occur; the underlying reasons we observe can be roughly categorized as (i) a failure of shape recognition (simulation and real) and (ii) the sim-to-real transfer issue (real). Details can be found in the Appendix.
182
+
183
+ <table><tr><td>TASK</td><td>Simulation</td><td>Real</td></tr><tr><td>Moving</td><td>9/10</td><td>8/10</td></tr><tr><td>Singulation</td><td>9/10</td><td>8/10</td></tr><tr><td>Grasping clutter</td><td>4/5</td><td>4/5</td></tr><tr><td>Grasping large</td><td>4/5</td><td>3/5</td></tr></table>
184
+
185
+ Table 3: Simulation and real-world manipulation results.
186
+
187
+ ## 5 Conclusion
188
+
189
+ This paper has proposed a SE(2)-equivariant pushing dynamics model. Using the superquadric representations of object shapes, we have proposed a SuperQuadric Pushing Dynamics Network (SQPD-Net). Through extensive empirical validations, we confirm that the SQPD-Net significantly outperforms the existing state-of-the-art visual pushing dynamics models. Moreover, we have verified that the SQPD-Net can be used for various pushing manipulation tasks, even in the real environment with little sim-to-real gap.
190
+
191
+ Future directions. First, our current implementation only uses a depth image to estimate the objects' shapes and poses; the color image could be used to enhance the recognition performance as in [26]. Second, our current implementation recognizes the objects' poses and shapes from only a single observation; historical observations could be used together for more accurate object recognition and motion prediction. Lastly, our model is a visual dynamics model that only takes a visual observation of the objects and scenes; to learn a more accurate pushing dynamics model, taking the inertial aspects (e.g., mass, inertia) of the objects and environment into consideration in the neural network design will be important.
192
+
193
+ References
194
+
195
+ [1] A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4238-4245. IEEE, 2018.
196
+
197
+ [2] M. Danielczuk, J. Mahler, C. Correa, and K. Goldberg. Linear push policies to increase grasp access for robot bin picking. In 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), pages 1249-1256. IEEE, 2018.
198
+
199
+ [3] K. Xu, H. Yu, Q. Lai, Y. Wang, and R. Xiong. Efficient learning of goal-oriented push-grasping synergy in clutter. IEEE Robotics and Automation Letters, 6(4):6337-6344, 2021.
200
+
201
+ [4] M. Kiatos and S. Malassiotis. Robust object grasping in clutter via singulation. In 2019 International Conference on Robotics and Automation (ICRA), pages 1596-1600. IEEE, 2019.
202
+
203
+ [5] W. Yuan, K. Hang, D. Kragic, M. Y. Wang, and J. A. Stork. End-to-end nonprehensile rearrangement with deep reinforcement learning and simulation-to-reality transfer. Robotics and Autonomous Systems, 119:119-134, 2019.
204
+
205
+ [6] A. H. Qureshi, A. Mousavian, C. Paxton, M. C. Yip, and D. Fox. Nerp: Neural rearrangement planning for unknown objects. arXiv preprint arXiv:2106.01352, 2021.
206
+
207
+ [7] E. Huang, Z. Jia, and M. T. Mason. Large-scale multi-object rearrangement. In 2019 International Conference on Robotics and Automation (ICRA), pages 211-218. IEEE, 2019.
208
+
209
+ [8] H. Song, J. A. Haustein, W. Yuan, K. Hang, M. Y. Wang, D. Kragic, and J. A. Stork. Multi-object rearrangement with monte carlo tree search: A case study on planar nonprehensile sorting. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9433-9440. IEEE, 2020.
210
+
211
+ [9] Y. Yang, H. Liang, and C. Choi. A deep learning approach to grasping the invisible. IEEE Robotics and Automation Letters, 5(2):2232-2239, 2020.
212
+
213
+ [10] M. Danielczuk, A. Angelova, V. Vanhoucke, and K. Goldberg. X-ray: Mechanical search for an occluded object by minimizing support of learned occupancy distributions. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9577-9584. IEEE, 2020.
214
+
215
+ [11] S. M. LaValle and J. J. Kuffner Jr. Randomized kinodynamic planning. The international journal of robotics research, 20(5):378-400, 2001.
216
+
217
+ [12] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7559-7566. IEEE, 2018.
218
+
219
+ [13] M. T. Mason. Mechanics and planning of manipulator pushing operations. The International Journal of Robotics Research, 5(3):53-71, 1986.
220
+
221
+ [14] K. M. Lynch. Estimating the friction parameters of pushed objects. In Proceedings of 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'93), volume 1, pages 186-193. IEEE, 1993.
222
+
223
+ [15] K. M. Lynch and M. T. Mason. Stable pushing: Mechanics, controllability, and planning. The international journal of robotics research, 15(6):533-556, 1996.
224
+
225
+ [16] J. Zhou, Y. Hou, and M. T. Mason. Pushing revisited: Differential flatness, trajectory planning, and stabilization. The International Journal of Robotics Research, 38(12-13):1477-1489, 2019.
226
+
227
+ [17] A. Byravan and D. Fox. Se3-nets: Learning rigid body motion using deep neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 173-180. IEEE, 2017.
228
+
229
+ [18] A. Byravan, F. Leeb, F. Meier, and D. Fox. Se3-pose-nets: Structured deep dynamics models for visuomotor control. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3339-3346. IEEE, 2018.
230
+
231
+ [19] Y. Ye, D. Gandhi, A. Gupta, and S. Tulsiani. Object-centric forward modeling for model predictive control. In Conference on Robot Learning, pages 100-109. PMLR, 2020.
232
+
233
+ [20] Z. Xu, Z. He, J. Wu, and S. Song. Learning 3d dynamic scene representations for robot manipulation. arXiv preprint arXiv:2011.01968, 2020.
234
+
235
+ [21] J. Wang, C. Hu, Y. Wang, and Y. Zhu. Dynamics learning with object-centric interaction networks for robot manipulation. IEEE Access, 9:68277-68288, 2021.
236
+
237
+ [22] B. Huang, S. D. Han, A. Boularias, and J. Yu. Dipn: Deep interaction prediction network with application to clutter removal. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4694-4701. IEEE, 2021.
238
+
239
+ [23] This work will be soon published. 2022.
240
+
241
+ [24] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5):1-12, 2019.
242
+
243
+ [25] Z. Xu, J. Wu, A. Zeng, J. B. Tenenbaum, and S. Song. Densephysnet: Learning dense physical object representations via multi-step dynamic interactions. arXiv preprint arXiv:1906.03853, 2019.
244
+
245
+ [26] C. Xie, Y. Xiang, A. Mousavian, and D. Fox. Unseen object instance segmentation for robotic environments. IEEE Transactions on Robotics, 37(5):1343-1359, 2021.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/4nt6RUGmILw/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,291 @@
1
+ # i-Sim2Real: Reinforcement Learning of Robotic Policies in Tight Human-Robot Interaction Loops
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Sim-to-real transfer is a powerful paradigm for robotic reinforcement learning. The ability to train policies in simulation enables safe exploration and large-scale data collection quickly at low cost. However, prior works in sim-to-real transfer of robotic policies typically do not involve any human-robot interaction because accurately simulating human behavior is an open problem. In this work, our goal is to leverage the power of simulation to train robotic policies that are proficient at interacting with humans upon deployment. But there is a chicken and egg problem - how to gather examples of a human interacting with a physical robot so as to model human behavior in simulation without already having a robot that is able to interact with a human? Our proposed method, Iterative-Sim-to-Real (i-S2R), attempts to address this. i-S2R bootstraps from a simple model of human behavior and alternates between training in simulation and deploying in the real world. In each iteration, both the human behavior model and the policy are refined. We evaluate our method on a real world robotic table tennis setting, where the objective for the robot is to play cooperatively with a human player for as long as possible. Table tennis is a high-speed, dynamic task that requires the two players to react quickly to each other's moves, making for a challenging test bed for research on human-robot interaction. We present results on an industrial robotic arm that is able to cooperatively play table tennis with human players, achieving rallies of 22 successive hits on average and 150 at best. Further, for 80% of players, rally lengths are ${70}\%$ to ${175}\%$ longer compared to the sim-to-real (S2R) baseline.
12
+
13
+ Keywords: sim-to-real, human-robot interaction, reinforcement learning
14
+
15
+ ## 1 Introduction
16
+
17
+ Sim-to-real transfer has emerged as a dominant paradigm for learning-based robotics. Real world training is often slow, cost-prohibitive, and poses safety-related challenges, so training in simulation is an attractive alternative and has been explored for a number of real world tasks, including object manipulation $\left\lbrack {1,2,3,4}\right\rbrack$ , legged robot locomotion $\left\lbrack {5,6}\right\rbrack$ , and aerial navigation $\left\lbrack {7,8}\right\rbrack$ . However, one element that is missing in this prior work is that the policies are not trained to be proficient at interacting with humans upon deployment. The utility of sim-to-real learning can be greatly increased if we extend it to settings where the trained policies need to interact with humans in a close, tight-loop fashion upon deployment. One of the major promises of learning-based robotics is to deploy robots in human-occupied settings, since non-learning robots already work well in deterministic, non-human occupied settings, such as factory floors. However, simulating human behavior is non-trivial (and indeed, one of the primary goals of artificial intelligence research), making it a major bottleneck in sim-to-real research for tasks involving human-robot interaction.
18
+
19
+ One approach to simulating human behavior is imitation learning: given a few examples of human behavior, we can use techniques such as behavior cloning [9, 10] or inverse reinforcement learning [11, 12] to distill that behavior into a policy, and then use these policies to generate human behavior in simulation. However, this approach presents a chicken-and-egg problem: in order to obtain useful examples of human behavior (in the context of human-robot interaction), we need a robot policy that already knows how to interact with humans in the real world, but we cannot learn such a policy without the ability to simulate human behaviors in the first place. The primary contribution of this paper is a practical solution to this problem.
22
+
23
+ Our proposed method involves learning a coarse model of human behavior from initial data collected in the real world to bootstrap reinforcement learning of robotic policies in simulation. Deploying this learned policy in the real world now allows us to collect data in which the human subjects meaningfully interact with the robot. We then use this real world experience to improve our human behavior model, and continue training the robot policy in simulation under this updated model. We repeat this iterative process until a desired level of performance is achieved.
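Schematically, the iteration described above can be written as the following loop; the three callables are placeholders for the components in the text (fitting a human behavior model, RL training in simulation, and real-world deployment with data collection) rather than actual APIs of this work.

```python
def iterative_sim_to_real(fit_human_model, train_in_sim, deploy_and_collect,
                          init_human_data, num_iterations=3):
    """Schematic i-S2R loop; all arguments are stand-ins for the components in the text."""
    human_model = fit_human_model(init_human_data)   # coarse bootstrap model (no robot yet)
    policy = None
    for _ in range(num_iterations):
        policy = train_in_sim(policy, human_model)   # RL in simulation under current human model
        real_data = deploy_and_collect(policy)       # humans now react to this policy
        human_model = fit_human_model(real_data)     # refine the behavior model from real play
    return policy
```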
24
+
25
+ We present results on a task involving a robot playing table tennis with non-professional human players (see Figure 1). Table tennis is a high-speed, dynamic task that requires close, tight-loop interactions between the two players (in this case, a human and a robot). We build an initial model of the human player's ball trajectories without a robot present and iteratively refine the robot and player models as they play together, ultimately resulting in a robot policy that can hold rallies of 22 successive hits on average and 150 at best.
26
+
27
+ ![01963f23-b991-7841-b4a3-3c8259de029d_1_1066_498_420_321_0.jpg](images/01963f23-b991-7841-b4a3-3c8259de029d_1_1066_498_420_321_0.jpg)
28
+
29
+ Figure 1: Robot setup An ABB IRB 120T 6-DOF robotic arm is mounted to a two-dimensional Festo linear actuator, creating an 8-DOF system.
30
+
31
+ While we demonstrate our approach on table tennis, we believe it is applicable more broadly, and can be applied to a number of tasks. For example, if the task involved a robot navigating through a busy hallway, we would first model the motion of human subjects alone (using motion capture devices, or a computer vision pipeline), and then train a policy in simulation with simulated human paths (so as to avoid collisions). Once this learned policy is deployed in the real world, the humans would likely alter their behavior in response to the robot, and capturing this data would allow us to create a more accurate human behavior model, which would further help us train a better policy. The process can be repeated until both human and robot behaviors converge, which would likely result in some co-adapted equilibrium point for the human and robot.
32
+
33
+ In summary, the primary contributions of this paper are: (a) a framework for training robotic policies in simulation that would need to interact with human subjects upon deployment, (b) a real world instantiation of this framework on a high-speed, dynamic task requiring tight, closed-loop interactions between humans and robots, (c) a detailed assessment of how our method, which we call Iterative-Sim-to-Real (i-S2R), compares with a baseline sim-to-real approach in the domain of cooperative robotic table tennis, and (d) the first robotic table tennis policy trained to control robot joints using reinforcement learning that can handle a wide variety of balls and can rally consistently with nonprofessional humans. To see videos of our system in action, please see the supplementary materials and this website https://sites.google.com/view/i-s2r.
34
+
35
+ ## 2 Related Work
36
+
37
+ Sim-to-real Learning for Robotics Reinforcement Learning (RL) is a powerful paradigm for learning increasingly capable and robust robot controllers $\left\lbrack {{13},{14},{15}}\right\rbrack$ . However, learning controllers from scratch on a physical robot is often prohibitively time consuming due to the large number of samples required to learn competent policies and potentially unsafe due to the random exploration inherent in RL methods [16, 17]. Training policies in simulation and transferring them to a physical robot, known as sim-to-real transfer (S2R), is therefore appealing.
38
+
39
+ Whilst it is both fast and safe to train agents from scratch in simulation, S2R presents its own challenge - persistent differences between simulated and real-world environments that are extremely difficult to overcome [17, 18]. No single technique has been found to bridge the gap by itself; instead, a combination of multiple techniques is typically required for successful transfer. These include system identification [13, 19, 20, 21, 22], which may involve iterating with a physical robot in the loop [2, 23], building hybrid simulators with learned models [5, 13, 22], dynamics randomization [1, 2, 5, 6, 13, 14, 15], simulated latency [15, 22], and more complex network architectures [13]. We use (1) system ID with a physical robot in the loop, (2) dynamics randomization, (3) simulated latency, and (4) more complex networks. Similarly to Lee et al. [13], we use a 1D CNN to represent control policies. Yet a sim-to-real gap persists. Continuing to train in the real world [24, 25, 26] (known as fine-tuning) is an effective way to bridge the remaining gap since the policy can adapt to changes in the environment. We also utilize fine-tuning in this work, but, unlike most past work, our learned policy is expected to interact cooperatively with a real human during this fine-tuning phase.
40
+
41
+ The closest sim-to-real approaches in prior work are Chebotar et al. [2] and Farchy et al. [23], since they update simulation parameters based on multiple iterations of real-world data collection interleaved with simulated training. However, both of these prior works focus on using real-world interaction data to learn improved physical parameters for the simulator, whereas our method focuses on learning better human behavior models. Unlike these prior works, our learned policies are proficient at interacting with humans upon deployment in the real world.
42
+
43
+ Reinforcement Learning for Table Tennis Robotic table tennis is a challenging, dynamic task [27] that has been a test bed for robotics research since the 1980s [28, 29, 30, 31, 32]. The current exemplar is the Omron robot [33]. Until recently, most methods tackled the problem by identifying a virtual hitting point for the racket [34, 35, 36, 37, 38, 39, 40, 41]. These methods depend on being able to predict the ball state at time $t$ either from a ball dynamics model which may be parameterized [34, 35, 42, 43] or by learning to predict it [32, 37, 38]. This results in a target paddle state or states and various methods are used to generate robot joint trajectories given these targets [32, 34, 35, 42, 43, 44, 45, 46, 47, 48, 49]. More recently, Tebbe et al. [50] learned to predict the paddle target using RL.
44
+
45
+ An alternative line of research seeks to do away with hitting points and ball prediction models, instead focusing on high frequency control of a robot's joints using either RL [27, 38, 51] or learning from demonstrations [45, 52, 53]. Of these, Büchler et al. [27] is the most similar, training RL policies to control robot joints from scratch at high frequencies given ball and robot states as policy inputs. However, Büchler et al. [27] restrict the task to playing with a ball thrower on a single setting, whereas we focus on the harder problem of cooperative play with different humans.
46
+
47
+ Most prior work simplifies the problem by focusing on play with a ball thrower. Only a few [45, 48, 50, 54] focus on cooperative rallying with a human. Of these, Tebbe et al. [50] is the most similar, evaluating policies on various styles of human-robot cooperative play. However, Tebbe et al. [50] simplify the environment to a single-step bandit and the policy learns to predict the paddle state given the ball state at a pre-determined hit time $t$. In contrast, we learn closed-loop policies that operate at a high frequency (75 Hz), removing the need for the learned policy to accurately predict where the ball will be in the future, increasing the robustness of the system, and enabling more dynamic play.
48
+
49
+ Human Robot Interaction Although not a typical HRI benchmark, cooperative robotic table tennis exhibits many of the features studied in the field: a human and robot working together, complex interactions between the two, inferring actions based on non-explicit cues, and so on. A major challenge in HRI is effectively modeling the complexities of human behavior in simulation [55] in order to learn without requiring an actual human. We employ several common techniques from HRI to learn in simulation such as simplifying the human model [56], specialized models for specific players [57], and refining our model based on real world interactions. Finally we note that like us, Paleja et al. [58] found policy performance varied depending on the skill of the human player.
50
+
51
+ ## 3 Preliminaries
52
+
53
+ Problem Setting We consider the problem of cooperative human-robot table tennis as a single-agent sequential decision making problem in which the human is a part of the environment. We formalize the problem as a Markov Decision Process (MDP) [59] consisting of a 4-tuple $(\mathcal{S}, \mathcal{A}, \mathcal{R}, p)$, whose elements are the state space $\mathcal{S}$, action space $\mathcal{A}$, reward function $\mathcal{R} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, and transition dynamics $p : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$. An episode $(s_0, a_0, r_0, \ldots, s_n, a_n, r_n)$ is a finite sequence of elements $s \in \mathcal{S}$, $a \in \mathcal{A}$, $r \in \mathbb{R}$, beginning with a start state $s_0$ and ending when the environment terminates. We define a parameterized policy $\pi_\theta : \mathcal{S} \rightarrow \mathcal{A}$ with parameters $\theta$. The objective is to maximize $\mathbb{E}\left[ \sum_{t=1}^{N} r(s_t, \pi_\theta(s_t)) \right]$, the expected cumulative reward obtained in an episode under $\pi_\theta$.
54
+
55
+ We make two simplifications to our problem. First, we focus on rallies starting with a hit instead of a table tennis serve to make the data more uniform. Second, an episode consists of a single ball throw and return. Policies are therefore rewarded based on their ability to return balls to the opposite side of the table. This reward structure encourages longer rally length, as an agent that can return any ball can also rally indefinitely provided the simulated single shots overlap with the real rally shots.
56
+
57
+ ![01963f23-b991-7841-b4a3-3c8259de029d_3_350_223_1118_389_0.jpg](images/01963f23-b991-7841-b4a3-3c8259de029d_3_350_223_1118_389_0.jpg)
58
+
59
+ Figure 2: Iterative-Sim-to-Real. left We start with a coarse bootstrap model of human behavior (shown in yellow), and use it to train an initial robot policy in simulation. We then fine-tune this policy in the real world against a human player, and the human interaction data collected during this period is used to update the human behavior model used in simulation. We then take the fine-tuned policy back to simulation to further train it against the improved human behavior model, and this process is repeated until robot and human behaviors converge. right Specific i-S2R details used in this work. The $x$-axis represents the training iterations in sim, and the $y$-axis represents the fine-tuning iterations in real with a human in the loop. Model names are in italics.
60
+
61
+ Evolutionary Strategies Our proposed approach can be used with any RL algorithm, but we optimize our policies using evolutionary strategies (ES) [60, 61, 62, 63, 64], which have been shown to be an effective approach for solving MDPs [62, 64]. The main idea behind ES is to maximize the Gaussian smoothing of the RL objective described above. Let $F(\theta)$ be the RL objective, where $\theta$ are the policy parameters; then the ES objective is given by:
62
+
63
+ $$
64
+ F_{\sigma}(\theta) = \mathbb{E}_{\delta \sim \mathcal{N}(0, \mathbf{I}_d)}\left[ F(\theta + \sigma \delta) \right], \tag{1}
65
+ $$
66
+
67
+ where $\sigma > 0$ controls the precision of the smoothing, and $\delta$ is a random normal vector with the same dimension as the policy parameters $\theta$ . We apply common ES approaches such as state normalization [62, 65], reward normalization [64], and perturbation filtering [62]. We also repeat and average rollouts with the same parameters to reduce variance. See Appendix A for details.
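+
+ To make the optimization concrete, the following is a minimal sketch of a single antithetic ES update on the smoothed objective in Equation 1. It is illustrative only: the rollout function, population size, and learning rate are placeholder assumptions, and the normalization and filtering tricks mentioned above are omitted.
+
+ ```python
+ import numpy as np
+
+ def es_update(theta, rollout_return, sigma=0.05, lr=0.01, num_perturbations=32):
+     """One antithetic evolutionary-strategies step on F_sigma(theta) (Eq. 1).
+
+     `rollout_return(params)` is a placeholder for evaluating the policy with the
+     given parameters in simulation and returning the episode return F(params).
+     """
+     d = theta.shape[0]
+     grad = np.zeros(d)
+     for _ in range(num_perturbations):
+         delta = np.random.randn(d)                      # delta ~ N(0, I_d)
+         f_plus = rollout_return(theta + sigma * delta)
+         f_minus = rollout_return(theta - sigma * delta)
+         # Antithetic estimate of grad F_sigma(theta) = E[F(theta + sigma*delta) delta] / sigma
+         grad += (f_plus - f_minus) / (2.0 * sigma) * delta
+     grad /= num_perturbations
+     return theta + lr * grad                            # gradient ascent on the smoothed objective
+ ```
+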
68
+
69
+ ## 4 Method
70
+
71
+ i-S2R consists of two core components: (1) an iterative procedure for progressively updating and learning from a human behavior model - the human ball distribution in this setting - and (2) a method for modeling human behavior in simulation given a dataset of human play gathered in the real world (see Figure 2 for an overview). We first describe our iterative training procedure, and then discuss how we model human ball distributions.
72
+
73
+ Iterative training procedure An overview of the method can be seen in Figure 2. First we gather an initial dataset, ${D}_{0}$ , from player $P$ hitting table tennis balls across the table without a robot present. From ${D}_{0}$ , we build our first human behavior model ${M}_{0}$ that defines a ball distribution (see below). A robot policy is trained in simulation to return balls sampled from ${M}_{0}$ . Once the policy has converged, we transfer the parameters, ${\theta }_{0S}$ , to a real robotic system. The model is fine-tuned whilst player $P$ plays cooperatively (i.e. trying to maximize rally length) with the robot for a fixed number of parameter updates to produce ${\theta }_{0R}$ . All of the human hits during this fine-tuning phase are added to ${D}_{0}$ to form ${D}_{1}$ , which is used to define ${M}_{1}$ . The policy weights, ${\theta }_{0R}$ , are then transferred back to simulation and training is continued with the new distribution ${M}_{1}$ . After training in sim, the policy weights ${\theta }_{1S}$ are transferred back to the real world. The fine-tuning process is repeated to produce the next set of policy parameters ${\theta }_{1R}$ , dataset ${D}_{2}$ , and human model ${M}_{2}$ . This process can be repeated as many times as needed. One useful method for knowing when to stop is to check the change in human model in each iteration. See Appendix B for more details.
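+
+ As a compact summary of this loop, the sketch below expresses the procedure in code. The three components are passed in as functions because their concrete implementations (simulated training, real-world fine-tuning with a human, and ball-distribution fitting) are described elsewhere in this section and the appendices; the names used here are placeholders, not APIs from our codebase.
+
+ ```python
+ def i_s2r(initial_dataset, fit_ball_distribution, train_in_sim, finetune_in_real,
+           num_iterations=3):
+     """Iterative-Sim-to-Real: alternate simulated training with real-world fine-tuning.
+
+     The three callables are placeholders for the components described in Section 4:
+       fit_ball_distribution(dataset) -> human behavior model M_i
+       train_in_sim(theta, model)     -> policy parameters theta_iS
+       finetune_in_real(theta)        -> (theta_iR, list of new human hits)
+     """
+     dataset = list(initial_dataset)        # D_0: human hits collected without a robot
+     theta = None                           # policy parameters (randomly initialized in sim)
+     for _ in range(num_iterations):
+         model = fit_ball_distribution(dataset)      # M_i
+         theta = train_in_sim(theta, model)          # train (or continue training) in simulation
+         theta, new_hits = finetune_in_real(theta)   # fine-tune with the human in the loop
+         dataset.extend(new_hits)                    # D_{i+1} = D_i plus the new hits
+     return theta
+ ```
+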
74
+
75
+ Modeling human ball distributions One of our primary goals is to simulate human player behaviors from a set of real world ball trajectories that have been subjected to air drag, gravity, and spin. Due to perception challenges in the real world, we do not explicitly model spin. The input to this procedure is a dataset of ball trajectories, where each trajectory consists of a sequence of ball positions. The output is a uniform ball distribution defined by 16 numbers: the min and max of the initial ball position (6), velocity (6), and the $x$ and $y$ ball landing locations on the robot's side (4).
76
+
77
+ The ball distribution is derived from the dataset in two stages. The first step is to estimate the ball's initial position and velocity for each trajectory. We do this by selecting the free-flight part of the trajectory (before the first bounce) and minimizing the Euclidean distance between the simulated and real trajectory using the Nelder-Mead method [66]. We use the model $\ddot{x}_t = g - K_d \|\dot{x}_t\| \dot{x}_t$, $x_{t+1} = x_t + \Delta t \left( \dot{x}_t + \frac{\Delta t\, \ddot{x}_t}{2} \right)$, $\dot{x}_{t+1} = \dot{x}_t + \Delta t\, \ddot{x}_t$ to simulate a trajectory, where (1) $x_t$, $\dot{x}_t$, and $\ddot{x}_t$ denote the position, velocity, and acceleration of the ball at time $t$, (2) $g = -9.81\,\mathrm{m/s^2}\,[0, 0, 1]^T$ is the gravitational acceleration, and (3) $K_d = C_d \rho \frac{A}{2m}$, where $m = 0.0027\,\mathrm{kg}$ is the ball's mass, $\rho = 1.29\,\mathrm{kg/m^3}$ is the air density, $C_d = 0.47$ is the drag coefficient, and $A = 1.256 \times 10^{-3}\,\mathrm{m^2}$ is the cross-sectional area of a standard table tennis ball.
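+
+ A minimal sketch of this fitting step is given below, assuming ball positions are sampled at a fixed rate. The timestep, sampling rate, and optimizer settings are illustrative placeholders rather than the values used in our system, and bounces and spin are ignored exactly as in the free-flight model above.
+
+ ```python
+ import numpy as np
+ from scipy.optimize import minimize
+
+ G = np.array([0.0, 0.0, -9.81])                  # gravity vector (m/s^2)
+ KD = 0.47 * 1.29 * 1.256e-3 / (2 * 0.0027)       # K_d = C_d * rho * A / (2 m)
+
+ def simulate_free_flight(x0, v0, num_steps, dt=1.0 / 75.0):
+     """Integrate x'' = g - K_d ||x'|| x' with the semi-implicit steps from the text."""
+     x = np.asarray(x0, dtype=float)
+     v = np.asarray(v0, dtype=float)
+     xs = [x]
+     for _ in range(num_steps - 1):
+         a = G - KD * np.linalg.norm(v) * v
+         x = x + dt * (v + 0.5 * dt * a)
+         v = v + dt * a
+         xs.append(x)
+     return np.stack(xs)
+
+ def fit_initial_state(observed_positions, dt=1.0 / 75.0):
+     """Estimate the pre-bounce initial position and velocity with Nelder-Mead."""
+     observed_positions = np.asarray(observed_positions, dtype=float)   # (N, 3)
+
+     def loss(params):
+         sim = simulate_free_flight(params[:3], params[3:], len(observed_positions), dt)
+         return np.sum(np.linalg.norm(sim - observed_positions, axis=1))
+
+     # Initialize from the first observation and a finite-difference velocity guess.
+     v_guess = (observed_positions[1] - observed_positions[0]) / dt
+     init = np.concatenate([observed_positions[0], v_guess])
+     result = minimize(loss, init, method="Nelder-Mead")
+     return result.x[:3], result.x[3:]
+ ```
+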
78
+
79
+ We remove outliers using DBSCAN [67] and take the minimum and maximum per dimension to define the ball distribution. We sample an initial position and velocity from this distribution and generate a ball trajectory in simulation subject to the drag force. Other parameters needed for the simulation, such as the coefficient of restitution, the friction between the ball and the table and between the ball and the paddle, and so on, have been empirically estimated following [68, 69].
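+
+ The sketch below illustrates this second stage, assuming each trajectory has already been reduced by the fitting step above to an 8-dimensional feature vector (initial position, initial velocity, and the x/y landing location). The DBSCAN hyperparameters here are illustrative placeholders, not tuned values from our pipeline.
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import DBSCAN
+
+ def build_ball_distribution(features, eps=0.5, min_samples=5):
+     """Drop outlier throws with DBSCAN, then keep per-dimension min/max bounds.
+
+     `features` is an (N, 8) array with one row per throw: initial position (3),
+     initial velocity (3), and x/y landing location on the robot side (2).
+     Returns the (8,) lower and upper bounds of the uniform ball distribution
+     (16 numbers in total).
+     """
+     features = np.asarray(features, dtype=float)
+     labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
+     inliers = features[labels != -1]                 # DBSCAN marks outliers with label -1
+     return inliers.min(axis=0), inliers.max(axis=0)
+
+ def sample_ball(lower, upper, rng=None):
+     """Sample one simulated throw from the uniform distribution.
+
+     Only the initial position and velocity are needed to roll out the drag model;
+     the landing-location bounds could additionally be used to reject implausible
+     samples (an assumption, not necessarily what our pipeline does).
+     """
+     rng = rng if rng is not None else np.random.default_rng()
+     sample = rng.uniform(lower, upper)
+     return sample[:3], sample[3:6]
+ ```
+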
80
+
81
+ ## 5 System, Simulation, and MDP Details
82
+
83
+ Our real world robotic system (see Figure 1) is a combination of an ABB IRB 120T 6-DOF robotic arm mounted to a two-dimensional Festo linear actuator, creating an 8-DOF system, with a table tennis paddle mounted on the end-effector. The 3D ball position is estimated via a stereo pair of Ximea MQ013CG-ON cameras, from which we process 2D detections, triangulate to 3D, and filter through a 3D tracker. See Appendix C for more details. We concatenate the ball position with the 8-DOF robot joint angles to form an 11-dimensional observation space. Along with the current observation, we pass the past seven observations (a state space of $8 \times 11$) as an input to the policy. The policy controls the robot by outputting eight individual joint velocities at 75 Hz. Following Gao et al. [51], we use a 3-layer 1-D dilated gated convolutional neural network as our policy architecture. Details of the policy architecture can be found in Appendix D.
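+
+ To illustrate the shape of such a policy, the sketch below implements a gated, dilated 1-D CNN over the stacked observation window in PyTorch. This is our own illustrative rendering rather than the exact network described in Appendix D; the channel width, gating scheme, and output scaling are assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class GatedDilatedConvPolicy(nn.Module):
+     """Illustrative gated, dilated 1-D CNN policy over an 8-step observation history.
+
+     Input:  (batch, 11, 8) -- ball position (3) + joint angles (8) over 8 timesteps.
+     Output: (batch, 8)     -- joint velocity commands in [-1, 1], to be rescaled
+                               to each joint's velocity limits.
+     With kernel size 2 and dilations 1, 2, 4, the receptive field is exactly 8 steps.
+     """
+
+     def __init__(self, obs_dim=11, channels=64, num_joints=8):
+         super().__init__()
+         self.filters = nn.ModuleList()
+         self.gates = nn.ModuleList()
+         for i, dilation in enumerate([1, 2, 4]):
+             in_ch = obs_dim if i == 0 else channels
+             self.filters.append(nn.Conv1d(in_ch, channels, kernel_size=2, dilation=dilation))
+             self.gates.append(nn.Conv1d(in_ch, channels, kernel_size=2, dilation=dilation))
+         self.head = nn.Linear(channels, num_joints)
+
+     def forward(self, obs_history):
+         x = obs_history                                            # (batch, 11, 8)
+         for conv_f, conv_g in zip(self.filters, self.gates):
+             x = torch.tanh(conv_f(x)) * torch.sigmoid(conv_g(x))   # gated activation
+         # Sequence length shrinks 8 -> 7 -> 5 -> 1; squeeze the remaining timestep.
+         return torch.tanh(self.head(x.squeeze(-1)))
+ ```
+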
84
+
85
+ Our simulation is built on the PyBullet [70] physics engine, replicating our real environment. We use PyBullet to model robot and contact dynamics, whilst balls are modeled as described in Section 4. We add random uniform noise of $2\times$ the diameter of a table tennis ball to the ball observation per timestep to aid transfer to a physical system. We also found it necessary to simulate sensor latency; otherwise, sim-to-real transfer completely failed. Robot action latencies, as well as ball and robot observation latencies, are modeled as parameterized Gaussians based on measurements from the real system. Policies are rewarded for hitting balls and for returning balls in a cooperative manner. See Appendix F for more details.
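+
+ The sketch below illustrates these two transfer aids: per-timestep uniform noise on the simulated ball observation and a Gaussian-sampled sensing delay. The noise range follows the 2x ball diameter figure above (interpreted here as a range spanning two diameters), while the latency mean and standard deviation are placeholder values standing in for the measurements from the real system.
+
+ ```python
+ import collections
+ import numpy as np
+
+ BALL_DIAMETER = 0.04  # m, standard table tennis ball
+
+ class NoisyDelayedBallSensor:
+     """Adds observation noise and a sampled sensing latency to simulated ball positions."""
+
+     def __init__(self, latency_mean_s=0.030, latency_std_s=0.005, rng=None):
+         self.rng = rng if rng is not None else np.random.default_rng()
+         self.latency_mean_s = latency_mean_s   # placeholder for the measured mean latency
+         self.latency_std_s = latency_std_s     # placeholder for the measured std
+         self.buffer = collections.deque()      # (visible_from_time, noisy_position)
+
+     def push(self, sim_time, true_ball_position):
+         """Record a simulated ball reading; it becomes visible only after a sampled delay."""
+         latency = max(0.0, self.rng.normal(self.latency_mean_s, self.latency_std_s))
+         # Uniform noise spanning twice the ball diameter, resampled every timestep.
+         noise = self.rng.uniform(-BALL_DIAMETER, BALL_DIAMETER, size=3)
+         self.buffer.append((sim_time + latency, np.asarray(true_ball_position) + noise))
+
+     def read(self, sim_time):
+         """Return the newest reading whose delay has elapsed (None if nothing new yet).
+
+         For simplicity, readings are delivered in the order they were pushed.
+         """
+         latest = None
+         while self.buffer and self.buffer[0][0] <= sim_time:
+             latest = self.buffer.popleft()[1]
+         return latest
+ ```
+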
86
+
87
+ ## 6 Experimental Results
88
+
89
+ Here we aim to answer the following questions: (1) Does i-S2R improve over baseline sim-to-real with fine-tuning (which we refer to as S2R) in a human-robot interactive setting where the human behavior changes in response to the robot policy? (2) How many sim-to-real iterations does the human behavior model need to converge? (3) How much of i-S2R's performance can be attributed to (a) improving the human behavior model vs. (b) the additional training steps in simulation? and (4) Does i-S2R generalize better to new players compared with S2R?
90
+
91
+ Experimental setup To evaluate our method, we completed the procedure described in Section 4 for five different non-professional table tennis players, thus training five independent i-S2R policies. Each player also trained (1) an S2R baseline, which was given the same amount of real world training time as the i-S2R policy, and (2) an S2R-Oracle, which was trained in simulation on the penultimate human behavior model obtained through i-S2R and fine-tuned in the real world for 35% of the i-S2R training budget. This is equivalent to the last round of fine-tuning for i-S2R (see Figure 2, right). S2R-Oracle is intended to isolate the effect of the human behavior modeling on final performance, enabling us to better understand what aspects of the i-S2R process matter.
92
+
93
+ Each model was evaluated by (a) the model's trainer and (b) two other players. In each evaluation, 50 rallies (defined as a sequence of consecutive hits ending when one player fails to return the ball)
94
+
95
+ ![01963f23-b991-7841-b4a3-3c8259de029d_5_333_202_1131_331_0.jpg](images/01963f23-b991-7841-b4a3-3c8259de029d_5_333_202_1131_331_0.jpg)
96
+
97
+ Figure 3: Aggregated results Boxplot details: The white circle is the mean, the horizontal line is the median, box bounds are the 25th and 75th percentiles. "out-of-sim" refers to models that are deployed on the real hardware with zero fine-tuning (see Figure 2). left When aggregated across all players, i-S2R rally length is higher than S2R by about $9\%$ . However, note that simple aggregation puts extra weight on higher skilled players that are able to hold a longer rally. center The normalized rally length distribution (see Appendix I for normalization details) shows a bigger improvement between i-S2R and S2R in terms of the mean, median and 25th and 75th percentiles. right The histogram of rally lengths for i-S2R and S2R (250 rallies per model) shows that a large fraction of the rallies for S2R are shorter (i.e. less than 5), while i-S2R achieves longer rallies more frequently.
98
+
99
+ ![01963f23-b991-7841-b4a3-3c8259de029d_5_323_829_1152_331_0.jpg](images/01963f23-b991-7841-b4a3-3c8259de029d_5_323_829_1152_331_0.jpg)
100
+
101
+ Figure 4: Results by player skill. When broken down by player skill, we notice that i-S2R has a significantly longer rally length than S2R and is comparable to S2R-Oracle for beginner and intermediate players. The advanced player is an exception. Note that S2R-Oracle gets just 35% of i-S2R and S2R fine-tuning budget.
102
+
103
+ were played with the human always starting and the rally length calculated as the number of paddle touches for both the human and robot. While the human can be responsible for a rally ending, almost all ended with the robot failing to return the ball or returning it such that the human could not easily continue the rally. The model trainer also evaluated intermediate checkpoints (see Figure 2) using the same methodology to shed light on the training dynamics. To ensure fair evaluation, all models were tested in random order and the identity of the model was kept hidden from the evaluator ("blind eval"). Further details can be found in Appendix G.
104
+
105
+ Due to the time needed to train and evaluate i-S2R, S2R, and S2R-Oracle (roughly 20 hours per person) we note that 4 of the 5 players are authors on this paper. The non-author player's results appear consistent with our overall findings (see Appendix J for details).
106
+
107
+ (1) Does i-S2R improve over S2R in a human-robot interactive setting? Figure 3 presents rally length distributions aggregated across all players whilst Figure 4 splits the data by skill. Players are grouped into beginner (40% of players), intermediate (40% of players), and advanced (20% of players). The non-author player was classified as beginner. Please see Appendix H for skill level definitions. When aggregated over all players, we see that i-S2R is able to hold longer rallies (i.e. rallies that are longer than length 5) at a much higher rate than S2R, as shown in Figure 3. When the players are split by skill level, i-S2R significantly outperforms S2R for both beginner and intermediate players (80% of the players). The improvement differs between the two groups, with i-S2R yielding a $\approx 70\%$ and $\approx 175\%$ improvement for beginner and intermediate players, respectively.
108
+
109
+ The policy trained by the advanced player has a different trend. Here, S2R dramatically outperforms i-S2R. We hypothesize that a good out-of-sim model (after 1st round of sim training) plays a large part in this difference (see Figure 5). However, due to the time consuming nature of repeating experiments on the physical system it is difficult to fully explain why this is the case, especially since both the training methodology and involvement of humans introduces a high degree of variance.
110
+
111
+ ![01963f23-b991-7841-b4a3-3c8259de029d_6_320_201_1154_321_0.jpg](images/01963f23-b991-7841-b4a3-3c8259de029d_6_320_201_1154_321_0.jpg)
112
+
113
+ Figure 5: Policy performance at key checkpoints during training. For beginner players, i-S2R performance converges after just two iterations (see fine-tune-65%). For intermediate players, i-S2R takes three iterations to converge (see fine-tune-100%). "S2R-Oracle-sim-3" here is the same as "S2R-Oracle-out-of-sim" in Figure 4.
114
+
115
+ ![01963f23-b991-7841-b4a3-3c8259de029d_6_333_633_1120_247_0.jpg](images/01963f23-b991-7841-b4a3-3c8259de029d_6_333_633_1120_247_0.jpg)
116
+
117
+ Figure 6: While the key distribution parameters change significantly from initial ball distribution (sim1) to that after 1st round of sim training (sim2), the change in the parameters between 1st and 2nd round of sim training is much less (sim2 vs. sim3).
118
+
119
+ (2) How many sim-to-real iterations does the human behavior model take to converge? For beginners, we find that it only took two iterations for i-S2R to converge (see Figure 5). In the leftmost chart showing beginner policy data, i-S2R achieves comparable levels of performance at the end of the 2nd (fine-tune-65%) and final (fine-tune-100%) iterations. However, for intermediate-skilled players this is not the case. The change in the human behavior model (ball distribution) from iteration to iteration, shown in Figure 6, offers a clue. For beginner players, the distribution barely changes after the 2nd round, as evidenced by the difference between the left and right charts, whereas for intermediate players the distribution continues to change substantially from round 2 to 3 (specifically in the $y$ and $z$ velocities). This is perhaps why we see the strongest performance of i-S2R after the 2nd iteration for beginners but after the 3rd iteration for intermediate players.
120
+
121
+ The advanced player's distribution hardly changes between the 2nd and 3rd rounds, and the performance of i-S2R is comparable across both. However, this does not explain why we observed the best i-S2R performance at the end of the 1st round for this player. We hypothesize that a good out-of-sim model after the first round of training (see Figure 5) plays a large part in this. Future work could shed light on this by investigating how playing style affects the change in the ball distribution from iteration to iteration (and hence the sim-to-real gap), or by training for more iterations with advanced players.
122
+
123
+ (3) What is the impact of the human behavior model? For beginner and intermediate players, S2R-Oracle is in line with i-S2R performance. However, S2R-Oracle achieved this level of performance with just 35% of the real world training time given to i-S2R and S2R. Therefore, much of the benefit of i-S2R likely comes from improving the human behavior model from iteration to iteration. It also suggests that if we had access to the final human behavior model at the beginning of training, the iterative sim-to-real training would not be needed: we could simply fine-tune in the real world and achieve comparable performance with significantly less human training time. S2R-Oracle's strong performance also validates our motivation for this work, in which we hypothesized that the difficulty of defining a good human behavior model a priori for human-robot cooperative rallies was limiting performance.
124
+
125
+ This result suggests that i-S2R does not benefit from additional training iterations in simulation over and above the improvements to the human behavior model. The evaluations at earlier stages in training (shown in Figure 5) suggest the remaining sim-to-real gap could be responsible. Figure 5 shows that, in all cases, after both the second (sim-2) and third (sim-3) rounds of simulated training, rally length drops noticeably. Reducing the sim-to-real gap might improve i-S2R's performance due to better starting points for the last two rounds of fine-tuning.
126
+
127
+ (4) Does i-S2R offer any generalization benefits in this setting? We now evaluate the generalization capabilities of models trained with i-S2R, and how they compare against models trained using S2R. As shown in Figure 7, i-S2R significantly outperforms S2R when the models are cross-evaluated by other players (with similar blind evaluations as earlier), including for the advanced player, for whom S2R was best in self-evaluation (see Appendix J for details by player). This observation holds whether we look at absolute or normalized rally length (see Appendix I for normalization methodology details). Performance with other players is lower for all models; however, i-S2R maintains around 70% of its performance on average compared to 30% for S2R. We hypothesize that the broader training distribution obtained by iterating between simulation and reality leads to policies that can deal with a wider range of ball throws, leading to better generalization to new players. Our confidence in this hypothesis is strengthened by the fact that both i-S2R and S2R-Oracle significantly outperform S2R under this setting.
128
+
129
+ ![01963f23-b991-7841-b4a3-3c8259de029d_7_1123_413_387_622_0.jpg](images/01963f23-b991-7841-b4a3-3c8259de029d_7_1123_413_387_622_0.jpg)
130
+
131
+ Figure 7: Cross-evaluations mean rally lengths (with ${95}\%$ CI) aggregated across all players. i-S2R generalizes better to new players compared to S2R.
132
+
133
+ ## 7 Limitations
134
+
135
+ Having a human in the loop poses numerous challenges to robotic reinforcement learning. It slows down the overall learning process to accommodate human participants and limits the scale at which one can experiment. As one example, while we tested our method on five subjects, time limitations prevented us from training with multiple random seeds for each subject. There is significant variation in how people interact with robots (or sometimes even in how the same person interacts over time), which introduces extra variance. In our experiments, the trends we saw for one particular subject were substantially different from those for all other subjects, and we could not come to a clear explanation of why that was the case.
136
+
137
+ It is possible for an expert human player to get long rallies by keeping the ball in a very narrow distribution without really improving the inherent capability of the agent to play beyond those balls. In our studies, since we used non-professional players, this was not an issue. However, for future work in cooperative human-robot tasks, it would be interesting to explore ways to disentangle the skill level of the robot from the human participant.
138
+
139
+ Another limitation arising from training a policy with a human in the loop is the possibility that some performance improvements are attributable to human learning and not policy learning. We did our best to mitigate this by asking players to evaluate all models "blind" (i.e. the player is unaware of what model they are evaluating) and at the end of training, after which the majority of human learning was likely to have occurred. Consequently, we think that differences between models reflect differences in policy capability and not human skill.
140
+
141
+ Finally, we represent humans in simulation in a simple way - by capturing all initial position and velocity ranges during their play - and then we sample each ball in simulation uniformly and independently. This ignores the probability distribution of balls within those ranges and also results in a loss of correlation between subsequent balls in a rally. This could be addressed by developing a more sophisticated ball model that takes these factors into account.
142
+
143
+ ## 8 Conclusion
144
+
145
+ We present i-S2R to learn RL policies that are able to interact with humans by iteratively training in simulation and fine-tuning in the real world with humans in the loop. The approach starts with a coarse model of human behavior and refines it over a series of fine-tuning iterations. The effectiveness of this method is demonstrated in the context of a table tennis rallying task. Extensive "blind" experiments shed light on various aspects of the method and compare it against a baseline where we train and fine-tune in real only once (S2R). We show that i-S2R outperforms S2R in aggregate, and the difference in performance is particularly significant for beginner and intermediate players (4/5). Moreover, i-S2R generalizes much better than S2R to other players.
146
+
147
## References
148
+
149
+ [1] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, May 21-25, 2018, pages 1-8. IEEE, 2018.
150
+
151
+ [2] Y. Chebotar, A. Handa, V. Makoviychuk, M. Macklin, J. Issac, N. D. Ratliff, and D. Fox. Closing the sim-to-real loop: Adapting simulation randomization with real world experience. In International Conference on Robotics and Automation, ICRA 2019, Montreal, QC, Canada, May 20-24, 2019, pages 8973-8979. IEEE, 2019.
152
+
153
+ [3] M. Andrychowicz, B. Baker, M. Chociej, R. Józefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, and W. Zaremba. Learning dexterous in-hand manipulation. Int. J. Robotics Res., 39(1), 2020.
154
+
155
+ [4] S. Kataoka, S. K. S. Ghasemipour, D. Freeman, and I. Mordatch. Bi-manual manipulation and attachment via sim-to-real reinforcement learning, 2022. URL https://arxiv.org/abs/2203.08277.
156
+
157
+ [5] J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun, and M. Hutter. Learning agile and dynamic motor skills for legged robots. Sci. Robotics, 4(26), 2019.
158
+
159
+ [6] X. B. Peng, E. Coumans, T. Zhang, T. E. Lee, J. Tan, and S. Levine. Learning agile robotic locomotion skills by imitating animals. In M. Toussaint, A. Bicchi, and T. Hermans, editors, Robotics: Science and Systems XVI, Virtual Event / Corvalis, Oregon, USA, July 12-16, 2020, 2020.
160
+
161
+ [7] F. Sadeghi and S. Levine. CAD2RL: real single-image flight without a single real image. In N. M. Amato, S. S. Srinivasa, N. Ayanian, and S. Kuindersma, editors, Robotics: Science and Systems XIII, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA, July 12-16, 2017, 2017.
162
+
163
+ [8] A. Loquercio, E. Kaufmann, R. Ranftl, M. Müller, V. Koltun, and D. Scaramuzza. Learning high-speed flight in the wild. Sci. Robotics, 6(59), 2021.
164
+
165
+ [9] D. Pomerleau. ALVINN: an autonomous land vehicle in a neural network. In D. S. Touretzky, editor, Advances in Neural Information Processing Systems 1, [NIPS Conference, Denver, Colorado, USA, 1988], pages 305-313.
166
+
167
+ [10] T. Zhang, Z. McCarthy, O. Jow, D. Lee, K. Goldberg, and P. Abbeel. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. arXiv preprint arXiv:1710.04615, 2017.
168
+
169
+ [11] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In C. E. Brodley, editor, Machine Learning, Proceedings of the Twenty-first International Conference (ICML 2004), Banff, Alberta, Canada, July 4-8, 2004, volume 69 of ACM International Conference Proceeding Series. ACM, 2004.
170
+
171
+ [12] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In D. Fox and C. P. Gomes, editors, Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, AAAI 2008, Chicago, Illinois, USA, July 13-17, 2008, pages 1433-1438. AAAI Press, 2008.
172
+
173
+ [13] J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning quadrupedal locomotion over challenging terrain. CoRR, 2020.
174
+
175
+ [14] OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder, L. Weng, Q. Yuan, W. Zaremba, and L. Zhang. Solving rubik's cube with a robot hand. 2019.
176
+
177
+ [15] J. Tan, T. Zhang, E. Coumans, A. Iscen, Y. Bai, D. Hafner, S. Bohez, and V. Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. CoRR, abs/1804.10332, 2018. URL http://arxiv.org/abs/1804.10332.
178
+
179
+ [16] W. Zhao, J. P. Queralta, and T. Westerlund. Sim-to-real transfer in deep reinforcement learning for robotics: a survey. CoRR, 2020.
180
+
181
+ [17] S. Höfer, K. Bekris, A. Handa, J. C. Gamboa, M. Mozifian, F. Golemo, C. Atkeson, D. Fox, K. Goldberg, J. Leonard, C. Karen Liu, J. Peters, S. Song, P. Welinder, and M. White. Sim2real in robotics and automation: Applications and challenges. IEEE Transactions on Automation Science and Engineering, 18(2):398-400, 2021. doi:10.1109/TASE.2021.3064065.
182
+
183
+ [18] M. Neunert, T. Boaventura, and J. Buchli. Why off-the-shelf physics simulators fail in evaluating feedback controller performance - a case study for quadrupedal robots. 2016.
184
+
185
+ [19] S. Zhu, A. Kimmel, K. E. Bekris, and A. Boularias. Model identification via physics engines for improved policy search. CoRR, 2017.
186
+
187
+ [20] M. Kaspar, J. D. M. Osorio, and J. Bock. Sim2real transfer for reinforcement learning without dynamics randomization. CoRR, abs/2002.11635, 2020.
188
+
189
+ [21] J. Tan, Z. Xie, B. Boots, and C. K. Liu. Simulation-based design of dynamic controllers for humanoid balancing. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2729-2736, 2016. doi:10.1109/IROS.2016.7759424.
190
+
191
+ [22] K. Ota, D. K. Jha, D. Romeres, J. van Baar, K. A. Smith, T. Semitsu, T. Oiki, A. Sullivan, D. Nikovski, and J. B. Tenenbaum. Towards human-level learning of complex physical puzzles. CoRR, abs/2011.07193, 2020. URL https://arxiv.org/abs/2011.07193.
192
+
193
+ [23] A. Farchy, S. Barrett, P. MacAlpine, and P. Stone. Humanoid robots learning to walk faster: From the real world to simulation and back. In Proceedings of the 2013 International Conference on Autonomous Agents and Multi-Agent Systems, AAMAS '13, page 39-46, 2013.
194
+
195
+ [24] S. Barrett, M. E. Taylor, and P. Stone. Transfer learning for reinforcement learning on a physical robot. In AAMAS 2010, 2010.
196
+
197
+ [25] A. A. Rusu, M. Vecerík, T. Rothörl, N. Heess, R. Pascanu, and R. Hadsell. Sim-to-real robot learning from pixels with progressive nets. CoRR, 2016.
198
+
199
+ [26] X. Song, Y. Yang, K. Choromanski, K. Caluwaerts, W. Gao, C. Finn, and J. Tan. Rapidly adaptable legged robots via evolutionary meta-learning. CoRR, 2020. URL https://arxiv.org/abs/2003.01239.
200
+
201
+ [27] D. Büchler, S. Guist, R. Calandra, V. Berenz, B. Schölkopf, and J. Peters. Learning to play table tennis from scratch using muscular robots. CoRR, abs/2006.05935, 2020. URL https://arxiv.org/abs/2006.05935.
202
+
203
+ [28] J. Billingsley. Robot ping pong. Practical Computing, 1983.
204
+
205
+ [29] J. Knight and D. Lowery. Pingpong-playing robot controlled by a microcomputer. Microprocessors and Microsystems - Embedded Hardware Design, 1986.
206
+
207
+ [30] J. Hartley. Toshiba progress towards sensory control in real time. The Industrial Robot 14-1, pages 50-52, 1983.
208
+
209
+ [31] H. Hashimoto, F. Ozaki, and K. Osuka. Development of ping-pong robot system using 7 degree of freedom direct drive robots. In Industrial Applications of Robotics and Machine Vision, 1987.
210
+
211
+ [32] K. Muelling, J. Kober, and J. Peters. A biomimetic approach to robot table tennis. Adaptive Behavior, 2010.
212
+
213
+ [33] A. Kyohei, N. Masamune, and Y. Satoshi. The ping pong robot to return a ball precisely. 2020.
214
+
215
+ [34] F. Miyazaki, M. Takeuchi, M. Matsushima, T. Kusano, and T. Hashimoto. Realization of the table tennis task based on virtual targets. ICRA, 2002.
216
+
217
+ [35] F. Miyazaki et al. Learning to dynamically manipulate: A table tennis robot controls a ball and rallies with a human being. In Advances in Robot Control, 2006.
218
+
219
+ [36] R. Anderson. A Robot Ping-Pong Player: Experiments in Real-Time Intelligent Control. MIT Press, 1988.
220
+
221
+ [37] K. Muelling et al. Simulating human table tennis with a biomimetic robot setup. In Simulation of Adaptive Behavior, 2010.
222
+
223
+ [38] Y. Zhu, Y. Zhao, L. Jin, J. Wu, and R. Xiong. Towards high level skill learning: Learn to return table tennis ball using monte-carlo based policy gradient method. IEEE International Conference on Real-time Computing and Robotics, 2018.
224
+
225
+ [39] Y. Huang, B. Schölkopf, and J. Peters. Learning optimal striking points for a ping-pong playing robot. IROS, 2015.
226
+
227
+ [40] Y. Sun, R. Xiong, Q. Zhu, J. Wu, and J. Chu. Balance motion generation for a humanoid robot playing table tennis. IEEE-RAS Humanoids, 2011.
228
+
229
+ [41] R. Mahjourian, N. Jaitly, N. Lazic, S. Levine, and R. Miikkulainen. Hierarchical policy design for sample-efficient learning of robot table tennis through self-play. arXiv:1811.12927, 2018.
230
+
231
+ [42] M. Matsushima, T. Hashimoto, and F. Miyazaki. Learning to the robot table tennis task-ball control and rally with a human. IEEE International Conference on Systems, Man and Cybernetics, 2003.
232
+
233
+ [43] M. Matsushima, T. Hashimoto, M. Takeuchi, and F. Miyazaki. A learning approach to robotic table tennis. IEEE Transactions on Robotics, 2005.
234
+
235
+ [44] K. Muelling, J. Kober, and J. Peters. Learning table tennis with a mixture of motor primitives. IEEE-RAS Humanoids, 2010.
236
+
237
+ [45] K. Muelling, J. Kober, O. Kroemer, and J. Peters. Learning to select and generalize striking movements in robot table tennis. The International Journal of Robotics Research, 2012.
238
+
239
+ [46] Y. Huang, D. Buchler, O. Koç, B. Schölkopf, and J. Peters. Jointly learning trajectory generation and hitting point prediction in robot table tennis. IEEE-RAS Humanoids, 2016.
240
+
241
+ [47] O. Koç, G. Maeda, and J. Peters. Online optimal trajectory generation for robot table tennis. Robotics & Autonomous Systems, 2018.
242
+
243
+ [48] J. Tebbe, Y. Gao, M. Sastre-Rienietz, and A. Zell. A table tennis robot system using an industrial kuka robot arm. GCPR, 2018.
244
+
245
+ [49] Y. Gao, J. Tebbe, J. Krismer, and A. Zell. Markerless racket pose detection and stroke classification based on stereo vision for table tennis robots. IEEE Robotic Computing, 2019.
246
+
247
+ [50] J. Tebbe, L. Krauch, Y. Gao, and A. Zell. Sample-efficient reinforcement learning in robotic table tennis. ICRA, 2021.
248
+
249
+ [51] W. Gao, L. Graesser, K. Choromanski, X. Song, N. Lazic, P. Sanketi, V. Sindhwani, and N. Jaitly. Robotic table tennis with model-free reinforcement learning. IROS, 2020.
250
+
251
+ [52] L. Chen, R. R. Paleja, and M. C. Gombolay. Learning from suboptimal demonstration via self-supervised reward regression. CoRL, 2020.
252
+
253
+ [53] L. Chen, R. R. Paleja, M. Ghuy, and M. C. Gombolay. Joint goal and strategy inference across heterogeneous demonstrators via reward network distillation. CoRR, abs/2001.00503, 2020.
254
+
255
+ [54] Z. Yu, Y. Liu, Q. Huang, X. Chen, W. Zhang, J. Li, G. Ma, L. Meng, T. Li, and W. Zhang. Design of a humanoid ping-pong player robot with redundant joints. 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), pages 911-916, 2013.
256
+
257
+ [55] A. Aly, S. Griffiths, and F. Stramandinoli. Metrics and benchmarks in human-robot interaction: Recent advances in cognitive robotics. Cognitive Systems Research, 43:313-323, 2017.
258
+
259
+ [56] M. Huber, H. Radrich, C. Wendt, M. Rickert, A. Knoll, T. Brandt, and S. Glasauer. Evaluation of a novel biologically inspired trajectory generator in human-robot interaction. In RO-MAN 2009 - The 18th IEEE International Symposium on Robot and Human Interactive Communication, pages 639-644. IEEE, 2009.
260
+
261
+ [57] A. Silva, K. Metcalf, N. Apostoloff, and B.-J. Theobald. Fedembed: Personalized private federated learning, 2022. URL https://arxiv.org/abs/2202.09472.
262
+
263
+ [58] R. Paleja, M. Ghuy, N. Ranawaka Arachchige, R. Jensen, and M. Gombolay. The utility of explainable ai in ad hoc human-machine teaming. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 610-623. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/05d74c48b5b30514d8e9bd60320fc8f6-Paper.pdf.
264
+
265
+ [59] M. L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
266
+
267
+ [60] K. Choromanski, M. Rowland, V. Sindhwani, R. E. Turner, and A. Weller. Structured Evolution with Compact Architectures for Scalable Policy Optimization. In Proceedings of the 35th International Conference on Machine Learning, pages 969-977. PMLR, 2018.
268
+
269
+ [61] D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, and J. Schmidhuber. Natural evolution strategies, 2011.
270
+
271
+ [62] T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv:1703.03864, 2017.
272
+
273
+ [63] Y. Nesterov and V. Spokoiny. Random gradient-free minimization of convex functions. FoCM, 2017.
274
+
275
+ [64] H. Mania, A. Guy, and B. Recht. Simple random search provides a competitive approach to reinforcement learning. NeurIPS, 2018.
276
+
277
+ [65] A. Nagabandi et al. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In ICRA, 2018.
278
+
279
+ [66] J. A. Nelder and R. Mead. A simplex method for function minimization. Computer Journal, 7: 308-313, 1965.
280
+
281
+ [67] E. Schubert, J. Sander, M. Ester, H.-P. Kriegel, and X. Xu. Dbscan revisited, revisited: Why and how you should (still) use dbscan. ACM Transactions on Database Systems, (3), 2017.
282
+
283
+ [68] P. Blank, B. H. Groh, and B. M. Eskofier. Ball speed and spin estimation in table tennis using a racket-mounted inertial sensor. In S. C. Lee, L. Takayama, K. N. Truong, J. Healey, and T. Ploetz, editors, ISWC, pages 2-9. ACM, 2017. ISBN 978-1-4503-5188-1. URL http://dblp.uni-trier.de/db/conf/iswc/iswc2017.html#BlankGE17.
284
+
285
+ [69] Y. Gao, J. Tebbe, and A. Zell. Optimal stroke learning with policy gradient approach for robotic table tennis. CoRR, abs/2109.03100, 2021. URL https://arxiv.org/abs/2109.03100.
286
+
287
+ [70] E. Coumans and Y. Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016-2021.
288
+
289
+ [71] ABB application manual - externally guided motion. Västerås, 2020.
290
+
291
+ [72] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008. URL http://www.jmlr.org/papers/v9/vandermaaten08a.html.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/4nt6RUGmILw/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,145 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § I-SIM2REAL: REINFORCEMENT LEARNING OF ROBOTIC POLICIES IN TIGHT HUMAN-ROBOT INTERACTION LOOPS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Sim-to-real transfer is a powerful paradigm for robotic reinforcement learning. The ability to train policies in simulation enables safe exploration and large-scale data collection quickly at low cost. However, prior works in sim-to-real transfer of robotic policies typically do not involve any human-robot interaction because accurately simulating human behavior is an open problem. In this work, our goal is to leverage the power of simulation to train robotic policies that are proficient at interacting with humans upon deployment. But there is a chicken and egg problem - how to gather examples of a human interacting with a physical robot so as to model human behavior in simulation without already having a robot that is able to interact with a human? Our proposed method, Iterative-Sim-to-Real (i-S2R), attempts to address this. i-S2R bootstraps from a simple model of human behavior and alternates between training in simulation and deploying in the real world. In each iteration, both the human behavior model and the policy are refined. We evaluate our method on a real world robotic table tennis setting, where the objective for the robot is to play cooperatively with a human player for as long as possible. Table tennis is a high-speed, dynamic task that requires the two players to react quickly to each other's moves, making for a challenging test bed for research on human-robot interaction. We present results on an industrial robotic arm that is able to cooperatively play table tennis with human players, achieving rallies of 22 successive hits on average and 150 at best. Further, for 80% of players, rally lengths are ${70}\%$ to ${175}\%$ longer compared to the sim-to-real (S2R) baseline.
12
+
13
+ Keywords: sim-to-real, human-robot interaction, reinforcement learning
14
+
15
+ § 23 1 INTRODUCTION
16
+
17
+ Sim-to-real transfer has emerged as a dominant paradigm for learning-based robotics. Real world training is often slow, cost-prohibitive, and poses safety-related challenges, so training in simulation is an attractive alternative and has been explored for a number of real world tasks, including object manipulation $\left\lbrack {1,2,3,4}\right\rbrack$ , legged robot locomotion $\left\lbrack {5,6}\right\rbrack$ , and aerial navigation $\left\lbrack {7,8}\right\rbrack$ . However, one element that is missing in this prior work is that the policies are not trained to be proficient at interacting with humans upon deployment. The utility of sim-to-real learning can be greatly increased if we extend it to settings where the trained policies need to interact with humans in a close, tight-loop fashion upon deployment. One of the major promises of learning-based robotics is to deploy robots in human-occupied settings, since non-learning robots already work well in deterministic, non-human occupied settings, such as factory floors. However, simulating human behavior is non-trivial (and indeed, one of the primary goals of artificial intelligence research), making it a major bottleneck in sim-to-real research for tasks involving human-robot interaction.
18
+
19
+ One approach to simulating human behavior is imitation learning: given a few examples of human behavior, we can use techniques such as behavior cloning [9, 10], or inverse reinforcement learning [11, 12] to distill that behavior into a policy, and then use these policies to generate human behavior in simulation. However, this approach presents a chicken and egg problem: in order to obtain useful examples of human behavior (in the context of human-robot interaction), we need a robot policy that already knows how to interact with humans in the real world, but we cannot
20
+
21
+ learn such a policy without the ability to simulate human behaviors in the first place. The primary contribution of this paper is a practical solution to this problem.
22
+
23
+ Our proposed method involves learning a coarse model of human behavior from initial data collected in the real world to bootstrap reinforcement learning of robotic policies in simulation. Deploying this learned policy in the real world now allows us to collect data in which the human subjects meaningfully interact with the robot. We then use this real world experience to improve our human behavior model, and continue training the robot policy in simulation under this updated model. We repeat this iterative process until a desired level of performance is achieved.
24
+
25
+ We present results on a task involving a robot playing table tennis with non-professional human players (see Figure 1). Table tennis is a high-speed, dynamic task that and requires close, tight-loop interactions between the two players (in this case, a human and a robot). We build an initial model of the human player's ball trajectories without a robot present and iteratively refine the robot and player models as they play together, ultimately resulting in a robot policy that can hold rallies of 22 successive hits on average and 150 at best.
26
+
27
+ < g r a p h i c s >
28
+
29
+ Figure 1: Robot setup An ABB IRB 120T 6-DOF robotic arm is mounted to a two-dimensional Festo linear actuator, creating an 8-DOF system.
30
+
31
+ While we demonstrate our approach on table tennis, we believe it is applicable more broadly, and can be applied to a number of tasks. For example, if the task involved a robot navigating through a busy hallway, we would first model the motion of human subjects alone (using motion capture devices, or a computer vision pipeline), and then train a policy in simulation with simulated human paths (so as to avoid collisions). Once this learned policy is deployed in the real world, the humans would likely alter their behavior in response to the robot, and capturing this data would allow us to create a more accurate human behavior model, which would further help us train a better policy. The process can be repeated until both human and robot behaviors converge, which would likely result in some co-adapted equilibrium point for the human and robot.
32
+
33
+ In summary, the primary contributions of this paper are: (a) a framework for training robotic policies in simulation that would need to interact with human subjects upon deployment, (b) a real world instantiation of this framework on a high-speed, dynamic task requiring tight, closed-loop interactions between humans and robots, (c) a detailed assessment of how our method, which we call Iterative-Sim-to-Real (i-S2R), compares with a baseline sim-to-real approach in the domain of cooperative robotic table tennis, and (d) the first robotic table tennis policy trained to control robot joints using reinforcement learning that can handle a wide variety of balls and can rally consistently with nonprofessional humans. To see videos of our system in action, please see the supplementary materials and this website https://sites.google.com/view/i-s2r.
34
+
35
+ § 2 RELATED WORK
36
+
37
+ Sim-to-real Learning for Robotics Reinforcement Learning (RL) is a powerful paradigm for learning increasingly capable and robust robot controllers $\left\lbrack {{13},{14},{15}}\right\rbrack$ . However, learning controllers from scratch on a physical robot is often prohibitively time consuming due to the large number of samples required to learn competent policies and potentially unsafe due to the random exploration inherent in RL methods [16, 17]. Training policies in simulation and transferring them to a physical robot, known as sim-to-real transfer (S2R), is therefore appealing.
38
+
39
+ Whilst it is both fast and safe to train agents from scratch in simulation, S2R presents its own challenge - persistent differences between simulated and real world environments that are extremely difficult to overcome [17, 18]. No single technique has been found to bridge the gap by itself. Instead a combination of multiple techniques are typically required for successful transfer. These include system identification $\left\lbrack {{13},{19},{20},{21},{22}}\right\rbrack$ which may involve iterating with a physical robot in the loop [2, 23], building hybrid simulators with learned models [5, 13, 22], dynamics randomization [1, $2,5,6,{13},{14},{15}\rbrack$ , simulated latency [15,22], and more complex network architectures [13]. We use (1) system ID with a physical robot in the loop, (2) dynamics randomization, (3) simulated latency, and (4) more complex networks. Similarly to Lee et al. [13], we use a 1D CNN to represent control 5 policies. Yet a sim-to-real gap persists. Continuing to train in the real world [24, 25, 26] (known as fine-tuning) is an effective way to bridge the remaining gap since the policy can adapt to changes in the environment. We also utilize fine-tuning in this work, but, unlike most past work, our learned policy is expected to interact cooperatively with a real human during this fine-tuning phase.
40
+
41
+ The closest sim-to-real approaches in prior work are Chebotar et al. [2] and Farchy et al. [23] since they update simulation parameters based on multiple iterations of real world data collection interleaved with simulated training. However, both of these prior works focus on using real world interaction data to learn improved physical parameters for the simulator, whereas our method focuses on learning better human behavior models. Unlike these prior work, our learned policies are proficient at interacting with humans upon deployment in the real world.
42
+
43
+ Reinforcement Learning for Table Tennis Robotic table tennis is a challenging, dynamic task [27] that has been a test bed for robotics research since the 1980s [28, 29, 30, 31, 32]. The current exemplar is the Omron robot [33]. Until recently, most methods tackled the problem by identifying a virtual hitting point for the racket $\left\lbrack {{34},{35},{36},{37},{38},{39},{40},{41}}\right\rbrack$ . These methods depend on being able to predict the ball state at time $t$ either from a ball dynamics model which may be parameterized $\left\lbrack {{34},{35},{42},{43}}\right\rbrack$ or by learning to predict it $\left\lbrack {{32},{37},{38}}\right\rbrack$ . This results in a target paddle state or states and various methods are used to generate robot joint trajectories given these targets $\left\lbrack {{32},{34},{35},{42},{43},{44},{45},{46},{47},{48},{49}}\right\rbrack$ . More recently, Tebbe et al. [50] learned to predict the paddle target using RL.
44
+
45
+ An alternative line of research seeks to do away with hitting points and ball prediction models, instead focusing on high frequency control of a robot's joints using either RL [27, 38, 51] or learning from demonstrations [45, 52, 53]. Of these, Büchler et al. [27] is the most similar, training RL policies to control robot joints from scratch at high frequencies given ball and robot states as policy inputs. However Büchler et al. [27] restricts the task to playing with a ball thrower on a single setting, whereas we focus on the harder problem of cooperative play with different humans.
46
+
47
+ Most prior work simplifies the problem by focusing on play with a ball thrower. Only a few [45, 48, ${50},{54}\rbrack$ focus on cooperative rallying with a human. Of these, Tebbe et al. [50], is the most similar, evaluating policies on various styles of human-robot cooperative play. However, Tebbe et al. [50] simplify the environment to a single-step bandit and the policy learns to predict the paddle state given the ball state at a pre-determined hit time $t$ . In contrast, we learn closed-loop policies that operate at a high frequency(75Hz), removing the need for learned policy to accurately predict where the ball will be in the future, increasing the robustness of the system, and enabling more dynamic play.
48
+
49
+ Human Robot Interaction Although not a typical HRI benchmark, cooperative robotic table tennis exhibits many of the features studied in the field: a human and robot working together, complex interactions between the two, inferring actions based on non-explicit cues, and so on. A major challenge in HRI is effectively modeling the complexities of human behavior in simulation [55] in order to learn without requiring an actual human. We employ several common techniques from HRI to learn in simulation such as simplifying the human model [56], specialized models for specific players [57], and refining our model based on real world interactions. Finally we note that like us, Paleja et al. [58] found policy performance varied depending on the skill of the human player.
50
+
51
+ § 3 PRELIMINARIES
52
+
53
+ Problem Setting We consider the problem of a cooperative human-robot table tennis as a single-agent sequential decision making problem in which the human is a part of the environment. We formalize the problem as a Markov Decision Process (MDP) [59] consisting of a of a 4-tuple $(\mathcal{S}$ , $\mathcal{A},\mathcal{R},p)$ , whose elements are the state space $\mathcal{S}$ , action space $\mathcal{A}$ , reward function $\mathcal{R} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ , and transition dynamics $p : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ . An episode $\left( {{s}_{0},{a}_{0},{r}_{0},\ldots ,{s}_{n},{a}_{n},{r}_{n}}\right)$ is a finite sequence of $s \in \mathcal{S},a \in \mathcal{A},r \in \mathcal{R}$ elements, beginning with a start state ${s}_{0}$ and ending when the environment terminates. We define a parameterized policy ${\pi }_{\theta } : \mathcal{S} \rightarrow \mathcal{A}$ with parameters $\theta$ . The objective is to maximize $\mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{N}r\left( {{s}_{t},{\pi }_{\theta }\left( {s}_{t}\right) }\right) }\right\rbrack$ , the expected cumulative reward obtained in an episode under ${\pi }_{\theta }$ .
54
+
55
+ We make two simplifications to our problem. First, we focus on rallies starting with a hit instead of a table tennis serve to make the data more uniform. Second, an episode consists of a single ball throw and return. Policies are therefore rewarded based on their ability to return balls to the opposite side of the table. This reward structure encourages longer rally length, as an agent that can return any ball can also rally indefinitely provided the simulated single shots overlap with the real rally shots.
56
+
57
+ < g r a p h i c s >
58
+
59
+ Figure 2: Iterative-Sim-to-Real. left We start with a coarse bootstrap model of human behavior (shown in yellow), and use it to train an initial robot policy in simulation. We then fine-tune this policy in the real world against a human player, and the human interaction data collected during this period is used to update the human behavior model used in simulation. We then take the fine-tuned policy back to simulation to further train it against the improved human behavior model, and this process is repeated until robot and human behaviors converge. right Specific i-S2R details used in this work. $x$ -axis represents the training iterations in sim, $y$ -axis represents the fine-tuning iterations in real with human-in-the-loop. Model names are in italics
60
+
61
+ Evolutionary Strategies Our proposed approach can be used with any RL algorithm, but we optimize our policies using evolutionary strategies (ES) [60, 61, 62, 63, 64], which have been shown to be effective for solving MDPs [62, 64]. The main idea behind ES is to maximize the Gaussian smoothing of the RL objective described above. Let $F\left( \theta \right)$ be the RL objective, where $\theta$ are the policy parameters; the ES objective is then given by:
62
+
63
+ $$
64
+ {F}_{\sigma }\left( \theta \right) = {\mathbb{E}}_{\delta \sim \mathcal{N}\left( {0,{\mathbf{I}}_{d}}\right) }\left\lbrack {F\left( {\theta + {\sigma \delta }}\right) }\right\rbrack , \tag{1}
65
+ $$
66
+
67
+ where $\sigma > 0$ controls the precision of the smoothing, and $\delta$ is a random normal vector with the same dimension as the policy parameters $\theta$ . We apply common ES approaches such as state normalization [62, 65], reward normalization [64], and perturbation filtering [62]. We also repeat and average rollouts with the same parameters to reduce variance. See Appendix A for details.
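
Below is a minimal NumPy sketch of one ES update on the smoothed objective in Eq. (1), using antithetic perturbations, reward normalization, and top-scoring perturbation filtering in the spirit of [62, 64]. It is illustrative only: `rollout_return` is a hypothetical stand-in for an episodic rollout (e.g. already averaged over repeats), and the hyperparameters are not the values used in this work (see Appendix A for the actual details).

```python
import numpy as np

def es_update(theta, rollout_return, sigma=0.05, lr=0.01, n_pairs=32, top_frac=0.5, rng=None):
    """One ES step on F_sigma(theta) = E[F(theta + sigma * delta)].

    rollout_return(params) -> float returns an episodic return estimate.
    Uses antithetic sampling and keeps only the top-scoring perturbation pairs.
    """
    rng = rng or np.random.default_rng()
    deltas = rng.standard_normal((n_pairs, theta.size))
    r_plus = np.array([rollout_return(theta + sigma * d) for d in deltas])
    r_minus = np.array([rollout_return(theta - sigma * d) for d in deltas])

    # Perturbation filtering: keep the pairs whose best return is largest.
    scores = np.maximum(r_plus, r_minus)
    keep = np.argsort(scores)[-int(top_frac * n_pairs):]

    # Reward normalization, then the standard antithetic gradient estimate.
    diffs = (r_plus[keep] - r_minus[keep])
    diffs = diffs / (diffs.std() + 1e-8)
    grad = (deltas[keep].T @ diffs) / (2 * sigma * len(keep))
    return theta + lr * grad
```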
68
+
69
+ § 4 METHOD
70
+
71
+ i-S2R consists of two core components: (1) an iterative procedure for progressively updating and learning from a human behavior model - the human ball distribution in this setting - and (2) a method for modeling human behavior in simulation given a dataset of human play gathered in the real world (see Figure 2 for an overview). We first describe our iterative training procedure, and then discuss how we model human ball distributions.
72
+
73
+ Iterative training procedure An overview of the method can be seen in Figure 2. First we gather an initial dataset, ${D}_{0}$ , from player $P$ hitting table tennis balls across the table without a robot present. From ${D}_{0}$ , we build our first human behavior model ${M}_{0}$ that defines a ball distribution (see below). A robot policy is trained in simulation to return balls sampled from ${M}_{0}$ . Once the policy has converged, we transfer the parameters, ${\theta }_{0S}$ , to a real robotic system. The model is fine-tuned whilst player $P$ plays cooperatively (i.e. trying to maximize rally length) with the robot for a fixed number of parameter updates to produce ${\theta }_{0R}$ . All of the human hits during this fine-tuning phase are added to ${D}_{0}$ to form ${D}_{1}$ , which is used to define ${M}_{1}$ . The policy weights, ${\theta }_{0R}$ , are then transferred back to simulation and training is continued with the new distribution ${M}_{1}$ . After training in sim, the policy weights ${\theta }_{1S}$ are transferred back to the real world. The fine-tuning process is repeated to produce the next set of policy parameters ${\theta }_{1R}$ , dataset ${D}_{2}$ , and human model ${M}_{2}$ . This process can be repeated as many times as needed. One useful method for knowing when to stop is to check the change in human model in each iteration. See Appendix B for more details.
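
The loop can be summarized with the following sketch. The callables `fit_model`, `train_in_sim`, and `finetune_in_real` are hypothetical placeholders for the components described above (ball-distribution fitting, simulated RL training, and human-in-the-loop fine-tuning on the robot), and the stopping tolerance is illustrative.

```python
import numpy as np

def model_change(m_old, m_new):
    """Largest absolute change in the distribution parameters (one possible stopping metric)."""
    return float(np.max(np.abs(np.asarray(m_new) - np.asarray(m_old))))

def i_s2r(initial_hits, fit_model, train_in_sim, finetune_in_real, n_iterations=3, tol=0.05):
    """Sketch of the iterative procedure in Figure 2; all helpers are placeholders."""
    D = list(initial_hits)                 # D_0: balls hit by player P with no robot present
    M = fit_model(D)                       # M_0: coarse human behavior model
    theta = train_in_sim(None, M)          # theta_0S: initial policy trained in simulation
    for _ in range(n_iterations):
        theta, new_hits = finetune_in_real(theta)   # theta_iR and the hits collected while fine-tuning
        D += new_hits                                # D_{i+1}
        M_new = fit_model(D)                         # M_{i+1}
        if model_change(M, M_new) < tol:             # stop once the human model stops changing
            break
        M = M_new
        theta = train_in_sim(theta, M)               # continue training in sim against the new model
    return theta
```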
74
+
75
+ Modeling human ball distributions One of our primary goals is to simulate human player behaviors from a set of real world ball trajectories that have been subjected to air drag, gravity, and spin. Due to perception challenges in the real world, we do not explicitly model spin. The input to this procedure is a dataset of ball trajectories, where each trajectory consists of a sequence of ball positions. The output is a uniform ball distribution defined by 16 numbers: the min and max initial ball position (6), velocity (6), and $x$ and $y$ ball landing locations on robot side (4).
76
+
77
+ The ball distribution is derived from the dataset in two stages. The first step is to estimate a ball's initial position and velocity for each trajectory. We do this by selecting the free-flight part of the trajectory (before the first bounce) and minimizing the Euclidean distance between the simulated and real trajectory using the Nelder-Mead method [66]. We use the model ${\ddot{x}}_{t} = g - {K}_{d}\left\| {\dot{x}}_{t}\right\| {\dot{x}}_{t}$ , ${x}_{t + 1} = {x}_{t} + {\Delta t}\left( {{\dot{x}}_{t} + \frac{{\Delta t}{\ddot{x}}_{t}}{2}}\right)$ , ${\dot{x}}_{t + 1} = {\dot{x}}_{t} + {\Delta t}{\ddot{x}}_{t}$ to simulate a trajectory, where (1) ${x}_{t}$ , ${\dot{x}}_{t}$ , and ${\ddot{x}}_{t}$ denote the position, velocity, and acceleration of the ball at time $t$ , (2) $g = -{9.81}\mathrm{\;m}/{\mathrm{s}}^{2}\,{\left\lbrack 0,0,1\right\rbrack }^{T}$ is gravity, and (3) ${K}_{d} = {C}_{d}\rho \frac{A}{2m}$ , where $m = {0.0027}\mathrm{\;{kg}}$ is the ball's mass, $\rho = {1.29}\mathrm{\;{kg}}/{\mathrm{m}}^{3}$ is the air density, ${C}_{d} = {0.47}$ is the drag coefficient, and $A = {1.256} \times {10}^{-3}\mathrm{\;{m}}^{2}$ is the cross-sectional area of a standard table tennis ball.
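
A possible implementation of this fitting step is sketched below, assuming ball observations at roughly 75 Hz and using SciPy's Nelder-Mead optimizer; it is not the exact code used in this work.

```python
import numpy as np
from scipy.optimize import minimize

G = np.array([0.0, 0.0, -9.81])                  # gravity (m/s^2), z up
K_D = 0.47 * 1.29 * 1.256e-3 / (2 * 0.0027)      # K_d = C_d * rho * A / (2m) from the constants above

def simulate(p0, v0, n_steps, dt):
    """Integrate the drag model in the text: xdd = g - K_d * ||xd|| * xd."""
    p, v = np.array(p0, float), np.array(v0, float)
    traj = [p.copy()]
    for _ in range(n_steps - 1):
        a = G - K_D * np.linalg.norm(v) * v
        p = p + dt * (v + 0.5 * dt * a)
        v = v + dt * a
        traj.append(p.copy())
    return np.array(traj)

def fit_initial_state(observed, dt=1.0 / 75):
    """observed: (N, 3) pre-bounce ball positions; returns fitted initial position and velocity."""
    observed = np.asarray(observed, dtype=float)

    def cost(z):
        sim = simulate(z[:3], z[3:], len(observed), dt)
        return np.linalg.norm(sim - observed, axis=1).sum()

    # Crude initial guess: first observed point and a finite-difference velocity.
    z0 = np.concatenate([observed[0], (observed[1] - observed[0]) / dt])
    res = minimize(cost, z0, method="Nelder-Mead")
    return res.x[:3], res.x[3:]
```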
78
+
79
+ We remove outliers using DBSCAN [67] and take the minimum and maximum per dimension to define the ball distribution. We sample an initial position and velocity from this distribution and generate a ball trajectory in simulation subject to the drag force. Other parameters needed for the simulation, such as the coefficient of restitution and the friction between the ball and the table and between the ball and the robot paddle, were empirically estimated following [68, 69].
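
The sketch below illustrates how the 16-number uniform distribution could be assembled and sampled; the DBSCAN `eps` and `min_samples` values are placeholders, not the settings used here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def build_ball_distribution(samples, eps=0.5, min_samples=5):
    """samples: (N, 8) rows of [initial position (3), initial velocity (3), landing x, landing y].

    Drops DBSCAN outliers (label -1) and returns per-dimension (min, max) bounds,
    i.e. the 16 numbers defining the uniform ball distribution described above.
    """
    samples = np.asarray(samples, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(samples)
    inliers = samples[labels != -1]
    return inliers.min(axis=0), inliers.max(axis=0)

def sample_initial_ball_state(lo, hi, rng=None):
    """Draw one initial position and velocity (first 6 dimensions) uniformly from the fitted box."""
    rng = rng or np.random.default_rng()
    return rng.uniform(lo[:6], hi[:6])
```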
80
+
81
+ § 5 SYSTEM, SIMULATION, AND MDP DETAILS
82
+
83
+ Our real world robotic system (see Figure 1) is a combination of an ABB IRB 120T 6-DOF robotic arm mounted to a two-dimensional Festo linear actuator, creating an 8-DOF system, with a table tennis paddle mounted on the end-effector. The 3D ball position is estimated via a stereo pair of Ximea MQ013CG-ON cameras from which we process 2D detections, triangulate to 3D, and filter through a 3D tracker. See Appendix C for more details. We concatenate the ball position with the 8-DOF robot joint angles to form an 11-dimensional observation space. Along with the current observation, we pass the past seven observations (a state space of $8 \times {11}$ ) as an input to the policy. The policy controls the robot by outputting eight individual joint velocities at ${75}\mathrm{{Hz}}$ . Following Gao et al. [51] we use a 3-layer 1-D dilated gated convolutional neural network as our policy architecture. Details of the policy architecture can be found in Appendix D.
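
As an illustration, the 8 x 11 policy input described above could be maintained with a simple fixed-length buffer; the zero-initialization at episode start is an assumption.

```python
from collections import deque
import numpy as np

HISTORY = 8          # current observation plus the past seven
OBS_DIM = 11         # 3-D ball position + 8 joint angles

class ObservationStack:
    """Maintains the (HISTORY, OBS_DIM) input passed to the policy."""
    def __init__(self):
        self.buf = deque([np.zeros(OBS_DIM)] * HISTORY, maxlen=HISTORY)

    def push(self, ball_xyz, joint_angles):
        obs = np.concatenate([ball_xyz, joint_angles])   # 3 + 8 = 11 values
        assert obs.shape == (OBS_DIM,)
        self.buf.append(obs)
        return np.stack(self.buf)                         # oldest first, newest last
```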
84
+
85
+ Our simulation is built on the PyBullet [70] physics engine and replicates our real environment. We use PyBullet to model robot and contact dynamics whilst balls are modeled as described in Section 4. We add random uniform noise of $2 \times$ the diameter of a table tennis ball to the ball observation per timestep to aid transfer to a physical system. We also found it necessary to simulate sensor latency, otherwise sim-to-real transfer completely failed. Robot actions as well as ball and robot observation latencies are modeled as parameterized Gaussians based on measurements from the real system. Policies are rewarded for hitting balls and for returning balls in a cooperative manner. See Appendix F for more details.
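
A minimal sketch of these two sim-to-real aids follows, interpreting the noise bound as up to two ball diameters per axis and treating the latency mean and standard deviation as placeholders for the measured values.

```python
import numpy as np

BALL_DIAMETER = 0.04          # m, standard table tennis ball
rng = np.random.default_rng()

def noisy_ball_observation(ball_pos):
    """Add per-timestep uniform noise of up to 2x the ball diameter to the ball position."""
    noise = rng.uniform(-2 * BALL_DIAMETER, 2 * BALL_DIAMETER, size=3)
    return np.asarray(ball_pos) + noise

def sample_latency_steps(mean_s, std_s, dt=1.0 / 75):
    """Draw an observation/action latency (in control steps) from a Gaussian
    parameterized by real-system measurements; mean_s and std_s are placeholders."""
    latency = max(0.0, rng.normal(mean_s, std_s))
    return int(round(latency / dt))
```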
86
+
87
+ § 6 EXPERIMENTAL RESULTS
88
+
89
+ Here we aim to answer the following questions: (1) Does i-S2R improve over baseline sim-to-real with fine-tuning (which we refer to as S2R) in a human-robot interactive setting where the human behavior changes in response to the robot policy? (2) How many sim-to-real iterations does the human behavior model need to converge? (3) How much of i-S2R's performance can be attributed to (a) improving the human behavior model vs. (b) the additional training steps in simulation? and (4) Does i-S2R generalize better to new players compared with S2R?
90
+
91
+ Experimental setup To evaluate our method, we completed the procedure described in Section 4 for five different non-professional table tennis players, thus training five independent i-S2R policies. Each player also trained (1) a S2R baseline which was given the same amount of real world training time as the i-S2R policy and (2) a S2R-Oracle which was trained in simulation on the penultimate human behavior model obtained through i-S2R and fine-tuned in the real world for 35% of the i-S2R training budget. This is equivalent to the last round of fine-tuning for i-S2R. (See Figure 2 right). S2R-Oracle is intended to isolate the effect of the human behavior modeling on final performance, enabling us to better understand what aspects of the i-S2R process matter.
92
+
93
+ Each model was evaluated by (a) the model's trainer and (b) two other players. In each evaluation, 50 rallies (defined as a sequence of consecutive hits ending when one player fails to return the ball)
94
+
95
+ <graphics>
96
+
97
+ Figure 3: Aggregated results Boxplot details: The white circle is the mean, the horizontal line is the median, box bounds are the 25th and 75th percentiles. "out-of-sim" refers to models that are deployed on the real hardware with zero fine-tuning (see Figure 2). left When aggregated across all players, i-S2R rally length is higher than S2R by about $9\%$ . However, note that simple aggregation puts extra weight on higher skilled players that are able to hold a longer rally. center The normalized rally length distribution (see Appendix I for normalization details) shows a bigger improvement between i-S2R and S2R in terms of the mean, median and 25th and 75th percentiles. right The histogram of rally lengths for i-S2R and S2R (250 rallies per model) shows that a large fraction of the rallies for S2R are shorter (i.e. less than 5), while i-S2R achieves longer rallies more frequently.
98
+
99
+ <graphics>
100
+
101
+ Figure 4: Results by player skill. When broken down by player skill, we notice that i-S2R has a significantly longer rally length than S2R and is comparable to S2R-Oracle for beginner and intermediate players. The advanced player is an exception. Note that S2R-Oracle gets just 35% of i-S2R and S2R fine-tuning budget.
102
+
103
+ were played with the human always starting and the rally length calculated as the number of paddle touches for both the human and robot. While the human can be responsible for a rally ending, almost all ended with the robot failing to return the ball or returning it such that the human could not easily continue the rally. The model trainer also evaluated intermediate checkpoints (see Figure 2) using the same methodology to shed light on the training dynamics. To ensure fair evaluation, all models were tested in random order and the identity of the model was kept hidden from the evaluator ("blind eval"). Further details can be found in Appendix G.
104
+
105
+ Due to the time needed to train and evaluate i-S2R, S2R, and S2R-Oracle (roughly 20 hours per person) we note that 4 of the 5 players are authors on this paper. The non-author player's results appear consistent with our overall findings (see Appendix J for details).
106
+
107
+ (1) Does i-S2R improve over S2R in a human-robot interactive setting? Figure 3 presents rally length distributions aggregated across all players whilst Figure 4 splits the data by skill. Players are grouped into beginner (40% of players), intermediate (40% of players), and advanced (20% of players). The non-author player was classified as beginner. Please see Appendix H for skill level definitions. When aggregated over all players, we see that i-S2R is able to hold longer rallies (i.e. rallies that are longer than length 5) at a much higher rate than S2R, as shown in Figure 3. When the players are split by skill level, i-S2R significantly outperforms S2R for both beginner and intermediate players (80% of the players). The improvement differs between the two groups, with i-S2R yielding a $\approx {70}\%$ and $\approx {175}\%$ improvement for beginner and intermediate players respectively.
108
+
109
+ The policy trained by the advanced player has a different trend. Here, S2R dramatically outperforms i-S2R. We hypothesize that a good out-of-sim model (after the 1st round of sim training) plays a large part in this difference (see Figure 5). However, due to the time-consuming nature of repeating experiments on the physical system it is difficult to fully explain why this is the case, especially since both the training methodology and the involvement of humans introduce a high degree of variance.
110
+
111
+ <graphics>
112
+
113
+ Figure 5: Policy performance at key checkpoints during training. For beginner players i-S2R performance converges after just two iterations (see fine-tune-65%). For intermediate players i-S2R takes three iterations to converge (see fine-tune-100%). "S2R-Oracle-sim-3" here is the same as "S2R-Oracle-out-of-sim" in Figure 4.
114
+
115
+ <graphics>
116
+
117
+ Figure 6: While the key distribution parameters change significantly from initial ball distribution (sim1) to that after 1st round of sim training (sim2), the change in the parameters between 1st and 2nd round of sim training is much less (sim2 vs. sim3).
118
+
119
+ (2) How many sim-to-real iterations does the human behavior model take to converge? For beginners we find that it only took two iterations for i-S2R to converge (see Figure 5). In the leftmost chart showing beginner policy data, i-S2R achieves comparable levels of performance at the end of the 2nd (fine-tune-65%) and final (fine-tune-100%) iterations. However, for intermediate-skill players this is not the case. The change in the human behavior model (ball distribution) from iteration to iteration shown in Figure 6 offers a clue. For beginner players, the distribution barely changes after the 2nd round, as evidenced by the difference between the left and right charts, whereas for intermediate players the distribution continues to change substantially from round 2 to 3 (specifically in $y$ and $z$ velocities). This is perhaps why we see the strongest performance of i-S2R after the 2nd iteration for beginners but after the 3rd iteration for intermediate players.
120
+
121
+ The advanced player's distribution hardly changes between the 2nd and 3rd round and the performance of i-S2R is comparable across both. However, this does not explain why we observed the best i-S2R performance at the end of the 1st round for this player. We hypothesize that a good out-of-sim model after the first round of training (see Figure 5) plays a large part in this. Future work could shed light on this by investigating how playing style affects the change in ball distribution at each iteration, and hence the sim-to-real gap, or by training for more iterations with advanced players.
122
+
123
+ (3) What is the impact of the human behavior model? For beginner and intermediate players, S2R-Oracle is in line with i-S2R performance. However, S2R-Oracle achieved this level of performance with just 35% of the real-world training time used by i-S2R and S2R. Therefore much of the benefit of i-S2R likely comes from improving the human behavior model from iteration to iteration. It also suggests that if we had access to the final human behavior model at the beginning of training, the iterative sim-to-real training would not be needed. We could simply fine-tune in real and achieve comparable performance with significantly less human training time. S2R-Oracle's strong performance also validates our motivation for this work, in which we hypothesized that the difficulty of defining a good human behavior model a priori for human-robot cooperative rallies was limiting performance.
124
+
125
+ This result suggests that i-S2R does not benefit from additional training iterations in simulation over and above the improvements to the human behavior model. The evaluations at earlier stages in training (shown in Figure 5) suggest the remaining sim-to-real gap could be responsible. Figure 5 shows that, in all cases, after both the second (sim-2) and third (sim-3) rounds of simulated training, rally length drops noticeably. Reducing the sim-to-real gap might improve i-S2R's performance due to better starting points for the last two rounds of fine-tuning.
126
+
127
+ (4) Does i-S2R offer any generalization benefits in this setting? We now evaluate the generalization capabilities of models trained with i-S2R, and how they compare against models trained using S2R. As shown in Figure 7, i-S2R significantly outperforms S2R when the models are cross evaluated by other players (with similar blind evaluations as earlier), including for the advanced player where S2R was best in self evaluation (see Appendix J for details by player). This observation holds whether we look at absolute or normalized rally length (see Appendix I for normalization methodology details). Performance with other players is lower for all models; however, i-S2R maintains around ${70}\%$ of performance on average compared to ${30}\%$ for S2R. We hypothesize that the broader training distribution obtained by iterating between simulation and reality leads to policies that can deal with a wider range of ball throws, leading to better generalization to new players. Our confidence in this hypothesis is strengthened by the fact that both i-S2R and S2R-Oracle significantly outperform S2R under this setting.
128
+
129
+ <graphics>
130
+
131
+ Figure 7: Cross-evaluations mean rally lengths (with ${95}\%$ CI) aggregated across all players. i-S2R generalizes better to new players compared to S2R.
132
+
133
+ § 7 LIMITATIONS
134
+
135
+ Having a human in the loop poses numerous challenges to robotic reinforcement learning. It slows down the overall learning process to accommodate human participants, and limits the scale at which one can experiment. As one example, while we tested our method on five subjects, time limitations prevented us from training with multiple random seeds for each subject. There is significant variation in how people interact with robots (or sometimes even the same person over time), which introduces extra variance into our experiments. In our experiments, the trends we saw for one particular subject were substantially different from all other subjects, and we could not come to a clear explanation of why that was the case.
136
+
137
+ It is possible for an expert human player to get long rallies by keeping the ball in a very narrow distribution without really improving the inherent capability of the agent to play beyond those balls. In our studies, since we used non-professional players, this was not an issue. However, for future work in cooperative human-robot tasks, it would be interesting to explore ways to disentangle the skill level of the robot from the human participant.
138
+
139
+ Another limitation arising from training a policy with a human in the loop is the possibility that some performance improvements are attributable to human learning and not policy learning. We did our best to mitigate this by asking players to evaluate all models "blind" (i.e. the player is unaware of what model they are evaluating) and at the end of training, after which the majority of human learning was likely to have occurred. Consequently, we think that differences between models reflect differences in policy capability rather than human learning.
140
+
141
+ Finally, we represent humans in simulation in a simple way - by capturing all initial position and velocity ranges during their play - and then we sample each ball in simulation uniformly and independently. This ignores the probability distribution of balls within those ranges and also results in a loss of correlation between subsequent balls in a rally. This could be addressed by developing a more sophisticated ball model that takes these factors into account.
142
+
143
+ § 8 CONCLUSION
144
+
145
+ We present i-S2R to learn RL policies that are able to interact with humans by iteratively training in simulation and fine-tuning in the real world with humans in the loop. The approach starts with a coarse model of human behavior and refines it over a series of fine-tuning iterations. The effectiveness of this method is demonstrated in the context of a table tennis rallying task. Extensive "blind" experiments shed light on various aspects of the method and compare it against a baseline where we train and fine-tune in real only once (S2R). We show that i-S2R outperforms S2R in aggregate, and the difference in performance is particularly significant for beginner and intermediate players (4/5). Moreover, i-S2R generalizes much better than S2R to other players.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/52c5e73SlS2/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,223 @@
1
+ # Walk These Ways: Gait-conditioned Policies Yield Diversified Quadrupedal Agility
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We investigate the utility of structuring learned quadrupedal gaits with dynamically specified gait parameters. We present a fast and robust locomotion controller that can trot, pronk, pace, and bound at variable frequency, posture, and speed. Modulating these parameters at runtime enables a human operator or high-level planner to execute varied sequences of agile behavior useful for downstream tasks. A single policy realizes crouching, hopping, high-speed running, agile leaps, and rhythmic dance. Our analyses suggest new perspectives on the role of gait in learned locomotion. Video is available at https://sites.google.com/view/gait-conditioned-rl/.
12
+
13
+ Keywords: Locomotion, Reinforcement Learning, Task Specification
14
+
15
+ ![01963f74-145a-7a0d-aae6-abefa8297f97_0_340_1081_1118_191_0.jpg](images/01963f74-145a-7a0d-aae6-abefa8297f97_0_340_1081_1118_191_0.jpg)
16
+
17
+ Figure 1: Diverse gaits enable a human to pilot a quadruped expressively and respond to conditions that were not anticipated at training time. A single policy realizes crouching, hopping, high-speed running, agile leaps, and rhythmic dance. The controller is demonstrated on uneven, slippery, and granular terrain and can traverse down stairs, but not up.
18
+
19
+ ## 1 Introduction
20
+
21
+ Much work on learning quadrupedal locomotion targets a static locomotion style objective, such as robustness to disturbances $\left\lbrack {1,2,3}\right\rbrack$ , energy efficiency $\left\lbrack 4\right\rbrack$ , or similarity to a particular reference motion [5, 6, 7]. While successful, these approaches offer the user meager control over the robot's locomotion style during deployment. In practical deployments, the end user may prefer to have a choice among a large number of locomotion styles in the hope that some will suit their needs at any given moment. The user's needs might range from interaction with unanticipated environments to the generation of stylish motions for entertainment.
22
+
23
+ One way to place more control in the hands of the user is to provide a set of flexible and intuitive motion primitives that they can compose to produce the locomotion behavior of their preferred style or which suits their downstream task. To this end, we implement and evaluate a task specification language for learned quadrupedal locomotion which enables the online composition of motion sequences combining diverse gaits, postures, and speeds. Such language is common in the setting of analytical control, where it is often necessary to define constraints, objectives, and models $\left\lbrack {8,9,{10}}\right\rbrack$ . It is often avoided in reinforcement learning, where it is possible for motion styles to emerge from a reward function without direct specification $\left\lbrack {1,4,{11},{12}}\right\rbrack$ . We find that a single gait-conditioned policy can execute diverse and controllable motion styles while retaining a high degree of robustness against variations in robot properties and natural terrains. Our definition of gait parameters, which extends previous work by Siekmann et al. [13] from the bipedal setting, enables the composition of behaviors useful for multiple downstream tasks, including navigation through confined spaces, agile jumps longer than the robot's body length, and the execution of dynamic choreographed dance. No previously proposed reinforcement learning agent has been suitable to accomplish any of these tasks, much less to accomplish them all with a single policy.
24
+
25
+ ![01963f74-145a-7a0d-aae6-abefa8297f97_1_309_209_1176_303_0.jpg](images/01963f74-145a-7a0d-aae6-abefa8297f97_1_309_209_1176_303_0.jpg)
26
+
27
+ Figure 2: Transition between trotting, pronking, pacing, and bounding in place at alternating frequencies of $2\mathrm{{Hz}}$ and $4\mathrm{{Hz}}$ . Images show the robot achieving the contact phases of each gait, with stance feet highlighted in red. Black shading in the bottom plot reflects the timing reference variables ${\mathbf{t}}_{t}$ for each foot; colored bars report the contact states measured by foot sensors.
28
+
29
+ ## 2 Related Work
30
+
31
+ Most previous work on learning locomotion addresses the task of achieving a target body velocity [1, 2, 3, 11, 13, 14, 15, 12, 16]. In these works, the operator provides the target velocity as input to a control policy, and the policy coordinates leg motions to track that velocity as closely as possible. There are many ways a robot can move in order to achieve the same target velocity; for example, it may select different foot contact patterns or carry its body in different poses while progressing forward at the same speed. For this reason, we claim that the velocity tracking task is generally underspecified. In response, several forms of implicit or explicit specification have been used in the prior literature:
32
+
33
+ Auxiliary Rewards. Some prior works overcome underspecification by defining auxiliary rewards. For example, penalties on energy expenditure [4], body motion [17], or deviation from a nominal pose [16] may distinguish one locomotion gait as preferred over another. In theory, an objective such as energy minimization may cause substantially different gaits to be preferred at different speeds. In practice, all works that learn joint-space policies with auxiliary rewards have ended up learning relatively static contact patterns and body motion [3, 15, 12, 16]. Fu et al. [4] notably reported that different gaits emerge at different speeds when applying an auxiliary energy minimization reward, but they achieved this by training a separate policy at each speed and combining them through a manual distillation step.
34
+
35
+ Imitation Learning. Another line of work uses reference trajectories to specify robot motions $\left\lbrack {5,6,7,{18},{19},{20}}\right\rbrack$ . Some approaches use an imitation objective to reward states similar to the reference motion [5, 20, 21], while others use adversarial techniques to reward stylistically similar movements [6, 7]. Shao et al. [20] notably generated a library of reference trajectories for a quadrupedal robot from a set of foot phase parameters and trained a goal-conditioned policy to imitate them. However, using a library of detailed reference trajectories penalizes the agent for exploring behaviors that are difficult to procedurally generate. We would prefer to specify the gait in a less explicit way, to allow the policy to discover complex strategies such as dynamic body motion, slip recovery, and resistance to perturbation.
36
+
37
+ ![01963f74-145a-7a0d-aae6-abefa8297f97_2_447_202_1149_568_0.jpg](images/01963f74-145a-7a0d-aae6-abefa8297f97_2_447_202_1149_568_0.jpg)
38
+
39
+ Table 1: Reward terms for task, stability, and smoothness. Terms combined from [11, 16, 22].
40
+
41
+ Reward-based Task Specification. The prior works most closely related to ours explicitly include gait parameters in the task specification through additional reward terms, including tracking rewards for contact patterns [22] and target foot placements [23]. Our task reward is modeled on Siekmann et al. [22], which used gait-dependent reward terms to penalize foot motion during stance and contact force during swing phases. We extend [22] to a more diverse family of quadrupedal gaits and provide an expanded treatment in the setting of robust and high-speed locomotion.
42
+
43
+ ## 3 Method
44
+
45
+ ### 3.1 Gait Specification
46
+
47
+ We parameterize the common quadrupedal gaits by an 8-dimensional command vector,
48
+
49
+ $$
50
+ {\mathbf{c}}_{t} = \left\lbrack {{\mathbf{v}}_{x}^{\mathrm{{cmd}}},{\mathbf{v}}_{y}^{\mathrm{{cmd}}},{\mathbf{\omega }}_{z}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{1}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{2}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{3}^{\mathrm{{cmd}}},{\mathbf{f}}^{\mathrm{{cmd}}},{\mathbf{h}}_{z}^{\mathrm{{cmd}}}}\right\rbrack ,
51
+ $$
52
+
53
+ - ${\mathbf{v}}_{x}^{\text{cmd }},{\mathbf{v}}_{y}^{\text{cmd }},{\mathbf{\omega }}_{z}^{\text{cmd }}$ are the linear velocities in the body-frame x- and y- axes, and the angular velocity in the yaw axis.
54
+
55
+ - ${\mathbf{\theta }}^{\text{cmd }} = \left( {{\mathbf{\theta }}_{1}^{\text{cmd }},{\mathbf{\theta }}_{2}^{\text{cmd }},{\mathbf{\theta }}_{3}^{\text{cmd }}}\right)$ are the timing offsets between pairs of feet. ${\mathbf{\theta }}^{\text{cmd }} = \left( {0,0,0}\right)$ corresponds to a pronking pattern with the contact timings of all four feet synchronized. ${\mathbf{\theta }}^{\text{cmd }} = \left( {{0.5},0,0}\right)$ corresponds to a trotting pattern with diagonal pairs alternating contact. ${\mathbf{\theta }}^{\text{cmd }} = \left( {0,{0.5},0}\right)$ represents a pacing pattern with left and right pairs alternating contact. ${\mathbf{\theta }}^{\text{cmd }} = \left( {0,0,{0.5}}\right)$ represents a bounding pattern with front and rear pairs alternating contact. Our policy also learns to continuously interpolate between the major patterns, enabling variants such as galloping $\left( {{\mathbf{\theta }}^{\text{cmd }} = \left( {{0.25},0,0}\right) }\right)$ . Taken together, the parameters ${\mathbf{\theta }}^{\text{cmd }}$ can express all two-beat quadrupedal contact patterns. For a visual illustration, refer to Figure 2 or the accompanying video.
56
+
57
+ - ${\mathbf{f}}^{\text{cmd }}$ is the stepping frequency expressed in Hz. Commanding ${\mathbf{f}}^{\text{cmd }} = 3\mathrm{{Hz}}$ will result in each foot making contact three times per second.
58
+
59
+ - ${\mathbf{h}}_{z}^{\mathrm{{cmd}}}$ is the body height command.
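For illustration, the timing offsets listed above can be collected into named presets and combined with the remaining command entries; the velocity, frequency, and height defaults in this sketch are example values inside the training ranges of Table 2, not recommended settings.

```python
import numpy as np

# Timing offsets (theta_1, theta_2, theta_3) for the gaits described above.
GAIT_OFFSETS = {
    "pronk":  (0.0, 0.0, 0.0),
    "trot":   (0.5, 0.0, 0.0),
    "pace":   (0.0, 0.5, 0.0),
    "bound":  (0.0, 0.0, 0.5),
    "gallop": (0.25, 0.0, 0.0),   # interpolated gait mentioned in the text
}

def make_command(gait, vx=1.0, vy=0.0, wz=0.0, freq=3.0, body_height=0.30):
    """Assemble the 8-D command c_t = [vx, vy, wz, theta1, theta2, theta3, f, h_z]."""
    t1, t2, t3 = GAIT_OFFSETS[gait]
    return np.array([vx, vy, wz, t1, t2, t3, freq, body_height])

# Example: trot forward at 1 m/s with a 3 Hz step frequency.
command = make_command("trot")
```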
60
+
61
+ Reward function. All reward terms are given in Table 1. Task rewards are defined as functions of the command vector ${\mathbf{c}}_{t}$ for body velocity tracking, body pose tracking, and contact schedule tracking. To define the desired contact schedule, function ${C}_{\text{foot }}^{\mathrm{{cmd}}}\left( {{\mathbf{\theta }}^{\mathrm{{cmd}}}, t}\right)$ computes the desired contact state of each foot from the phase and timing variable, as described in [22], with details given in the appendix. Stability and smoothness rewards express auxiliary locomotion objectives that are desired across all gaits. As in [16], we force the total reward to be positive by computing it as ${r}_{\text{pos }}\exp \left( {{0.02}{r}_{\text{neg }}}\right)$ where ${r}_{\text{pos }}$ is the sum of all positive reward terms and ${r}_{\text{neg }}$ is the sum of all negative reward terms.
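A one-line sketch of this positivity trick, assuming each negative term is non-positive:

```python
import numpy as np

def total_reward(positive_terms, negative_terms):
    """r_total = r_pos * exp(0.02 * r_neg); with r_neg <= 0 the total stays positive."""
    r_pos = sum(positive_terms)
    r_neg = sum(negative_terms)   # each negative reward term is <= 0
    return r_pos * np.exp(0.02 * r_neg)
```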
62
+
63
+ <table><tr><td>Term</td><td>Minimum</td><td>Maximum</td><td>Units</td></tr><tr><td>Payload Mass</td><td>-1.0</td><td>3.0</td><td>$\mathrm{{kg}}$</td></tr><tr><td>Motor Strength</td><td>90</td><td>110</td><td>%</td></tr><tr><td>Joint Calibration</td><td>-0.02</td><td>0.02</td><td>rad</td></tr><tr><td>Ground Friction</td><td>0.40</td><td>1.00</td><td>-</td></tr><tr><td>Ground Restitution</td><td>0.00</td><td>1.00</td><td>-</td></tr><tr><td>Gravity Offset</td><td>-1.0</td><td>1.0</td><td>$\mathrm{m}/{\mathrm{s}}^{2}$</td></tr><tr><td>${\mathbf{v}}_{x}^{\text{cmd }}$</td><td>-</td><td>-</td><td>m/s</td></tr><tr><td>${\mathbf{v}}_{y}^{\text{cmd }}$</td><td>-0.6</td><td>0.6</td><td>m/s</td></tr><tr><td>${\omega }_{z}^{\mathrm{{cmd}}}$</td><td>-</td><td>-</td><td>rad/s</td></tr><tr><td>${f}^{\mathrm{{cmd}}}$</td><td>1.5</td><td>4.0</td><td>$\mathrm{{Hz}}$</td></tr><tr><td>${\mathbf{\theta }}_{1}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{2}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{3}^{\mathrm{{cmd}}}$</td><td>0.0</td><td>1.0</td><td>-</td></tr><tr><td>${\mathbf{h}}_{z}^{\text{cmd }}$</td><td>0.10</td><td>0.45</td><td>m</td></tr></table>
64
+
65
+ Table 2: Randomization ranges for dynamics parameters (top) and commands (bottom) during training. ${\mathbf{v}}_{x}^{\mathrm{{cmd}}},{\mathbf{\omega }}_{z}^{\mathrm{{cmd}}}$ are adapted according to a curriculum.
66
+
67
+ ![01963f74-145a-7a0d-aae6-abefa8297f97_3_903_220_571_421_0.jpg](images/01963f74-145a-7a0d-aae6-abefa8297f97_3_903_220_571_421_0.jpg)
68
+
69
+ Figure 3: Pronking and trotting gaits are easier to learn and tend to dominate pacing and bounding early in training. However, when discovered, pacing and bounding gaits can yield good performance and later become preferred for some downstream tasks (Section 4.4).
70
+
71
+ ### 3.2 Learning Quadrupedal Gaits
72
+
73
+ We implement our training environment in the Isaac Gym simulator [24] and train our locomotion policy using Proximal Policy Optimization [25].
74
+
75
+ Observation Space. The observation ${\mathbf{o}}_{t}$ consists of command ${\mathbf{c}}_{t}$ , previous action ${\mathbf{a}}_{t - 1}$ , sensor data ${\mathbf{s}}_{t}$ , and timing reference variables ${\mathbf{t}}_{t}$ . The sensor data ${\mathbf{s}}_{t}$ includes joint positions and velocities ${\mathbf{q}}_{t},{\dot{\mathbf{q}}}_{t}$ (measured by joint encoders) and the gravity vector in the body frame ${\mathbf{g}}_{t}$ (measured by accelerometer). Timing reference variables ${\mathbf{t}}_{t}$ are sine functions aligned to the contact offset of each foot:
76
+
77
+ $$
78
+ \left\lbrack {{\mathbf{t}}_{t}^{\mathrm{{FR}}},{\mathbf{t}}_{t}^{\mathrm{{FL}}},{\mathbf{t}}_{t}^{\mathrm{{RR}}},{\mathbf{t}}_{t}^{\mathrm{{RL}}}}\right\rbrack = \left\lbrack {\sin \left( {t + {\mathbf{\theta }}_{2}^{\mathrm{{cmd}}} + {\mathbf{\theta }}_{3}^{\mathrm{{cmd}}}}\right) ,\sin \left( {t + {\mathbf{\theta }}_{1}^{\mathrm{{cmd}}} + {\mathbf{\theta }}_{3}^{\mathrm{{cmd}}}}\right) ,\sin \left( {t + {\mathbf{\theta }}_{1}^{\mathrm{{cmd}}}}\right) ,\sin \left( {t + {\mathbf{\theta }}_{2}^{\mathrm{{cmd}}}}\right) }\right\rbrack
79
+ $$
80
+
81
+ where $t$ is a counter variable that advances from 0 to 1 during each gait cycle and ${}^{\mathrm{{FR}}},{}^{\mathrm{{FL}}},{}^{\mathrm{{RR}}},{}^{\mathrm{{RL}}}$ are the four feet. These timing variables are adapted from [22] to represent the quadrupedal gaits.
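A small sketch of these timing variables is given below; the $2\pi$ factor inside the sine is an assumption so that one unit of the counter $t$ spans a full gait cycle (the expression above writes $\sin(t + \theta)$ directly).

```python
import numpy as np

def timing_reference(t, theta1, theta2, theta3):
    """Timing reference variables [t_FR, t_FL, t_RR, t_RL] for the four feet.

    t is the gait-cycle counter in [0, 1). The 2*pi factor is an assumption so that
    one unit of t corresponds to one full cycle.
    """
    phases = np.array([
        t + theta2 + theta3,   # front-right
        t + theta1 + theta3,   # front-left
        t + theta1,            # rear-right
        t + theta2,            # rear-left
    ])
    return np.sin(2 * np.pi * phases)

def advance_counter(t, step_freq_hz, dt):
    """Advance the cycle counter by f_cmd * dt, wrapping at 1."""
    return (t + step_freq_hz * dt) % 1.0
```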
82
+
83
+ Action Space. The action ${\mathbf{a}}_{t}$ consists of position targets for each of the twelve joints. The position targets are tracked using a proportional-derivative controller with ${k}_{p} = {20},{k}_{d} = {0.5}$ .
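For reference, a minimal sketch of this position-target PD law with the stated gains; the actual low-level implementation on the robot may differ.

```python
import numpy as np

KP, KD = 20.0, 0.5   # gains stated in the text

def pd_torque(q_target, q, q_dot):
    """Joint torques tracking the policy's position targets with a PD law."""
    return KP * (np.asarray(q_target) - np.asarray(q)) - KD * np.asarray(q_dot)
```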
84
+
85
+ Domain Randomization. We randomize the robot's body mass, motor strength, and joint position calibration, the friction and restitution of the terrain, and the orientation and magnitude of gravity. This randomization facilitates sim-to-real transfer and makes the controller robust to different terrain geometries and material properties. All randomized parameters and ranges are given in Table 2.
86
+
87
+ Policy Architecture. We apply the Concurrent State Estimation architecture proposed by Ji et al. [16]. The "estimator module", a feedforward neural network, is trained using supervised learning to predict the mass, velocity, and height of the robot body, the friction and restitution of the ground, and the gravity vector. The input to this estimator module is a 15-step history of observations. The estimator output is concatenated with the state history as input to the policy body. The policy body is an MLP with hidden layer sizes $\left\lbrack {{512},{256},{128}}\right\rbrack$ ; the estimator module is an MLP with hidden layer sizes $\left\lbrack {{256},{128}}\right\rbrack$ ; both use ELU activations.
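A hedged PyTorch sketch of this architecture is shown below; the per-step observation dimension, the 10-dimensional estimate (mass, body velocity, height, friction, restitution, gravity vector), and other details are assumptions based on the description above, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    """Feedforward network with ELU activations between layers."""
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ELU()]
    return nn.Sequential(*layers[:-1])   # no activation after the final layer

class GaitConditionedPolicy(nn.Module):
    """Estimator module ([256, 128] hidden) plus policy body ([512, 256, 128] hidden)."""
    def __init__(self, obs_dim=42, history=15, est_dim=10, act_dim=12):
        super().__init__()
        in_dim = history * obs_dim
        self.estimator = mlp([in_dim, 256, 128, est_dim])
        self.body = mlp([in_dim + est_dim, 512, 256, 128, act_dim])

    def forward(self, obs_history):                # obs_history: (batch, history * obs_dim)
        est = self.estimator(obs_history)           # estimated privileged quantities
        action = self.body(torch.cat([obs_history, est], dim=-1))
        return action, est                          # est is also trained with supervised targets
```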
88
+
89
+ <table><tr><td/><td colspan="4"/><td rowspan="13"> <img src="https://cdn.noedgeai.com/01963f74-145a-7a0d-aae6-abefa8297f97_4.jpg?x=1070&y=232&w=401&h=384&r=0"/> </td></tr><tr><td>Gait</td><td>${0.0}\mathrm{\;m}/\mathrm{s}$</td><td>${1.0}\mathrm{\;m}/\mathrm{s}$</td><td>${2.0}\mathrm{\;m}/\mathrm{s}$</td><td>${3.0}\mathrm{\;m}/\mathrm{s}$</td></tr><tr><td>Trotting</td><td>$9 \pm 1$</td><td>${24} \pm 1$</td><td>${53} \pm 5$</td><td>${98} \pm 9$</td></tr><tr><td>Pronking</td><td>${32} \pm 1$</td><td>${43} \pm 2$</td><td>${68} \pm 5$</td><td>${112} \pm 5$</td></tr><tr><td>Pacing</td><td>${13} \pm 3$</td><td>${25} \pm 2$</td><td>${55}_{\pm 3}$</td><td>${99}_{\pm 6}$</td></tr><tr><td>Bounding</td><td>${22}_{\pm 2}$</td><td>${39}_{\pm 4}$</td><td>${78}_{\pm 5}$</td><td>${127}_{\pm {35}}$</td></tr><tr><td>Gait-free Baseline</td><td>${17} \pm 5$</td><td>${35} \pm 5$</td><td>${64}_{\pm {10}}$</td><td>${102}_{\pm {14}}$</td></tr><tr><td>Trotting $\left( {{\mathbf{f}}^{\mathrm{{cmd}}} = 2\mathrm{\;{Hz}}}\right)$</td><td>${11} \pm 2$</td><td>${25} \pm 1$</td><td>${55} \pm 4$</td><td>${104}_{\pm 8}$</td></tr><tr><td>Trotting $\left( {{\mathbf{f}}^{\mathrm{{cmd}}} = 3\mathrm{\;{Hz}}}\right)$</td><td>${9}_{\pm 1}$</td><td>${24} \pm 1$</td><td>${53}_{\pm 5}$</td><td>${98}_{\pm 9}$</td></tr><tr><td>Trotting $\left( {{\mathbf{f}}^{\mathrm{{cmd}}} = 4\mathrm{\;{Hz}}}\right)$</td><td>$9 \pm 1$</td><td>${26} \pm 0$</td><td>${60} \pm 4$</td><td>${114} \pm {12}$</td></tr><tr><td>Trotting $\left( {{\mathbf{h}}_{z}^{\mathrm{{cmd}}} = {20}\mathrm{\;{cm}}}\right)$</td><td>9±1</td><td>${26} \pm 1$</td><td>${56} \pm 3$</td><td>${102}_{\pm 8}$</td></tr><tr><td>Trotting $\left( {{\mathbf{h}}_{z}^{\mathrm{{cmd}}} = {30}\mathrm{\;{cm}}}\right)$</td><td>9±1</td><td>${24} \pm 1$</td><td>${53} \pm 5$</td><td>98±9</td></tr><tr><td>Trotting $\left( {{\mathbf{h}}_{z}^{\mathrm{{cmd}}} = {40}\mathrm{\;{cm}}}\right)$</td><td>${10} \pm 1$</td><td>${23} \pm 1$</td><td>${52} \pm 4$</td><td>${95}_{\pm 9}$</td></tr></table>
90
+
91
+ Table 4: Power consumption (J/s) across speeds for common quadrupedal gaits and for a policy without gait constraints. While efficiency varies across gaits and speeds, trotting and pacing gaits achieve efficiency competitive with the unconstrained gait across speeds.
92
+
93
+ Task Curriculum. To enable the robot to both run and spin fast, we sample velocity commands using the grid adaptive curriculum strategy from Margolis et al. [12]. A discrete grid tabulates successful combinations of linear and angular velocity thus far. Velocity commands for the policy are sampled from a small neighborhood of the successful region. We grow a separate grid adaptive curriculum for each of the four major gaits: pronking, trotting, bounding, and pacing.
94
+
95
+ The procedure for sampling a command ${\mathbf{c}}_{t}$ is as follows: First, one of the four major gaits (pronking, trotting, bounding, or pacing) is selected with equal probability. Then, a candidate velocity command $\left( {{\mathbf{v}}_{x}^{\text{cmd }},{\mathbf{\omega }}_{z}^{\text{cmd }}}\right)$ for the chosen gait is sampled following [12]. To let the policy learn to interpolate between major gaits, $\left( {{\mathbf{\theta }}_{1}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{2}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{3}^{\mathrm{{cmd}}}}\right)$ are sampled from a Gaussian distribution that is centered at the chosen major gait. Finally, the remaining command parameters $\left( {{\mathbf{v}}_{y}^{\mathrm{{cmd}}},{\mathbf{f}}^{\mathrm{{cmd}}},{\mathbf{h}}_{z}^{\mathrm{{cmd}}}}\right)$ are sampled independently and uniformly. Their ranges are given in Table 2.
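This sampling procedure could look roughly like the following sketch; `curriculum_sample` stands in for the per-gait grid curriculum of [12], and the Gaussian standard deviation on the offsets is an assumption.

```python
import numpy as np

rng = np.random.default_rng()
MAJOR_GAITS = {"pronk": (0.0, 0.0, 0.0), "trot": (0.5, 0.0, 0.0),
               "pace": (0.0, 0.5, 0.0), "bound": (0.0, 0.0, 0.5)}

def sample_command(curriculum_sample, theta_std=0.05):
    """curriculum_sample(gait) -> (vx, wz) is a stub for the per-gait grid curriculum."""
    gait = rng.choice(list(MAJOR_GAITS))
    vx, wz = curriculum_sample(gait)
    theta = rng.normal(MAJOR_GAITS[gait], theta_std) % 1.0   # interpolate around the major gait
    vy = rng.uniform(-0.6, 0.6)          # Table 2
    freq = rng.uniform(1.5, 4.0)         # Hz, Table 2
    height = rng.uniform(0.10, 0.45)     # m, Table 2
    return np.array([vx, vy, wz, *theta, freq, height])
```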
96
+
97
+ ### 3.3 Deployment
98
+
99
+ We deploy our controller in the real world on the Unitree Go1 Edu robot [26]. The Go1 is a commercially available and relatively low-cost quadruped robot.
100
+
101
+ Computing Architecture: An onboard Jetson TX2 NX computer runs our trained policy. We implement an interface based on Lightweight Communications and Marshalling (LCM) [27] to pass sensor data, motor commands, and joystick state between our code and the low-level control SDK provided by Unitree.
102
+
103
+ Sensors: In addition to joint position encoders and an inertial measurement unit, the robot is equipped with foot contact sensors. We log the detected contact states for our analysis, but they are not provided to our controller.
104
+
105
+ ## 4 Experimental Results
106
+
107
+ ### 4.1 Learning High-Speed Quadrupedal Gaits
108
+
109
+ Our method enables a single learned controller to run and spin using all common quadrupedal gaits. Figure 3 shows the early progression of the velocity curriculum for each gait. Notably, the delayed progress of pacing and bounding suggests that these gaits may be more challenging to learn than trotting and pronking. This hypothesis is supported by the rarity of pacing and bounding in the reinforcement learning literature. Likewise, the high learnability of trotting aligns with its tendency to emerge from a variety of non-gait-specific auxiliary rewards [2, 11, 16, 17]. However, because our task sampling strategy provides explicit incentive to achieve all gaits, we find that fast and robust control strategies for pacing, bounding, and pronking all emerge as well.
110
+
111
+ <table><tr><td>Gait</td><td>${r}_{v, x, y}$</td><td>${r}_{{\omega }_{z}^{\text{cmd }}}$</td><td>${r}_{{c}_{f}^{\text{cmd }}}$</td><td>${r}_{{c}_{v}}$</td><td>Survival</td></tr><tr><td>Trotting</td><td>${0.80}{}_{\pm {0.01}}^{\left( {0.95}\right) }$</td><td>${0.76}_{\pm {0.00}}^{\left( {0.89}\right) }$</td><td>${0.95}{}_{\pm {0.00}}^{\left( {0.97}\right) }$</td><td>${0.98}{}_{\pm {0.00}}^{\left( {0.98}\right) }$</td><td>${0.88}_{\pm {0.01}}^{\left( {1.00}\right) }$</td></tr><tr><td>Pronking</td><td>${0.84}_{\pm {0.01}}^{\left( {0.94}\right) }$</td><td>${0.77}_{\pm {0.01}}^{\left( {0.85}\right) }$</td><td>${0.96}_{\pm {0.00}}^{\left( {0.96}\right) }$</td><td>${0.97}_{\pm {0.00}}^{\left( {0.98}\right) }$</td><td>${0.82}{}_{\pm {0.02}}^{\left( {1.00}\right) }$</td></tr><tr><td>Pacing</td><td>${0.76}_{\pm {0.01}}^{\left( {0.91}\right) }$</td><td>${0.76}_{\pm {0.01}}^{\left( {0.81}\right) }$</td><td>${0.94}_{\pm {0.00}}^{\left( {0.96}\right) }$</td><td>${0.98}_{\pm {0.00}}^{\left( {0.98}\right) }$</td><td>${0.87}_{\pm {0.02}}^{\left( {1.00}\right) }$</td></tr><tr><td>Bounding</td><td>${0.80}_{\pm {0.01}}^{\left( {0.88}\right) }$</td><td>${0.73}_{\pm {0.01}}^{\left( {0.86}\right) }$</td><td>${0.94}_{ \pm {0.00}}^{({0.96})}$</td><td>${0.98}_{ \pm {0.00}}^{({0.98})}$</td><td>${0.82}_{\pm {0.01}}^{\left( {1.00}\right) }$</td></tr><tr><td>Gait-free</td><td>${0.81}{}_{\pm {0.03}}^{\left( {0.96}\right) }$</td><td>${0.74} \pm {0.06}$</td><td>-</td><td>-</td><td>${0.83}{}_{\pm {0.01}}^{\left( {1.00}\right) }$</td></tr></table>
112
+
113
+ ![01963f74-145a-7a0d-aae6-abefa8297f97_5_1202_217_275_259_0.jpg](images/01963f74-145a-7a0d-aae6-abefa8297f97_5_1202_217_275_259_0.jpg)
114
+
115
+ Table 5: Zero-shot robustness evaluation on discrete platform terrain (visualized right). The pacing and trotting gaits yield the best survival time during zero-shot deployment on this particular terrain, outperforming the gait-free baseline. Pronking attains the best velocity tracking performance of any gait, with a similar survival time to the gait-free baseline. Reward is reported as a fraction of the total possible episodic reward. Superscript reports performance in the flat training environment with no platforms. Subscript reports standard deviation across three random seeds.
116
+
117
+ ### 4.2 Sim-to-Real Transfer and Gait Switching
118
+
119
+ We deploy our controller in a zero-shot sim-to-real manner using the robot and software architecture described in Section 3.3. We find that all gait parameters are correctly tracked after sim-to-real transfer. Figure 2 shows torques and contact states during transitions between trotting, pronking, bounding, and pacing in place while alternating ${\mathbf{f}}^{\text{cmd }}$ between $2\mathrm{{Hz}}$ and $4\mathrm{{Hz}}$ . In the bottom subfigure, good alignment between the colored rectangles, representing true contacts, and the dark shaded regions, representing desired contacts, reflects successful tracking across gaits and frequencies.
120
+
121
+ ### 4.3 Impact of Gait Specification on Locomotion Performance
122
+
123
+ We probe the gait parameters of our trained model to quantify their role in the energy-efficiency, robustness, and speed of learned quadrupedal locomotion. We compare our gait-conditioned controller to a baseline velocity-tracking controller (the "gait-free baseline"). The gait-free baseline is trained by the same method as our gait-conditioned controller, but without task reward terms for contact timing or body posture.
124
+
125
+ (Intent of Analysis). The goal of our evaluation is not to endorse a specific gait for a specific scenario, for example to recommend pacing over pronking on one class of uneven terrain. The characteristics of specific gaits vary with different robot morphologies, training details, and environmental properties. We aim to illustrate that different styles of locomotion can have different performance characteristics on new tasks, and that by training a model that can achieve multiple styles, we grant the user a useful degree of freedom to exploit for any new task without task-specific retraining.
126
+
127
+ Energy Efficiency. We measure the power consumption (J/s) of symmetric quadrupedal gaits and the gait-free baseline. Table 4 reports the power consumption profile for each gait. The robot expends less energy while trotting or pacing than while pronking or bounding. One might hypothesize that the most energy-efficient gaits are also the easiest to learn, but our results go against this hypothesis (pronking emerges earlier than pacing; see Section 4.1). Additionally, fixed gaits achieve equal or lower energy consumption compared to the gait-free baseline at low and medium speeds. This suggests that policies trained using standard reward structure on flat ground are not achieving a gain in energy efficiency via uncommon adaptation of contact pattern, gait frequency, or body posture.
128
+
129
+ Robustness. We evaluate quadrupedal gaits on a non-flat test terrain consisting of randomly arranged platforms up to $8\mathrm{\;{cm}}$ in height (the "platform terrain", pictured beside Table 5). All policies in this work are trained only on flat ground, so the platform terrain is out-of-distribution. In Table 5, we report the mean task reward on the platform terrain for each of the major gaits and the gait-free baseline. We also report the survival time as the mean fraction of the maximum episode length (10 s) before failure. Episodes were terminated after 10 seconds if the agent had not fallen. In this experiment, velocity commands were limited to ${\mathbf{v}}_{x}^{\mathrm{{cmd}}} \in \left\lbrack {-2,2}\right\rbrack ,{\mathbf{\omega }}_{z}^{\mathrm{{cmd}}} \in \left\lbrack {-2,2}\right\rbrack$ .
130
+
131
+ ![01963f74-145a-7a0d-aae6-abefa8297f97_6_318_213_1161_395_0.jpg](images/01963f74-145a-7a0d-aae6-abefa8297f97_6_318_213_1161_395_0.jpg)
132
+
133
+ Figure 4: Gait parameter modulation supports the downstream task of crossing a gap wider than the robot's body length with a single leap. We show joint torques (top) and contact states (bottom) during an agile leap on pavement. The robot first accelerates to a target speed of $3\mathrm{\;m}/\mathrm{s}$ at a trot while increasing its step frequency from $2\mathrm{\;{Hz}}$ to $4\mathrm{\;{Hz}}$ , then switches to pronking at $2\mathrm{\;{Hz}}$ for one second, then decelerates to a standstill while trotting.
134
+
135
+ Different gaits yield different robustness characteristics on the platform terrain. Pacing has the highest survival time, higher than the gait-free policy, which indicates that it was able to remain stable on some terrains where the gait that emerges on flat ground falls. However, pacing tracked the target velocity less accurately than other gaits. Pronking has the closest characteristics to the gait-free policy in terms of velocity tracking, with a similar survival time.
136
+
137
+ In the real world, we demonstrate robustness with different gaits while traversing uneven, slippery, and granular terrain as well as stairs. The robot can traverse down stairs with a variety of gaits, but not up. Video showing a number of real world deployment scenarios is at the project website.
138
+
139
+ Velocity Tracking and Top Speed. We report the velocity tracking performance of multiple gaits and the gait-free baseline across a wide range of speeds including high-speed sprinting in Table 6. Removing gait constraints results in an improvement in velocity tracking task performance on flat ground. Heat maps break down the mean task reward for each velocity command, revealing that the gait-free approach does not offer much benefit at low speeds, but is better for achieving combinations of high linear and angular velocity.
140
+
141
+ ### 4.4 The Downstream Utility of Gait Modulation
142
+
143
+ Navigation of confined spaces. Different gaits and postures are useful for navigating confined spaces with limited width or height. We demonstrate crawling beneath an increasingly lowered bar. The robot was able to crawl under a ${22}\mathrm{\;{cm}}$ bar, and the robot body height is ${13}\mathrm{\;{cm}}$ , leaving a maximum of $9\mathrm{\;{cm}}$ of clearance between the robot and the ground, not accounting for vertical motion of the body during locomotion. We also find that the robot can pass between two narrowly spaced cinder blocks with a pacing gait, but gets stuck with the wider stance of the pronking gait.
144
+
145
+ Agile jumping. Modulation of contact schedule, velocity, and gait frequency at high speed can encode an agile jump. Figure 4 shows the contact states, joint torques, and estimated body velocity during a jump sequence. First, the robot is commanded to accelerate to $3\mathrm{\;m}/\mathrm{s}$ at a trot while increasing its stepping frequency from $2\mathrm{\;{Hz}}$ to $4\mathrm{\;{Hz}}$ . Second, a jump is initiated by commanding one second of pronking at $2\mathrm{{Hz}}$ . Finally, the robot decelerates by reversing the acceleration sequence. During the jump phase, the distance from the location of the robot's front feet at takeoff to the location of the hind feet upon landing is measured to be ${60}\mathrm{\;{cm}}$ .
146
+
147
+ Choreographed Dance. We program a sequence of gait parameters to generate a dance routine synchronized to a jazz song with a tempo of 90 beats per minute. At this tempo, combinations of phases 0, 0.25, and 0.5 with frequencies of ${1.5}\mathrm{\;{Hz}}$ and $3\mathrm{\;{Hz}}$ yield eighth, quarter, half, and full beat gaps between consecutive footsteps. We also modulate the body height and velocity in time with the music. An assistant script helps a human programmer procedurally generate a list of gait parameters at each timestep, which are fed into the neural network controller in open-loop fashion during deployment. The programmer is a robotics researcher who has no formal experience with dance, music theory, or choreography.
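The sketch below shows one way such a beat-aligned schedule could be generated at 90 bpm; the control timestep and the example routine are illustrative, not the choreography used in this work.

```python
import numpy as np

BPM = 90
BEAT = 60.0 / BPM        # ~0.667 s per beat
DT = 0.02                # control timestep in seconds (assumed)

def dance_schedule(moves):
    """moves: list of (duration_beats, (theta1, theta2, theta3), freq_hz, body_height).

    Returns one command vector per control timestep, fed open-loop to the policy.
    """
    commands = []
    for beats, (t1, t2, t3), freq, height in moves:
        steps = int(round(beats * BEAT / DT))
        commands += [np.array([0.0, 0.0, 0.0, t1, t2, t3, freq, height])] * steps
    return commands

# Example: four beats of 3 Hz trot, then four beats of 1.5 Hz bound at a lower body height.
routine = dance_schedule([(4, (0.5, 0.0, 0.0), 3.0, 0.30),
                          (4, (0.0, 0.0, 0.5), 1.5, 0.20)])
```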
148
+
149
+ ![01963f74-145a-7a0d-aae6-abefa8297f97_7_319_220_1152_352_0.jpg](images/01963f74-145a-7a0d-aae6-abefa8297f97_7_319_220_1152_352_0.jpg)
150
+
151
+ Table 6: Task reward reported as a fraction of the maximum possible reward for different gaits with ablations. Removing gait constraints results in an improvement in velocity tracking task performance on flat ground. Heat maps (right) break down the mean task reward for each velocity command, revealing that the gait-free approach is most beneficial for combinations of high linear and angular velocity.
152
+
153
+ We believe ours to be the most flexible and dynamic published system for legged robotic dance to date. Notable prior works in legged robotic dance include Bi et al. [28], Shao et al. [20], and the unpublished demonstrations by Boston Dynamics [29]. Bi et al. [28] focused on automatic generation of dance choreography, but their model-based controller was restricted to carefully lifting one foot at a time. Shao et al. [20] defined a comparable gait modulation language to ours, but their use of a complete reference motion library, rather than a contact timing reward, produced behaviors that are less fast and dynamic. Boston Dynamics' dance demonstrations on Spot and Atlas [29] are based on an unpublished analytically designed controller.
154
+
155
+ ## 5 Limitations
156
+
157
+ We implement a task parameterization and training procedure that allow a teleoperator or high-level planner to specify the locomotion behavior of a quadruped in diverse ways. Specifying the locomotion task reduces the system's degree of autonomy. Our experiments show that this hinders the robot's performance at very high speeds (Section 4.3; Table 6), suggesting that successful sprinting departs meaningfully from our parameterized family of gaits. To reparameterize the locomotion task without discouraging useful sprinting strategies is an open direction for future work.
158
+
159
+ Our work describes a blind locomotion controller that does not make use of exteroceptive sensor data from the robot's onboard cameras. This limits the system from anticipatively adapting to the terrain, which would be useful for tasks like upward stair traversal where reactive robustness is insufficient.
160
+
161
+ ## 6 Conclusion
162
+
163
+ We have implemented and analyzed a novel combination of flexible gait specification and robust training procedure for learning quadrupedal locomotion. The resulting controller was modulated by a teleoperator to accomplish a number of previously inaccessible downstream tasks without task-specific retraining. This controller also exhibits a high degree of robustness with multiple gaits on uneven, slippery, and granular terrain, as well as stairs. Quantitative experiments suggest that online modulation of gait parameters can benefit RL policies in out-of-distribution environments, with the caveat that imposing such a parameterization can limit top-end sprinting performance.
164
+
165
+ References
166
+
167
+ [1] J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning quadrupedal locomotion over challenging terrain. Sci. Robot., 5(47):eabc5986, Oct. 2020. doi:10.1126/scirobotics. abc5986.
168
+
169
+ [2] A. Kumar, Z. Fu, D. Pathak, and J. Malik. RMA: Rapid motor adaptation for legged robots. In Proc. Robot.: Sci. and Syst. (RSS), Virtual, July 2021. doi:10.48550/arXiv.2107.04034.
170
+
171
+ [3] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning robust perceptive locomotion for quadrupedal robots in the wild. Sci. Robot., 7(62):abk2822, Jan. 2022. doi:10.1126/scirobotics.abk2822.
172
+
173
+ [4] Z. Fu, A. Kumar, J. Malik, and D. Pathak. Minimizing energy consumption leads to the emergence of gaits in legged robots. In Proc. Conf. Robot Learn. (CoRL), pages 928-937, London, UK, Nov. 2021. doi:10.48550/arXiv.2111.01674.
174
+
175
+ [5] X. B. Peng, E. Coumans, T. Zhang, T.-W. Lee, J. Tan, and S. Levine. Learning agile robotic locomotion skills by imitating animals. arXiv preprint arXiv:2004.00784, 2020.
176
+
177
+ [6] A. Escontrela, X. B. Peng, W. Yu, T. Zhang, A. Iscen, K. Goldberg, and P. Abbeel. Adversarial motion priors make good substitutes for complex reward functions. arXiv preprint arXiv:2203.15103, 2022.
178
+
179
+ [7] E. Vollenweider, M. Bjelonic, V. Klemm, N. Rudin, J. Lee, and M. Hutter. Advanced skills through multiple adversarial motion priors in reinforcement learning. arXiv preprint arXiv:2203.14912, 2022.
180
+
181
+ [8] M. H. Raibert. Legged Robots That Balance. MIT press, 1986.
182
+
183
+ [9] S. Kuindersma, R. Deits, M. Fallon, A. Valenzuela, H. Dai, F. Permenter, T. Koolen, P. Marion, and R. Tedrake. Optimization-based locomotion planning, estimation, and control design for the atlas humanoid robot. Auton. Robot., 40(3):429-455, July 2015. doi: 10.1007/s10514-015-9479-3.
184
+
185
+ [10] D. Kim, J. Di Carlo, B. Katz, G. Bledt, and S. Kim. Highly dynamic quadruped locomotion via whole-body impulse control and model predictive control. arXiv preprint, 2019. doi: 10.48550/arXiv.1909.06586.
186
+
187
+ [11] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massively parallel deep reinforcement learning. In Proc. Conf. Robot Learn. (CoRL), pages 91-100, London, UK, Nov. 2021. doi:10.48550/arXiv.2109.11978.
188
+
189
+ [12] G. B. Margolis, G. Yang, K. Paigwar, T. Chen, and P. Agrawal. Rapid locomotion via reinforcement learning. Proc. Robot.: Sci. and Syst. (RSS), June 2022.
190
+
191
+ [13] J. Siekmann, K. Green, J. Warila, A. Fern, and J. Hurst. Blind bipedal stair traversal via sim-to-real reinforcement learning. In Proc. Robot.: Sci. and Syst. (RSS), Virtual, July 2021. doi:10.48550/arXiv.2105.08328.
192
+
193
+ [14] J. Tan, T. Zhang, E. Coumans, A. Iscen, Y. Bai, D. Hafner, S. Bohez, and V. Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. In Proc. Robot.: Sci. and Syst. (RSS), pages 1-9, Pittsburgh, Pennsylvania, USA, June 2018. doi:10.15607/RSS.2018.XIV.010.
194
+
195
+ [15] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun, and M. Hutter. Learning agile and dynamic motor skills for legged robots. Sci. Robot., 4(26):aau5872, Jan. 2019. doi:10.1126/scirobotics.aau5872.
196
+
197
+ [16] G. Ji, J. Mun, H. Kim, and J. Hwangbo. Concurrent training of a control policy and a state estimator for dynamic and robust legged locomotion. IEEE Robot. Automat. Lett. (RA-L), 7 (2):4630 - 4637, Apr. 2022. doi:10.1109/LRA.2022.3151396.
198
+
199
+ [17] G. B. Margolis, T. Chen, K. Paigwar, X. Fu, D. Kim, S. Kim, and P. Agrawal. Learning to jump from pixels. In Proc. Conf. Robot Learn. (CoRL), pages 1025-1034, London, UK, Nov. 2021. doi:10.48550/arXiv.2110.15344.
200
+
201
+ [18] Z. Xie, P. Clary, J. Dao, P. Morais, J. Hurst, and M. van de Panne. Learning locomotion skills for Cassie: Iterative design and sim-to-real. In Proc. Conf. Robot Learn. (CoRL), pages 1-13, Osaka, Japan, Nov. 2020. doi:10.48550/arXiv.1903.09537.
202
+
203
+ [19] Z. Li, X. Cheng, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath. Reinforcement learning for robust parameterized locomotion control of bipedal robots. In Proc. IEEE Int. Conf. Robot. Automat. (ICRA), pages 2811-2817, Xi'an, China, June 2021.
204
+
205
+ [20] Y. Shao, Y. Jin, X. Liu, W. He, H. Wang, and W. Yang. Learning free gait transition for quadruped robots via phase-guided controller. IEEE Robot. Automat. Lett. (RA-L), 7(2):1230-1237, 2021.
206
+
207
+ [21] R. Li, A. Jabri, T. Darrell, and P. Agrawal. Towards practical multi-object manipulation using relational reinforcement learning. In Proc. IEEE Int. Conf. Robot. Automat. (ICRA), pages 4051-4058, Virtual, May 2020. doi:10.1109/ICRA40945.2020.9197468.
208
+
209
+ [22] J. Siekmann, Y. Godse, A. Fern, and J. Hurst. Sim-to-real learning of all common bipedal gaits via periodic reward composition. In Proc. IEEE Int. Conf. Robot. Automat. (ICRA), pages 7309-7315, Xi'an, China, June 2021. doi:10.1109/ICRA48506.2021.9561814.
210
+
211
+ [23] H. Duan, A. Malik, J. Dao, A. Saxena, K. Green, J. Siekmann, A. Fern, and J. Hurst. Sim-to-real learning of footstep-constrained bipedal dynamic walking. arXiv preprint arXiv:2203.07589, 2022.
212
+
213
+ [24] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, et al. Isaac Gym: High performance GPU-based physics simulation for robot learning. arXiv preprint, 2021. doi:10.48550/arXiv.2108.10470.
214
+
215
+ [25] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint, 2017. doi:10.48550/arXiv.1707.06347.
216
+
217
+ [26] Unitree Robotics, Go1, 2022, https://www.unitree.com/products/go1, [Online; accessed Jun. 2022].
218
+
219
+ [27] A. S. Huang, E. Olson, and D. C. Moore. LCM: Lightweight communications and marshalling. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 4057-4062. IEEE, 2010.
220
+
221
+ [28] T. Bi, P. Fankhauser, D. Bellicoso, and M. Hutter. Real-time dance generation to music for a legged robot. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1038-1044. IEEE, 2018.
222
+
223
+ [29] Boston Dynamics, Spot, 2022, https://www.bostondynamics.com/spot, [Online; accessed Jun. 2022].
papers/CoRL/CoRL 2022/CoRL 2022 Conference/52c5e73SlS2/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,263 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § WALK THESE WAYS: GAIT-CONDITIONED POLICIES YIELD DIVERSIFIED QUADRUPEDAL AGILITY
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We investigate the utility of structuring learned quadrupedal gaits with dynamically specified gait parameters. We present a fast and robust locomotion controller that can trot, pronk, pace, and bound at variable frequency, posture, and speed. Modulating these parameters at runtime enables a human operator or high-level planner to execute varied sequences of agile behavior useful for downstream tasks. A single policy realizes crouching, hopping, high-speed running, agile leaps, and rhythmic dance. Our analyses suggest new perspectives on the role of gait in learned locomotion. Video is available at https://sites.google.com/view/gait-conditioned-rl/.
12
+
13
+ Keywords: Locomotion, Reinforcement Learning, Task Specification
14
+
15
+ [graphics]
16
+
17
+ Figure 1: Diverse gaits enable a human to pilot a quadruped expressively and respond to conditions that were not anticipated at training time. A single policy realizes crouching, hopping, high-speed running, agile leaps, and rhythmic dance. The controller is demonstrated on uneven, slippery, and granular terrain and can traverse down stairs, but not up.
18
+
19
+ § 1 INTRODUCTION
20
+
21
+ Much work on learning quadrupedal locomotion targets a static locomotion style objective, such as robustness to disturbances $\left\lbrack {1,2,3}\right\rbrack$ , energy efficiency $\left\lbrack 4\right\rbrack$ , or similarity to a particular reference motion [5, 6, 7]. While successful, these approaches offer the user meager control over the robot's locomotion style during deployment. In practical deployments, the end user may prefer to have a choice among a large number of locomotion styles in the hope that some will suit their needs at any given moment. The user's needs might range from interaction with unanticipated environments to the generation of stylish motions for entertainment.
22
+
23
+ One way to place more control in the hands of the user is to provide a set of flexible and intuitive motion primitives that they can compose to produce the locomotion behavior of their preferred style or which suits their downstream task. To this end, we implement and evaluate a task specification language for learned quadrupedal locomotion which enables the online composition of motion sequences combining diverse gaits, postures, and speeds. Such language is common in the setting of analytical control, where it is often necessary to define constraints, objectives, and models $\left\lbrack {8,9,{10}}\right\rbrack$ . It is often avoided in reinforcement learning, where it is possible for motion styles to emerge from a reward function without direct specification $\left\lbrack {1,4,{11},{12}}\right\rbrack$ . We find that a single gait-conditioned policy can execute diverse and controllable motion styles while retaining a high degree of robustness against variations in robot properties and natural terrains. Our definition of gait parameters, which extends previous work by Siekmann et al. [13] from the bipedal setting, enables the composition of behaviors useful for multiple downstream tasks, including navigation through confined spaces, agile jumps longer than the robot's body length, and the execution of dynamic choreographed dance. No previously proposed reinforcement learning agent has been suitable to accomplish any of these tasks, much less to accomplish them all with a single policy.
24
+
25
+ [graphics]
26
+
27
+ Figure 2: Transition between trotting, pronking, pacing, and bounding in place at alternating frequencies of $2\mathrm{{Hz}}$ and $4\mathrm{{Hz}}$ . Images show the robot achieving the contact phases of each gait, with stance feet highlighted in red. Black shading in the bottom plot reflects the timing reference variables ${\mathbf{t}}_{t}$ for each foot; colored bars report the contact states measured by foot sensors.
28
+
29
+ § 2 RELATED WORK
30
+
31
+ Most previous work on learning locomotion addresses the task of achieving a target body velocity [1, $2,3,{11},{13},{14},{15},{12},{16}\rbrack$ . In these works, the operator provides the target velocity as input to a control policy, and the policy coordinates leg motions to track that velocity as closely as possible. There are many ways a robot can move in order to achieve the same target velocity; for example, it may select different foot contact patterns or carry its body in different poses while progressing forward at the same speed. For this reason, we claim that the velocity tracking task is generally underspecified. In response, several forms of implicit or explicit specification have been used in the prior literature:
32
+
33
+ Auxiliary Rewards. Some prior works overcome underspecification by defining auxiliary rewards. For example, penalties on energy expenditure [4], body motion [17], or deviation from a nominal pose [16] may distinguish one locomotion gait as preferred over another. In theory, an objective such as energy minimization may cause substantially different gaits to be preferred at different speeds. In practice, all works that learn joint-space policies with auxiliary rewards have ended up learning relatively static contact patterns and body motion [3, 15, 12, 16]. Fu et al. [4] notably reported that different gaits emerge at different speeds when applying an auxiliary energy minimization reward, but they achieved this by training a separate policy at each speed and combining them through a manual distillation step.
34
+
35
+ Imitation Learning. Another line of work uses reference trajectories to specify robot motions $\left\lbrack {5,6,7,{18},{19},{20}}\right\rbrack$ . Some approaches use an imitation objective to reward states similar to the reference motion [5, 20, 21], while others use adversarial techniques to reward stylistically similar movements [6, 7]. Shao et al. [20] notably generated a library of reference trajectories for a quadrupedal robot from a set of foot phase parameters and trained a goal-conditioned policy to imitate them. However, using a library of detailed reference trajectories penalizes the agent for exploring behaviors that are difficult to procedurally generate. We would prefer to specify the gait in a less explicit way, to allow the policy to discover complex strategies such as dynamic body motion, slip recovery, and resistance to perturbation.
36
+
37
+ [graphics]
38
+
39
+ Table 1: Reward terms for task, stability, and smoothness. Terms combined from [11, 16, 22].
40
+
41
+ Reward-based Task Specification. The prior works most closely related to ours explicitly include gait parameters in the task specification through additional reward terms, including tracking rewards for contact patterns [22] and target foot placements [23]. Our task reward is modeled on Siekmann et al. [22], which used gait-dependent reward terms to penalize foot motion during stance and contact force during swing phases. We extend [22] to a more diverse family of quadrupedal gaits and provide an expanded treatment in the setting of robust and high-speed locomotion.
42
+
43
+ § 3 METHOD
44
+
45
+ § 3.1 GAIT SPECIFICATION
46
+
47
+ We parameterize the common quadrupedal gaits by an 8-dimensional command vector,
48
+
49
+ $$
50
+ {\mathbf{c}}_{t} = \left\lbrack {{\mathbf{v}}_{x}^{\mathrm{{cmd}}},{\mathbf{v}}_{y}^{\mathrm{{cmd}}},{\mathbf{\omega }}_{z}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{1}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{2}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{3}^{\mathrm{{cmd}}},{\mathbf{f}}^{\mathrm{{cmd}}},{\mathbf{h}}_{z}^{\mathrm{{cmd}}}}\right\rbrack ,
51
+ $$
52
+
53
+ * ${\mathbf{v}}_{x}^{\text{ cmd }},{\mathbf{v}}_{y}^{\text{ cmd }},{\mathbf{\omega }}_{z}^{\text{ cmd }}$ are the linear velocities in the body-frame x- and y- axes, and the angular velocity in the yaw axis.
54
+
55
+ * ${\mathbf{\theta }}^{\text{ cmd }} = \left( {{\mathbf{\theta }}_{1}^{\text{ cmd }},{\mathbf{\theta }}_{2}^{\text{ cmd }},{\mathbf{\theta }}_{3}^{\text{ cmd }}}\right)$ are the timing offsets between pairs of feet. ${\mathbf{\theta }}^{\text{ cmd }} = \left( {0,0,0}\right)$ corresponds to a pronking pattern with the contact timings of all four feet synchronized. ${\mathbf{\theta }}^{\text{ cmd }} = \left( {{0.5},0,0}\right)$ corresponds to a trotting pattern with diagonal pairs alternating contact. ${\mathbf{\theta }}^{\text{ cmd }} = \left( {0,{0.5},0}\right)$ represents a pacing pattern with left and right pairs alternating contact. ${\mathbf{\theta }}^{\text{ cmd }} = \left( {0,0,{0.5}}\right)$ represents a bounding pattern with front and rear pairs alternating contact. Our policy also learns to continuously interpolate between the major patterns, enabling variants such as galloping $\left( {{\mathbf{\theta }}^{\text{ cmd }} = \left( {{0.25},0,0}\right) }\right)$ . Taken together, the parameters ${\mathbf{\theta }}^{\text{ cmd }}$ can express all two-beat quadrupedal contact patterns. For a visual illustration, refer to Figure 2 or the accompanying video.
56
+
57
+ * ${\mathbf{f}}^{\text{ cmd }}$ is the stepping frequency expressed in Hz. Commanding ${\mathbf{f}}^{\text{ cmd }} = 3\mathrm{{Hz}}$ will result in each foot making contact three times per second.
58
+
59
+ * ${\mathbf{h}}_{z}^{\mathrm{{cmd}}}$ is the body height command.
60
+
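+
+ For concreteness, a minimal sketch of how these command parameters could be packed into ${\mathbf{c}}_{t}$, using the timing offsets listed above (the helper names and default values are our own illustration, not part of the paper):
+
+ ```python
+ # Hypothetical illustration of the 8-D command vector described above. The theta
+ # offsets are the canonical values given in the text; all names and defaults are
+ # assumptions made for this sketch.
+ GAIT_PRESETS = {
+     "pronk":  (0.0, 0.0, 0.0),
+     "trot":   (0.5, 0.0, 0.0),
+     "pace":   (0.0, 0.5, 0.0),
+     "bound":  (0.0, 0.0, 0.5),
+     "gallop": (0.25, 0.0, 0.0),  # interpolated variant mentioned in the text
+ }
+
+ def make_command(vx, vy, wz, gait="trot", freq_hz=3.0, height_m=0.30):
+     """Assemble c_t = [vx, vy, wz, theta1, theta2, theta3, f, h_z]."""
+     theta1, theta2, theta3 = GAIT_PRESETS[gait]
+     return [vx, vy, wz, theta1, theta2, theta3, freq_hz, height_m]
+
+ # e.g. trot forward at 1 m/s, stepping at 3 Hz, with a 0.30 m body height
+ cmd = make_command(1.0, 0.0, 0.0, gait="trot")
+ ```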
61
+ Reward function. All reward terms are given in Table 1. Task rewards are defined as functions of the command vector ${\mathbf{c}}_{t}$ for body velocity tracking, body pose tracking, and contact schedule tracking. To define the desired contact schedule, function ${C}_{\text{ foot }}^{\mathrm{{cmd}}}\left( {{\mathbf{\theta }}^{\mathrm{{cmd}}},t}\right)$ computes the desired contact state of each foot from the phase and timing variable, as described in [22], with details given in the appendix. Stability and smoothness rewards express auxiliary locomotion objectives that are desired across all gaits. As in [16], we force the total reward to be positive by computing it as ${r}_{\text{ pos }}\exp \left( {{0.02}{r}_{\text{ neg }}}\right)$ where ${r}_{\text{ pos }}$ is the sum of all positive reward terms and ${r}_{\text{ neg }}$ is the sum of all negative reward terms.
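+
+ A one-line sketch of the positive-total-reward computation described above (variable names are ours):
+
+ ```python
+ import math
+
+ def total_reward(r_pos, r_neg):
+     # r_pos: sum of all positive reward terms; r_neg: sum of all negative terms (<= 0)
+     return r_pos * math.exp(0.02 * r_neg)
+ ```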
62
+
63
+ | Term | Minimum | Maximum | Units |
+ | --- | --- | --- | --- |
+ | Payload Mass | -1.0 | 3.0 | kg |
+ | Motor Strength | 90 | 110 | % |
+ | Joint Calibration | -0.02 | 0.02 | rad |
+ | Ground Friction | 0.40 | 1.00 | - |
+ | Ground Restitution | 0.00 | 1.00 | - |
+ | Gravity Offset | -1.0 | 1.0 | $\mathrm{m}/\mathrm{s}^{2}$ |
+ | ${\mathbf{v}}_{x}^{\mathrm{cmd}}$ | - | - | m/s |
+ | ${\mathbf{v}}_{y}^{\mathrm{cmd}}$ | -0.6 | 0.6 | m/s |
+ | ${\mathbf{\omega }}_{z}^{\mathrm{cmd}}$ | - | - | rad/s |
+ | ${\mathbf{f}}^{\mathrm{cmd}}$ | 1.5 | 4.0 | Hz |
+ | ${\mathbf{\theta }}_{1}^{\mathrm{cmd}},{\mathbf{\theta }}_{2}^{\mathrm{cmd}},{\mathbf{\theta }}_{3}^{\mathrm{cmd}}$ | 0.0 | 1.0 | - |
+ | ${\mathbf{h}}_{z}^{\mathrm{cmd}}$ | 0.10 | 0.45 | m |
104
+
105
+ Table 2: Randomization ranges for dynamics parameters (top) and commands (bottom) during training. ${\mathbf{v}}_{x}^{\mathrm{{cmd}}}$ and ${\mathbf{\omega }}_{z}^{\mathrm{{cmd}}}$ are adapted according to a curriculum.
106
+
107
+ [graphics]
108
+
109
+ Figure 3: Pronking and trotting gaits are easier to learn and tend to dominate pacing and bounding early in training. However, when discovered, pacing and bounding gaits can yield good performance and later become preferred for some downstream tasks (Section 4.4).
110
+
111
+ § 3.2 LEARNING QUADRUPEDAL GAITS
112
+
113
+ We implement our training environment in the Isaac Gym simulator [24] and train our locomotion policy using Proximal Policy Optimization [25].
114
+
115
+ Observation Space. The observation ${\mathbf{o}}_{t}$ consists of command ${\mathbf{c}}_{t}$ , previous action ${\mathbf{a}}_{t - 1}$ , sensor data ${\mathbf{s}}_{t}$ , and timing reference variables ${\mathbf{t}}_{t}$ . The sensor data ${\mathbf{s}}_{t}$ includes joint positions and velocities ${\mathbf{q}}_{t},{\dot{\mathbf{q}}}_{t}$ (measured by joint encoders) and the gravity vector in the body frame ${\mathbf{g}}_{t}$ (measured by accelerometer). Timing reference variables ${\mathbf{t}}_{t}$ are sine functions aligned to the contact offset of each foot:
116
+
117
+ $$
118
+ \left\lbrack {{\mathbf{t}}_{t}^{\mathrm{{FR}}},{\mathbf{t}}_{t}^{\mathrm{{FL}}},{\mathbf{t}}_{t}^{\mathrm{{RR}}},{\mathbf{t}}_{t}^{\mathrm{{RL}}}}\right\rbrack = \left\lbrack {\sin \left( {t + {\mathbf{\theta }}_{2}^{\mathrm{{cmd}}} + {\mathbf{\theta }}_{3}^{\mathrm{{cmd}}}}\right) ,\sin \left( {t + {\mathbf{\theta }}_{1}^{\mathrm{{cmd}}} + {\mathbf{\theta }}_{3}^{\mathrm{{cmd}}}}\right) ,\sin \left( {t + {\mathbf{\theta }}_{1}^{\mathrm{{cmd}}}}\right) ,\sin \left( {t + {\mathbf{\theta }}_{2}^{\mathrm{{cmd}}}}\right) }\right\rbrack
119
+ $$
120
+
121
+ where $t$ is a counter variable that advances from 0 to 1 during each gait cycle and ${}^{\mathrm{{FR}}},{}^{\mathrm{{FL}}},{}^{\mathrm{{RR}}},{}^{\mathrm{{RL}}}$ are the four feet. These timing variables are adapted from [22] to represent the quadrupedal gaits.
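+
+ A small sketch of these timing reference variables (we follow the sine expression in the text literally; a $2\pi$ phase scaling and the exact counter update are our assumptions about convention):
+
+ ```python
+ import math
+
+ def timing_reference(t, theta1, theta2, theta3):
+     """Timing reference variables t_t for the feet [FR, FL, RR, RL].
+
+     t is the gait-cycle counter in [0, 1); the text writes sin(t + theta), which we
+     reproduce literally (a 2*pi scaling of the phase may be implied by convention).
+     """
+     return [
+         math.sin(t + theta2 + theta3),  # front-right
+         math.sin(t + theta1 + theta3),  # front-left
+         math.sin(t + theta1),           # rear-right
+         math.sin(t + theta2),           # rear-left
+     ]
+
+ # our assumption: the counter advances with the commanded stepping frequency,
+ # t = (t + f_cmd * dt) % 1.0, so one gait cycle lasts 1 / f_cmd seconds
+ ```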
122
+
123
+ Action Space. The action ${\mathbf{a}}_{t}$ consists of position targets for each of the twelve joints. The position targets are tracked using a proportional-derivative controller with ${k}_{p} = {20},{k}_{d} = {0.5}$ .
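+
+ A minimal sketch of the proportional-derivative tracking of those position targets, using the gains stated above (the torque-level interface is an assumption for illustration):
+
+ ```python
+ import numpy as np
+
+ KP, KD = 20.0, 0.5  # gains stated in the text
+
+ def pd_torques(q_target, q, q_dot):
+     """Joint torques from the policy's 12-D position targets (illustrative interface)."""
+     return KP * (np.asarray(q_target) - np.asarray(q)) - KD * np.asarray(q_dot)
+ ```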
124
+
125
+ Domain Randomization. We randomize the robot's body mass, motor strength, and joint position calibration, the friction and restitution of the terrain, and the orientation and magnitude of gravity. This randomization facilitates sim-to-real transfer and makes the controller robust to different terrain geometries and material properties. All randomized parameters and ranges are given in Table 2.
126
+
127
+ Policy Architecture. We apply the Concurrent State Estimation architecture proposed by Ji et al. [16]. The "estimator module", a feedforward neural network, is trained using supervised learning to predict the mass, velocity, and height of the robot body, the friction and restitution of the ground, and the gravity vector. The input to this estimator module is a 15-step history of observations. The estimator output is concatenated with the state history as input to the policy body. The policy body is an MLP with hidden layer sizes $\left\lbrack {{512},{256},{128}}\right\rbrack$ ; the estimator module is an MLP with hidden layer sizes $\left\lbrack {{256},{128}}\right\rbrack$ ; both use ELU activations.
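+
+ A sketch of this architecture in PyTorch; the layer widths and ELU activations are from the text, while the observation, estimator-output, and action dimensions are placeholders:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def mlp(sizes, out_dim):
+     layers, d = [], sizes[0]
+     for h in sizes[1:]:
+         layers += [nn.Linear(d, h), nn.ELU()]
+         d = h
+     return nn.Sequential(*layers, nn.Linear(d, out_dim))
+
+ obs_dim, hist_len, est_dim, act_dim = 70, 15, 10, 12     # assumed dimensions
+
+ estimator = mlp([obs_dim * hist_len, 256, 128], est_dim)              # supervised estimator module
+ policy    = mlp([obs_dim * hist_len + est_dim, 512, 256, 128], act_dim)
+
+ def act(obs_history):                  # obs_history: (batch, hist_len * obs_dim)
+     est = estimator(obs_history)       # predicted body state and terrain parameters
+     return policy(torch.cat([obs_history, est], dim=-1))
+ ```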
128
+
129
+ | Gait | 0.0 m/s | 1.0 m/s | 2.0 m/s | 3.0 m/s |
+ | --- | --- | --- | --- | --- |
+ | Trotting | 9 ± 1 | 24 ± 1 | 53 ± 5 | 98 ± 9 |
+ | Pronking | 32 ± 1 | 43 ± 2 | 68 ± 5 | 112 ± 5 |
+ | Pacing | 13 ± 3 | 25 ± 2 | 55 ± 3 | 99 ± 6 |
+ | Bounding | 22 ± 2 | 39 ± 4 | 78 ± 5 | 127 ± 35 |
+ | Gait-free Baseline | 17 ± 5 | 35 ± 5 | 64 ± 10 | 102 ± 14 |
+ | Trotting (${\mathbf{f}}^{\mathrm{cmd}} = 2$ Hz) | 11 ± 2 | 25 ± 1 | 55 ± 4 | 104 ± 8 |
+ | Trotting (${\mathbf{f}}^{\mathrm{cmd}} = 3$ Hz) | 9 ± 1 | 24 ± 1 | 53 ± 5 | 98 ± 9 |
+ | Trotting (${\mathbf{f}}^{\mathrm{cmd}} = 4$ Hz) | 9 ± 1 | 26 ± 0 | 60 ± 4 | 114 ± 12 |
+ | Trotting (${\mathbf{h}}_{z}^{\mathrm{cmd}} = 20$ cm) | 9 ± 1 | 26 ± 1 | 56 ± 3 | 102 ± 8 |
+ | Trotting (${\mathbf{h}}_{z}^{\mathrm{cmd}} = 30$ cm) | 9 ± 1 | 24 ± 1 | 53 ± 5 | 98 ± 9 |
+ | Trotting (${\mathbf{h}}_{z}^{\mathrm{cmd}} = 40$ cm) | 10 ± 1 | 23 ± 1 | 52 ± 4 | 95 ± 9 |
171
+
172
+ Table 4: Power consumption (J/s) across speeds for common quadrupedal gaits and for policy without gait constraint. While efficiency varies across gaits and speeds, trotting and pacing gaits achieve efficiency competitive with unconstrained gait across speeds.
173
+
174
+ Task Curriculum. To enable the robot to both run and spin fast, we sample velocity commands using the grid adaptive curriculum strategy from Margolis et al. [12]. A discrete grid tabulates successful combinations of linear and angular velocity thus far. Velocity commands for the policy are sampled from a small neighborhood of the successful region. We grow a separate grid adaptive curriculum for each of the four major gaits: pronking, trotting, bounding, and pacing.
175
+
176
+ The procedure for sampling a command ${\mathbf{c}}_{t}$ is as follows: First, one of the four major gaits (pronk-ing, trotting, bounding, or pacing) is selected with equal probability. Then, a candidate velocity command $\left( {{\mathbf{v}}_{x}^{\text{ cmd }},{\mathbf{\omega }}_{z}^{\text{ cmd }}}\right)$ for the chosen gait is sampled following [12]. To let the policy learn to interpolate between major gaits, $\left( {{\mathbf{\theta }}_{1}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{2}^{\mathrm{{cmd}}},{\mathbf{\theta }}_{3}^{\mathrm{{cmd}}}}\right)$ are sampled from a Gaussian distribution that is centered at the chosen major gait. Finally, the remaining command parameters $\left( {{\mathbf{v}}_{y}^{\mathrm{{cmd}}},{\mathbf{f}}^{\mathrm{{cmd}}},{\mathbf{h}}_{z}^{\mathrm{{cmd}}}}\right)$ are sampled independently and uniformly. Their ranges are given in Table 2.
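+
+ A sketch of this sampling procedure (the Gaussian width, the curriculum interface, and all helper names are assumptions; the grid-adaptive curriculum itself follows [12] and is not reproduced here):
+
+ ```python
+ import random
+
+ MAJOR_GAITS = {"pronk": (0.0, 0.0, 0.0), "trot": (0.5, 0.0, 0.0),
+                "pace":  (0.0, 0.5, 0.0), "bound": (0.0, 0.0, 0.5)}
+
+ def sample_command(curriculum_grid, theta_sigma=0.1):
+     """curriculum_grid[gait] stands in for that gait's grid-adaptive curriculum [12]:
+     a list of currently unlocked (v_x, omega_z) pairs. theta_sigma is an assumed width."""
+     gait = random.choice(list(MAJOR_GAITS))                     # uniform over the four major gaits
+     vx, wz = random.choice(curriculum_grid[gait])               # velocity command from the curriculum
+     thetas = [min(max(random.gauss(c, theta_sigma), 0.0), 1.0)  # Gaussian around the gait center
+               for c in MAJOR_GAITS[gait]]
+     vy   = random.uniform(-0.6, 0.6)                            # remaining ranges from Table 2
+     freq = random.uniform(1.5, 4.0)
+     h_z  = random.uniform(0.10, 0.45)
+     return [vx, vy, wz, *thetas, freq, h_z]
+ ```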
177
+
178
+ § 3.3 DEPLOYMENT
179
+
180
+ We deploy our controller in the real world on the Unitree Go1 Edu robot [26]. The Go1 is a commercially available and relatively low-cost quadruped robot.
181
+
182
+ Computing Architecture: An onboard Jetson TX2 NX computer runs our trained policy. We implement an interface based on Lightweight Communications and Marshalling (LCM) [27] to pass sensor data, motor commands, and joystick state between our code and the low-level control SDK provided by Unitree.
183
+
184
+ Sensors: In addition to joint position encoders and an inertial measurement unit, the robot is equipped with foot contact sensors. We log the detected contact states for our analysis, but they are not provided to our controller.
185
+
186
+ § 4 EXPERIMENTAL RESULTS
187
+
188
+ § 4.1 LEARNING HIGH-SPEED QUADRUPEDAL GAITS
189
+
190
+ Our method enables a single learned controller to run and spin using all common quadrupedal gaits. Figure 3 shows the early progression of the velocity curriculum for each gait. Notably, the delayed progress of pacing and bounding suggests that these gaits may be more challenging to learn than trotting and pronking. This hypothesis is supported by the rarity of pacing and bounding in the reinforcement learning literature. Likewise, the high learnability of trotting aligns with its tendency to emerge from a variety of non-gait-specific auxiliary rewards [2, 11, 16, 17]. However, because our task sampling strategy provides explicit incentive to achieve all gaits, we find that fast and robust control strategies for pacing, bounding, and pronking all emerge as well.
191
+
192
+ | Gait | ${r}_{v,x,y}$ | ${r}_{{\omega }_{z}^{\text{cmd}}}$ | ${r}_{{c}_{f}^{\text{cmd}}}$ | ${r}_{{c}_{v}}$ | Survival |
+ | --- | --- | --- | --- | --- | --- |
+ | Trotting | ${0.80}_{\pm 0.01}^{(0.95)}$ | ${0.76}_{\pm 0.00}^{(0.89)}$ | ${0.95}_{\pm 0.00}^{(0.97)}$ | ${0.98}_{\pm 0.00}^{(0.98)}$ | ${0.88}_{\pm 0.01}^{(1.00)}$ |
+ | Pronking | ${0.84}_{\pm 0.01}^{(0.94)}$ | ${0.77}_{\pm 0.01}^{(0.85)}$ | ${0.96}_{\pm 0.00}^{(0.96)}$ | ${0.97}_{\pm 0.00}^{(0.98)}$ | ${0.82}_{\pm 0.02}^{(1.00)}$ |
+ | Pacing | ${0.76}_{\pm 0.01}^{(0.91)}$ | ${0.76}_{\pm 0.01}^{(0.81)}$ | ${0.94}_{\pm 0.00}^{(0.96)}$ | ${0.98}_{\pm 0.00}^{(0.98)}$ | ${0.87}_{\pm 0.02}^{(1.00)}$ |
+ | Bounding | ${0.80}_{\pm 0.01}^{(0.88)}$ | ${0.73}_{\pm 0.01}^{(0.86)}$ | ${0.94}_{\pm 0.00}^{(0.96)}$ | ${0.98}_{\pm 0.00}^{(0.98)}$ | ${0.82}_{\pm 0.01}^{(1.00)}$ |
+ | Gait-free | ${0.81}_{\pm 0.03}^{(0.96)}$ | ${0.74}_{\pm 0.06}$ | - | - | ${0.83}_{\pm 0.01}^{(1.00)}$ |
212
+
213
+ [graphics]
214
+
215
+ Table 5: Zero-shot robustness evaluation on discrete platform terrain (visualized right). The pacing and trotting gaits yield the best survival time during zero-shot deployment on this particular terrain, outperforming the gait-free baseline. Pronking attains the best velocity tracking performance of any gait, with a similar survival time to the gait-free baseline. Reward is reported as a fraction of the total possible episodic reward. Superscript reports performance in the flat training environment with no platforms. Subscript reports standard deviation across three random seeds.
216
+
217
+ § 4.2 SIM-TO-REAL TRANSFER AND GAIT SWITCHING
218
+
219
+ We deploy our controller in zero-shot sim-to-real manner using the robot and software architecture described in Section 3.3. We find that all gait parameters are correctly tracked after sim-to-real transfer. Figure 2 shows torques and contact states during transition between trotting, pronking, bounding, and pacing in place while alternating ${\mathbf{f}}^{\text{ cmd }}$ between $2\mathrm{{Hz}}$ and $4\mathrm{{Hz}}$ . In the bottom subfigure, good alignment between the colored rectangles, representing true contacts, and the dark shaded regions, representing desired contacts, reflects successful tracking across gaits and frequencies.
220
+
221
+ § 4.3 IMPACT OF GAIT SPECIFICATION ON LOCOMOTION PERFORMANCE
222
+
223
+ We probe the gait parameters of our trained model to quantify their role in the energy-efficiency, robustness, and speed of learned quadrupedal locomotion. We compare our gait-conditioned controller to a baseline velocity-tracking controller (the "gait-free baseline"). The gait-free baseline is trained by the same method as our gait-conditioned controller, but without task reward terms for contact timing or body posture.
224
+
225
+ (Intent of Analysis). The goal of our evaluation is not to endorse a specific gait for a specific scenario, for example to recommend pacing over pronking on one class of uneven terrain. The characteristics of specific gaits vary with different robot morphologies, training details, and environmental properties. We aim to illustrate that different styles of locomotion can have different performance characteristics on new tasks, and that by training a model that can achieve multiple styles, we grant the user a useful degree of freedom to exploit for any new task without task-specific retraining.
226
+
227
+ Energy Efficiency. We measure the power consumption (J/s) of symmetric quadrupedal gaits and the gait-free baseline. Table 4 reports the power consumption profile for each gait. The robot expends less energy while trotting or pacing than while pronking or bounding. One might hypothesize that the most energy-efficient gaits are also the easiest to learn, but our results go against this hypothesis (pronking emerges earlier than pacing; see Section 4.1). Additionally, fixed gaits achieve equal or lower energy consumption compared to the gait-free baseline at low and medium speeds. This suggests that policies trained using standard reward structure on flat ground are not achieving a gain in energy efficiency via uncommon adaptation of contact pattern, gait frequency, or body posture.
228
+
229
+ Robustness. We evaluate quadrupedal gaits on a non-flat test terrain consisting of randomly arranged platforms up to $8\mathrm{\;{cm}}$ in height (the "platform terrain", pictured beside Table 5). All policies in this work are trained only on flat ground, so the platform terrain is out-of-distribution. In Table 5, we report the mean task reward on the platform terrain for each of the major gaits and the gait-free baseline. We also report the survival time as the mean fraction of the maximum episode length (10 s) before failure. Episodes were terminated after 10 seconds if the agent had not fallen. In this experiment, velocity commands were limited to ${\mathbf{v}}_{x}^{\mathrm{{cmd}}} \in \left\lbrack {-2,2}\right\rbrack$ m/s and ${\mathbf{\omega }}_{z}^{\mathrm{{cmd}}} \in \left\lbrack {-2,2}\right\rbrack$ rad/s.
230
+
231
+ [graphics]
232
+
233
+ Figure 4: Gait parameter modulation supports the downstream task of crossing a gap wider than the robot's body length with a single leap. We show joint torques (top) and contact states (bottom) during an agile leap on pavement. The robot first accelerates to a target speed of $3\mathrm{\;m}/\mathrm{s}$ at a trot while increasing its step frequency from $2\mathrm{\;{Hz}}$ to $4\mathrm{\;{Hz}}$ , then switches to pronking at $2\mathrm{\;{Hz}}$ for one second, then decelerates to a standstill while trotting.
234
+
235
+ Different gaits yield different robustness characteristics on the platform terrain. Pacing has the highest survival time, higher than the gait-free policy, which indicates that it was able to remain stable on some terrains where the emergent gait on flat ground falls. However, pacing tracked the target velocity less accurately than other gaits. Pronking has the closest characteristics to the gait-free policy in terms of velocity tracking, and with a similar survival time.
236
+
237
+ In the real world, we demonstrate robustness with different gaits while traversing uneven, slippery, and granular terrain as well as stairs. The robot can traverse down stairs with a variety of gaits, but not up. Video showing a number of real world deployment scenarios is at the project website.
238
+
239
+ Velocity Tracking and Top Speed. We report the velocity tracking performance of multiple gaits and the gait-free baseline across a wide range of speeds including high-speed sprinting in Table 6. Removing gait constraints results in an improvement in velocity tracking task performance on flat ground. Heat maps break down the mean task reward for each velocity command, revealing that the gait-free approach does not offer much benefit at low speeds, but is better for achieving combinations of high linear and angular velocity.
240
+
241
+ § 4.4 THE DOWNSTREAM UTILITY OF GAIT MODULATION
242
+
243
+ Navigation of confined spaces. Different gaits and postures are useful for navigating confined spaces with limited width or height. We demonstrate crawling beneath an increasingly lowered bar. The robot was able to crawl under a ${22}\mathrm{\;{cm}}$ bar, and the robot body height is ${13}\mathrm{\;{cm}}$ , leaving a maximum of $9\mathrm{\;{cm}}$ of clearance between the robot and the ground, not accounting for vertical motion of the body during locomotion. We also find that the robot can pass between two narrowly spaced cinder blocks with a pacing gait, but gets stuck with the wider stance of the pronking gait.
244
+
245
+ Agile jumping. Modulation of contact schedule, velocity, and gait frequency at high speed can encode an agile jump. Figure 4 shows the contact states, joint torques, and estimated body velocity during a jump sequence. First, the robot is commanded to accelerate to $3\mathrm{\;m}/\mathrm{s}$ at a trot while increasing its stepping frequency from $2\mathrm{\;{Hz}}$ to $4\mathrm{\;{Hz}}$ . Second, a jump is initiated by commanding one second of pronking at $2\mathrm{{Hz}}$ . Finally, the robot decelerates by reversing the acceleration sequence. During the jump phase, the distance from the location of the robot's front feet at takeoff to the location of the hind feet upon landing is measured to be ${60}\mathrm{\;{cm}}$ .
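+
+ One way such a sequence could be scripted, as a timed list of gait commands; the speeds and frequencies are the ones quoted above, while the segment timings are assumptions (in practice the sequence is issued through the operator interface):
+
+ ```python
+ # (start time [s], gait, forward speed [m/s], stepping frequency [Hz])
+ jump_sequence = [
+     (0.0, "trot",  0.0, 2.0),   # stand, trotting in place
+     (1.0, "trot",  3.0, 4.0),   # accelerate to 3 m/s while stepping faster (2 Hz -> 4 Hz)
+     (2.0, "pronk", 3.0, 2.0),   # one second of 2 Hz pronking initiates the leap
+     (3.0, "trot",  0.0, 2.0),   # decelerate back to a standstill
+ ]
+ ```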
246
+
247
+ Choreographed Dance. We program a sequence of gait parameters to generate a dance routine synchronized to a jazz song with a tempo of 90 beats per minute. At this tempo, combinations of phase offsets 0, 0.25, and 0.5 with frequencies of ${1.5}\mathrm{\;{Hz}}$ and $3\mathrm{\;{Hz}}$ yield eighth, quarter, half, and full beat gaps between consecutive footsteps. We also modulate the body height and velocity in time with the music. An assistant script helps a human programmer procedurally generate a list of gait parameters at each timestep, which are fed into the neural network controller in open-loop fashion during deployment. The programmer is a robotics researcher who has no formal experience with dance, music theory, or choreography.
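+
+ The beat arithmetic behind this choice, as a small worked example:
+
+ ```python
+ BPM = 90
+ beat_s = 60.0 / BPM                          # ~0.667 s per beat
+
+ for freq_hz in (1.5, 3.0):                   # commanded stepping frequencies
+     cycle_s = 1.0 / freq_hz
+     for offset in (0.25, 0.5, 1.0):          # spacing between consecutive footfalls, in cycles
+         print(f"{freq_hz} Hz, offset {offset}: {offset * cycle_s / beat_s:.3f} beats")
+ # -> gaps of 1/8, 1/4, 1/2 and 1 beat, matching the subdivisions used in the routine
+ ```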
248
+
249
+ [graphics]
250
+
251
+ Table 6: Task reward reported as a fraction of the maximum possible reward for different gaits with ablations. Removing gait constraints results in an improvement in velocity tracking task performance on flat ground. Heat maps (right) break down the mean task reward for each velocity command, revealing that the gait-free approach is most beneficial for combinations of high linear and angular velocity.
252
+
253
+ We believe ours to be the most flexible and dynamic published system for legged robotic dance to date. Notable prior works in legged robotic dance include Bi et al. [28], Shao et al. [20], and the unpublished demonstrations by Boston Dynamics [29]. Bi et al. [28] focused on automatic generation of dance choreography, but their model-based controller was restricted to carefully lifting one foot at a time. Shao et al. [20] defined a comparable gait modulation language to ours, but their use of a complete reference motion library, rather than a contact timing reward, produced behaviors that are less fast and dynamic. Boston Dynamics' dance demonstrations on Spot and Atlas [29] are based on an unpublished analytically designed controller.
254
+
255
+ § 5 LIMITATIONS
256
+
257
+ We implement a task parameterization and training procedure that allow a teleoperator or high-level planner to specify the locomotion behavior of a quadruped in diverse ways. Specifying the locomotion task reduces the system's degree of autonomy. Our experiments show that this hinders the robot's performance at very high speeds (Section 4.3; Table 6), suggesting that successful sprinting departs meaningfully from our parameterized family of gaits. Reparameterizing the locomotion task so that it does not discourage useful sprinting strategies is an open direction for future work.
258
+
259
+ Our work describes a blind locomotion controller that does not make use of exteroceptive sensor data from the robot's onboard cameras. This prevents the system from adapting to the terrain in anticipation, which would be useful for tasks like upward stair traversal where reactive robustness alone is insufficient.
260
+
261
+ § 6 CONCLUSION
262
+
263
+ We have implemented and analyzed a novel combination of flexible gait specification and robust training procedure for learning quadrupedal locomotion. The resulting controller was modulated by a teleoperator to accomplish a number of previously inaccessible downstream tasks without task-specific retraining. This controller also exhibits a high degree of robustness with multiple gaits on uneven, slippery, and granular terrain, as well as stairs. Quantitative experiments suggest that online modulation of gait parameters can benefit RL policies in out-of-distribution environments, with the caveat that imposing such a parameterization can limit top-end sprinting performance.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/52uzsIGV32_/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,249 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Efficient and Stable Off-policy Training via Behavior-aware Evolutionary Learning
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Applying reinforcement learning (RL) algorithms to real-world continuous control problems faces many challenges in terms of sample efficiency, stability and exploration. Off-policy RL algorithms show great sample efficiency but can be unstable to train and require effective exploration techniques for sparse reward environments. A simple yet effective approach to address these challenges is to train a population of policies and ensemble them in certain ways. In this work, a novel population-based evolutionary training framework inspired by evolution strategies (ES) called Behavior-aware Evolutionary Learning (BEL) is proposed. The main idea is to train a population of behaviorally diverse policies in parallel and conduct selection with simple linear recombination. BEL consists of two mechanisms called behavior-regularized perturbation (BRP) and behavior-targeted training (BTT) to accomplish stable and fine control of the population behavior divergence. Experimental studies showed that BEL not only has superior sample efficiency and stability compared to existing methods, but can also produce diverse agents in sparse reward environments. Due to its parallel implementation, BEL also exhibits relatively good computation efficiency, making it a practical and competitive method to train policies for real-world robots.
12
+
13
+ Keywords: Continuous control, Reinforcement learning, Evolution strategies
14
+
15
+ ## 1 Introduction
16
+
17
+ Recent advances in reinforcement learning (RL) have proven that off-policy deep reinforcement learning (DRL) algorithms possess great potential in solving continuous control problems, especially in terms of sample efficiency [1] [2] [3]. However, off-policy algorithms are also known to be unstable or brittle [4] [5], which to some extent hinders the application of these algorithms to real-world robots. On the other hand, environments with sparse rewards, which are common in real-world scenarios, also present challenges in terms of exploration.
18
+
19
+ Along with DRL algorithms, another line of direct policy search methods called evolutionary algorithms (EA) has also shown significant success as a result of the improved computational efficiency of modern hardware and clever implementations [6] [7] [8]. Different from DRL algorithms, which exploit the sequential structure of the Markov Decision Process (MDP), EA algorithms treat policy search as a black-box optimization problem and utilize a population of randomly perturbed policies to search for better policies. While being less sample efficient, EA methods tend to enjoy properties like improved stability, efficient parallelization and better diversity due to the utilization of the EA population and EA operations.
20
+
21
+ Naturally, combining those two paradigms to get the best of both worlds has attracted considerable effort over the years [9] [10] [11] [12] [13]. The motivation behind these works is to inject gradient-trained DRL agents into the population and drive the population with policy gradient signals while enjoying the benefits of the EA population. Another perspective is that EA mutation can serve as parameter space noise and improve the exploration ability of RL agents [14]. Such combinations turned out to be very successful: the resulting hybrid algorithms can beat both their EA and DRL components.
22
+
23
+ However, we identified two important problems that remain unsolved by previous methods. The first problem is: how to randomly perturb the policy in a meaningful and controlled manner? A perturbation that is too small results in no significant changes, while one that is too large can lead to divergent training. Inspired by previous works [15] [12], we propose to solve this problem with Behavior-Regularized Perturbation (BRP), which can randomly perturb a policy network within a specified behavior divergence range in an online fashion. The second problem is: although perturbed networks result in different policies, once those policies undergo the same RL training process and sample from the same replay buffer, they may end up being similar, reducing the overall diversity. Ideally, we would like to have a population that is not only diverse after random perturbation, but also diverse after RL training. To accomplish this goal, Behavior-Targeted Training (BTT) is proposed, which assigns a randomly sampled target behavior divergence and injects it into the actor training process. The final proposed training framework is called Behavior-aware Evolutionary Learning (BEL). In BEL, training is conducted in a generational fashion which closely resembles a traditional evolution strategy (ES). In each generation, policies are first randomly generated by applying BRP to a central mean policy. Then, those offspring policies are trained in parallel with BTT. Finally, all trained policies undergo a weighted linear combination [16] and form the new mean policy for the next generation.
24
+
25
+ ## 2 Related works
26
+
27
+ Model-free off-policy RL algorithms are a class of sample efficient algorithms for continuous control tasks with relatively high dimensional action spaces [1] [17]. Built upon the actor-critic (AC) paradigm, where a pair of actor and critic networks are trained simultaneously, the Twin Delayed Deep Deterministic (TD3) algorithm [2] and the Soft Actor-Critic (SAC) algorithm [3] showed great success, and quickly became the go-to algorithms for sample efficient RL training.
28
+
29
+ Evolutionary algorithms have also gained attention due to the fact that they prove to be competitive alternatives to MDP-based methods [6] [7]. In [6], the authors developed a simplified natural evolution strategy (NES) [18]. The resulting OpenAI ES offers massive scalability while matching the performance of MDP-based methods. In [7], it was shown that a genetic algorithm (GA) was able to evolve networks with four million parameters and achieved competitive performance compared to gradient-based methods.
30
+
31
+ Combining evolutionary methods and policy gradient-based methods in order to benefit from the best of both worlds soon attracted much attention when [9] first proposed to evolve a population of agents with GA and periodically inject gradient information into the population. Their resulting algorithm ERL outperformed both GA and Deep Deterministic Policy Gradient (DDPG). In [11], the authors used a variant of ES called the cross entropy method (CEM) to evolve a population, half of which was composed of EA agents and the other half of TD3 agents. Their hybrid algorithm CEM-RL turned out to be very competitive and served as a strong baseline for derivative works. Later, [12] pointed out that traditional crossover and mutation operators widely used in GA can be detrimental in the sense that they could destroy learned behaviors. As a remedy, [12] proposed to conduct crossover with network distillation and mutation with SM-G-SUM [15], which proved to be able to keep learned network behaviors.
32
+
33
+ ## 3 Background
34
+
35
+ ### 3.1 Evolution Strategies (ES)
36
+
37
+ Evolution strategies (ES) belongs to the gradient-free black-box optimization algorithm family. It was first proposed by Rechenberg [19], and later developed by Schwefel [20]. Mimicking the natural evolution process, ES randomly generates a population of solution vectors (usually with a Gaussian distribution) whose fitness value will be evaluated in a problem-specific manner (for example episodic reward). In its canonical form, ES can be classified into two major versions: the $\left( {\mu ,\lambda }\right) - {ES}$ , where $\mu$ parents of the next generation are selected from the current $\lambda$ offsprings, and the $\left( {\mu + \lambda }\right) - {ES}$ , where the selection pool contains both the current parents and offsprings. The selection operation is usually a simple weighted linear recombination of the population vectors according to their fitness ranks. In this work, we adopt the simplest $\left( {1,\lambda }\right) - {ES}$ scheme, and model the population with a uniform distribution in terms of behavior divergence.
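+
+ A minimal sketch of such a weighted linear recombination for the $(1,\lambda)$ case (the specific rank-based weights are an illustrative choice, not prescribed here):
+
+ ```python
+ import numpy as np
+
+ def recombine(offspring_params, fitnesses):
+     """Rank-weighted linear recombination of flattened parameter vectors."""
+     order = np.argsort(fitnesses)[::-1]                    # best offspring first
+     ranks = np.arange(1, len(order) + 1)
+     weights = np.log(len(order) + 0.5) - np.log(ranks)     # illustrative rank weights
+     weights /= weights.sum()
+     return sum(w * offspring_params[i] for w, i in zip(weights, order))
+
+ # e.g. with lambda = 5 offspring parameter vectors and their episodic returns:
+ # new_mean = recombine([theta_1, ..., theta_5], [r_1, ..., r_5])
+ ```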
38
+
39
+ ### 3.2 Twin Delayed Deep Deterministic Policy Gradient (TD3)
40
+
41
+ As an RL algorithm, TD3 is built upon the Markov Decision Process (MDP) which is described by $< S, A, P, R,\gamma >$ . In this formulation, $S$ is the state space, $A$ is the action space, $P$ is the transition function, $R$ is the reward function and $\gamma$ is a discount factor [4]. The goal is to learn an optimal policy function $\pi$ to maximize the expected return $J\left( \theta \right) = {\mathbb{E}}_{s \sim {p}_{\pi }, a \sim \pi }\left\lbrack {R}_{0}\right\rbrack$ . TD3 solves this problem by adopting the actor-critic deterministic policy gradient [21] [1] [2], where a Q-function ${Q}_{\phi }$ is learned through the Bellman equation:
42
+
43
+ $$
44
+ {Q}^{\pi }\left( {s, a}\right) = r + \gamma {\mathbb{E}}_{{s}^{\prime },{a}^{\prime }}\left\lbrack {{Q}^{\pi }\left( {{s}^{\prime },{a}^{\prime }}\right) }\right\rbrack ,\;{a}^{\prime } \sim \pi \left( {s}^{\prime }\right) \tag{1}
45
+ $$
46
+
47
+ And then the policy function ${\pi }_{\theta }$ is optimized by the deterministic policy gradient:
48
+
49
+ $$
50
+ {\nabla }_{\theta }J\left( \theta \right) = {\mathbb{E}}_{s \sim {p}_{\pi }}\left\lbrack {{\left. {\nabla }_{a}{Q}^{\pi }\left( s, a\right) \right| }_{a = \pi \left( s\right) }{\nabla }_{\theta }{\pi }_{\theta }\left( s\right) }\right\rbrack \tag{2}
51
+ $$
52
+
53
+ For implementation, both ${Q}_{\phi }$ and ${\pi }_{\theta }$ are optimized with Monte-Carlo estimation with the help of a replay buffer $D$ ; the loss functions of ${Q}_{\phi }$ and ${\pi }_{\theta }$ are defined as follows:
54
+
55
+ $$
56
+ {\mathcal{L}}_{{Q}_{\phi }}^{TD3} = \underset{\left( {s, a, r,{s}^{\prime }}\right) \sim \mathcal{D}}{\mathrm{E}}\left\lbrack {\left( {Q}_{\phi }\left( s, a\right) - \left( r + \gamma \mathop{\max }\limits_{{a}^{\prime }}{Q}_{\phi }\left( {s}^{\prime },{a}^{\prime }\right) \right) \right) }^{2}\right\rbrack \tag{3}
57
+ $$
58
+
59
+ $$
60
+ {\mathcal{L}}_{{\pi }_{\theta }}^{TD3} = - \underset{s \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\underset{a \sim {\pi }_{\theta }}{\mathbb{E}}{Q}_{\phi }\left( {s, a}\right) }\right\rbrack \tag{4}
61
+ $$
62
+
63
+ In TD3, three tricks are used to make the above learning process more stable and alleviate the overestimation bias. The first trick is to learn two $Q$ functions and use the smaller $Q$ -value to form the target $\mathrm{Q}$ in eq. (3). The second trick is to delay the target network updates with regard to $\mathrm{Q}$ network updates. The third trick is to add noise to target actions to smooth out $\mathrm{Q}$ along changes in action [2].
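+
+ For reference, a sketch of how these three tricks are typically realized in standard TD3 [2] (this is not code from this paper; eq. (3) above writes the target with a max over $a^{\prime}$, which TD3 realizes in practice through the target actor and the clipped double-Q shown here, while the delayed updates live in the outer training loop):
+
+ ```python
+ import torch
+
+ def td3_target(r, s_next, done, target_actor, target_q1, target_q2,
+                gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
+     """Bellman target with clipped double-Q and target-action smoothing."""
+     with torch.no_grad():
+         a_next = target_actor(s_next)
+         noise = (torch.randn_like(a_next) * noise_std).clamp(-noise_clip, noise_clip)
+         a_next = (a_next + noise).clamp(-max_action, max_action)
+         q_next = torch.min(target_q1(s_next, a_next), target_q2(s_next, a_next))
+         return r + gamma * (1.0 - done) * q_next
+ ```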
64
+
65
+ ## 4 Behavior-aware Evolutionary Learning (BEL)
66
+
67
+ ### 4.1 BEL framework
68
+
69
+ The overall structure of BEL is outlined in fig. 1. In BEL, we maintain a center actor as the population center, and $\lambda$ actors as offsprings. In each generation, first, all offspring actors are initialized around the center actor with Behavior-Regularized Perturbation (BRP), which will be introduced in detail in section 4.2. Then each offspring actor undergoes Behavior-Targeted Training (BTT) as described in section 4.3. After these two processes, all offspring actors interact with the environment and save their experiences to the replay buffer. Finally, the population selection is conducted with a weighted linear recombination of network parameters according to the episodic rewards of the trained offspring actors to form the center actor for the next generation. This process is repeated until a termination criterion is met.
70
+
71
+ ### 4.2 Behavior-Regularized Perturbation (BRP)
72
+
73
+ Similar to the SM-G-SUM mutation operator used in [12] and [15], BRP relies on the calculation of the so-called parameter sensitivity with regard to network outputs. Given an actor network ${\pi }_{\theta }$ and a batch of transitions $i$ , BRP approximately measures how the overall output will vary with regard to
74
+
75
+ ![01963f45-086a-7423-aadd-df92cc2d6914_3_625_216_529_402_0.jpg](images/01963f45-086a-7423-aadd-df92cc2d6914_3_625_216_529_402_0.jpg)
76
+
77
+ Figure 1: An illustration of the BEL framework.
78
+
79
+ small changes of the neural network’s weights $\theta$ through the aggregation of backward gradients of each output node $k$ on data batch $i$ . For each parameter in ${\pi }_{\theta }$ , its sensitivity sens is calculated by:
80
+
81
+ $$
82
+ \text{sens} = \sqrt{\mathop{\sum }\limits_{k}{\left( \mathop{\sum }\limits_{i}\left| {\nabla }_{\theta }{\pi }_{\theta }{\left( {s}_{i}\right) }_{k}\right| \right) }^{2}} \tag{5}
83
+ $$
84
+
85
+ A large value of sens indicates that a small change to the corresponding parameter will lead to a large change in the action output and vice versa. Denoting the overall sensitivity of all parameters as $\operatorname{Sens}$ , it is then used as the coefficient of the linear transformation below:
86
+
87
+ $$
88
+ \operatorname{Vec}\left( \widetilde{\pi }\right) = \operatorname{Vec}\left( {\pi }_{\theta }\right) + \frac{\delta }{\text{ Sens }} \tag{6}
89
+ $$
90
+
91
+ In eq. (6), $\operatorname{Vec}\left( {\pi }_{\theta }\right)$ means network parameters represented as a one-dimensional vector. ${\widetilde{\pi }}_{\theta }$ is the perturbed policy network and $\delta$ is a random vector that determines the perturbation magnitude and direction.
92
+
93
+ Unlike previous methods [22] [12] where $\delta$ is randomly sampled from a constant-scaled Gaussian distribution, BRP instead adaptively searches for a proper $\delta$ within a certain magnitude that can bound the behavior divergence of the perturbed network. This idea is similar to the parameter noise adaptation method in [14]. BRP adopts the Euclidean norm as the behavior divergence measure: $d\left( {{\pi }_{\theta }\left( s\right) ,\widetilde{\pi }\left( s\right) }\right) = \sqrt{\frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}{\mathbb{E}}_{s}\left\lbrack {\left( {\pi }_{\theta }\left( s\right) - \widetilde{\pi }\left( s\right) \right) }^{2}\right\rbrack }$ . Given a behavior divergence upper bound ${\Delta }_{\max }^{BRP}$ , to generate one perturbed network $\widetilde{\pi }$ , BRP first randomly samples a divergence ${\Delta }_{i}$ , and then conducts a simple iterative linear search to find a proper $\delta$ ; the detailed procedure is summarized in Algorithm 1. The final output is a set of $\lambda$ randomly perturbed policy networks following a uniform distribution in the behavior divergence space. Note that unlike previous implementations, which only approximately calculated $\operatorname{Sens}$ , our implementation calculates $\operatorname{Sens}$ exactly with the help of PyTorch hooks, and generating five perturbed networks can be done within 0.2 seconds.
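+
+ A simplified PyTorch sketch of eq. (5), eq. (6), and the iterative linear search of Algorithm 1 (a slow reference version written by us; the exact hook-based implementation mentioned above is not reproduced, and the default constants are assumptions):
+
+ ```python
+ import copy
+ import torch
+
+ def parameter_sensitivity(pi, states):
+     """Eq. (5): per-parameter sensitivity of the actor outputs (slow reference version)."""
+     params = list(pi.parameters())
+     act_dim = pi(states[:1]).shape[-1]
+     sq_sum = [torch.zeros_like(p) for p in params]
+     for k in range(act_dim):                            # outer sum over output nodes k
+         acc = [torch.zeros_like(p) for p in params]
+         for s in states:                                 # inner sum over samples i of |grad|
+             out_k = pi(s.unsqueeze(0))[0, k]
+             grads = torch.autograd.grad(out_k, params, allow_unused=True)
+             acc = [a + (g.abs() if g is not None else 0.0) for a, g in zip(acc, grads)]
+         sq_sum = [q + a ** 2 for q, a in zip(sq_sum, acc)]
+     return [q.sqrt() for q in sq_sum]
+
+ def brp_perturb(pi, states, delta_max, eps=1e-3, beta=0.9, delta_init=0.1, max_iters=100):
+     """Eq. (6) plus the iterative linear search of Algorithm 1 (simplified)."""
+     sens = parameter_sensitivity(pi, states)
+     target = float(torch.rand(())) * delta_max           # Delta_i ~ U[0, delta_max]
+     direction = [torch.randn_like(p) for p in pi.parameters()]
+     scale = delta_init
+     with torch.no_grad():
+         for _ in range(max_iters):
+             pert = copy.deepcopy(pi)
+             for p, d, s in zip(pert.parameters(), direction, sens):
+                 p.add_(scale * d / (s + 1e-8))           # eq. (6): theta + delta / Sens
+             div = ((pi(states) - pert(states)) ** 2).mean().sqrt().item()
+             if abs(div - target) <= eps:
+                 break
+             scale = scale / beta if div < target else scale * beta
+     return pert
+ ```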
94
+
95
+ ### 4.3 Behavior-Targeted Training (BTT)
96
+
97
+ BRP generates policy networks through random local perturbation; BTT, on the other hand, generates trained policies that stay within a behavior divergence range of the center policy. Consider one offspring policy ${\widetilde{\pi }}_{i}$ generated by BRP: we would like its behavior divergence after training ${\Delta }_{i}^{\text{trained }}$ to lie in the range defined by an upper bound: ${\Delta }_{i}^{\text{trained }} \in \left\lbrack {0,{\Delta }_{\max }^{BTT}}\right\rbrack$ .
98
+
99
+ To achieve this goal, we gained inspiration from imitation learning. In each generation of policy gradient training, the actor network aims to optimize two objectives. The first objective is the traditional RL objective as in eq. (4). For the second objective, consider a batch of states $s$ sampled from the replay buffer: the actions of the center policy ${\pi }_{\theta }$ and the offspring policy ${\widetilde{\pi }}_{i}$ in those states are calculated as ${a}_{\theta } = {\pi }_{\theta }\left( s\right)$ and ${a}_{i} = {\widetilde{\pi }}_{i}\left( s\right)$ . Then as depicted in fig. 2(a), we construct a
100
+
101
+ Algorithm 1 Behavior-Regularized Perturbation
102
+
103
+ ---
104
+
105
+ 1: Input: Population center policy ${\pi }_{\theta }$ , population size $\lambda$ , error bound $\epsilon$ , initial magnitude scalar
106
+
107
+ ${\delta }_{\text{init }}$ , a batch of states $s$ , and $\beta \in \left\lbrack {0,1}\right\rbrack$
108
+
109
+ Calculate Sens for ${\pi }_{\theta }$ according to eq. (5)
110
+
111
+ for $i = 1$ to $\lambda$ do
112
+
113
+ Sample a target divergence ${\Delta }_{i} \sim {U}_{\left\lbrack 0,{\Delta }_{\max }^{BRP}\right\rbrack }$ , sample a random direction from $\delta \sim N\left( {0,1}\right)$
114
+
115
+ Get initial perturbed network ${\widetilde{\pi }}_{i}$ with Sens according to eq. (6)
116
+
117
+ while $\left| {d\left( {{\pi }_{\theta }\left( s\right) ,\widetilde{{\pi }_{i}}\left( s\right) }\right) - {\Delta }_{i}}\right| > \epsilon$ do
118
+
119
+ if $d\left( {{\pi }_{\theta }\left( s\right) ,{\widetilde{\pi }}_{i}\left( s\right) }\right) < {\Delta }_{i}$ then
120
+
121
+ $\delta = \frac{\delta }{\beta }$
122
+
123
+ else
124
+
125
+ $\delta = \delta * \beta$
126
+
127
+ end if
128
+
129
+ Get perturbed network ${\widetilde{\pi }}_{i}$ with Sens and $\delta$
130
+
131
+ end while
132
+
133
+ end for
134
+
135
+ Output: Perturbed policies $\left\{ {{\widetilde{\pi }}_{i} \mid i = 1,\ldots ,\lambda }\right\}$
136
+
137
+ ---
138
+
139
+ "behavior potential well" with a one dimensional Gaussian distribution to force the negative log-likelihood of the Euclidean distance between ${a}_{\theta }$ and ${a}_{i}$ to stay close to the bottom of the Gaussian whose mean is defined by a sampled and fixed ${\Delta }_{i}^{\text{target }}$ , standard deviation is defined by a predefined ${\sigma }_{BTT}$ . This process results in the following training objective for BTT:
140
+
141
+ $$
142
+ {L}_{\widetilde{{\pi }_{i}}}^{BTT} = {L}_{\widetilde{{\pi }_{i}}}^{TD3} - \alpha \ln \left\lbrack {\frac{1}{{\sigma }_{BTT}\sqrt{2\pi }}{e}^{-\frac{{\left\lbrack d\left( {\pi }_{\theta }\left( s\right) ,\widetilde{\pi }\left( s\right) \right) - {\Delta }_{i}^{\text{target }}\right\rbrack }^{2}}{2{\sigma }_{BTT}^{2}}}}\right\rbrack \tag{7}
143
+ $$
144
+
145
+ Following eq. (7), the policy network will try to simultaneously follow the policy gradient and stay inside the behavior potential well to roughly keep a ${\Delta }_{i}^{\text{target }}$ divergence to the center policy. $\alpha$ is a hyper-parameter balancing the two objectives. To determine ${\Delta }_{i}^{\text{target }}$ , we simply sample from a uniform distribution as in BRP: ${\Delta }_{i}^{\text{target }} \sim {U}_{\left\lbrack 0,{\Delta }_{\text{max }}^{\text{BTT }}\right\rbrack }$ . As ${\sigma }_{BTT}$ directly controls the steepness of the Gaussian distribution, a larger ${\sigma }_{BTT}$ means less restriction on the policy's divergence and vice versa.
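+
+ A small PyTorch sketch of the resulting extra actor-loss term (the second term of eq. (7)); detaching the center policy's actions is our assumption about how gradients are routed:
+
+ ```python
+ import math
+ import torch
+
+ def btt_penalty(center_actions, offspring_actions, delta_target, sigma_btt):
+     """Negative Gaussian log-likelihood of the behavior divergence (second term of eq. (7))."""
+     # Euclidean behavior divergence d(pi_theta(s), pi_i(s)); center actions are detached so
+     # that only the offspring policy is pulled toward the target divergence.
+     d = ((center_actions.detach() - offspring_actions) ** 2).mean().sqrt()
+     log_lik = (-0.5 * ((d - delta_target) / sigma_btt) ** 2
+                - math.log(sigma_btt * math.sqrt(2.0 * math.pi)))
+     return -log_lik
+
+ # overall actor objective, eq. (7): loss = td3_actor_loss + alpha * btt_penalty(...)
+ ```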
146
+
147
+ ## 5 Experiments
148
+
149
+ ### 5.1 Exploratory studies
150
+
151
+ As previous studies have shown [11] [14], perturbing Tanh-activated networks is easier than perturbing ReLU-activated networks. As perturbing networks with BRP is straightforward and network-architecture agnostic, we conducted a comparative study to see how these two types of networks respond to BRP. To be specific, we randomly sampled directions and recorded the behavior divergence changes along those directions. As shown in fig. 2(b) and fig. 2(c), where the x-axis is the percentage of the positive sign in one direction, the y-axis is the magnitude along that direction and the color-scale measures the behavior divergence (the brighter the larger), it is clear that randomly perturbing Tanh-activated networks has a larger chance of inducing significant behavior changes, which explains why Tanh-activated networks are generally favored in perturbation-based methods.
152
+
153
+ To verify that the behavior divergences of actors trained by BTT are uniformly distributed (since ${\Delta }_{i}^{\text{target}}$ is sampled from a uniform distribution), we trained two BEL instances with $\alpha = {0.0}$ (without BTT) and $\alpha = {1.0}$ (with BTT). From fig. 2(d), it is obvious that without BTT the trained policies are quite concentrated, whereas when BTT is applied the behavior divergences of the population consistently follow the uniform distribution. To further verify that BTT can lead to diverse behaviors, we plotted the state visitation map on the DelayedHalfCheetah-v3 environment to visualize how the offspring policies explore different states. As shown in fig. 2(e), one generation of BEL's BTT-trained population visits very different states, while the naively trained population without BTT does not show the same level of diversity.
154
+
155
+ ![01963f45-086a-7423-aadd-df92cc2d6914_5_484_215_836_558_0.jpg](images/01963f45-086a-7423-aadd-df92cc2d6914_5_484_215_836_558_0.jpg)
156
+
157
+ Figure 2: (a) BTT illustration. The circles represent actors. Due to the log-likelihood training objective, actors will try to adapt their behaviors around their Gaussian centers. (b) BRP for a Tanh-activated network. (c) BRP for a ReLU-activated network. (d) Trained offspring behavior divergence illustration. (e) State visitation density map of five actors trained with BTT (first row) versus without BTT (second row) after one generation.
158
+
159
+ ![01963f45-086a-7423-aadd-df92cc2d6914_5_515_993_765_258_0.jpg](images/01963f45-086a-7423-aadd-df92cc2d6914_5_515_993_765_258_0.jpg)
160
+
161
+ Figure 3: (a) BRP ablation; max delta $= 0$ corresponds to no BRP. (b) BTT ablation; alpha $= 0$ corresponds to no BTT, and a larger alpha means more constrained behavior. (c) Recombination ablation.
162
+
163
+ ### 5.2 Ablative studies
164
+
165
+ To verify the effectiveness of BRP, we tested different ${\Delta }_{\max }^{BRP}$ settings on the Walker2d-v3 environment. From fig. 3(a), it is noticeable that BRP not only significantly speeds up the learning process but also helps avoid local optima. In fact, we find that when BRP is applied, critic networks tend to incur consistently larger training loss throughout training. This phenomenon indicates that BRP indeed introduces another level of behavior uncertainty, which forces the critics to make better predictions.
166
+
167
+ An ablation study on the DelayedHalfCheetah-v3 environment is conducted to show that BTT is indeed helpful for exploration. DelayedHalfCheetah-v3 is a modified HalfCheetah-v3 environment where the reward is manually delayed for 20 time steps, making it a difficult sparse-reward environment. The proportion of the log-likelihood objective is tuned with $\alpha$ . We observe from fig. 3(b) that when $\alpha$ is set to zero, which means no BTT in the training objective, BEL cannot explore effectively. On the other hand, when $\alpha$ is too large, actors may also lose performance since their behaviors are over-constrained.
168
+
169
+ As [12] [15] pointed out, many operators in EA are designed for black-box optimization and can be potentially harmful for neural networks. An experiment comparing weighted linear recombination and distillation-based recombination was designed on the DelayedHalfCheetah-v3 environment, where all parts of BEL are kept the same except for the recombination phase. In this phase, offspring actors are ranked according to their latest episodic rewards, and then the $\lambda$ offspring actors are treated as
170
+
171
+ Table 1: Numerical results for final best mean reward of different algorithms on selected tasks
172
+
173
+ <table><tr><td colspan="2">TASKS\\ALGORITHMS STATISTICS</td><td>BEL (Ours)</td><td>SAC</td><td>TD3</td><td>CEM-RL</td><td>PDERL</td></tr><tr><td rowspan="4">HALFCHEETAH-V3</td><td>MEAN</td><td>12725.39</td><td>10482.39</td><td>10408.62</td><td>10636.94</td><td>6917.24</td></tr><tr><td>STD</td><td>202.89</td><td>1253.81</td><td>1093.64</td><td>2131.36</td><td>444.95</td></tr><tr><td>MEDIAN</td><td>12751.99</td><td>11058.84</td><td>10810.48</td><td>11323.23</td><td>7026.15</td></tr><tr><td>WALLCLOCK</td><td>4.50H</td><td>5.34H</td><td>$\mathbf{{2.31H}}$</td><td>5.52H</td><td>${3.20}\mathrm{H}$</td></tr><tr><td rowspan="4">ANT-V3</td><td>MEAN</td><td>6082.41</td><td>5208.65</td><td>5090.81</td><td>3455.95</td><td>1609.40</td></tr><tr><td>STD</td><td>166.28</td><td>282.64</td><td>651.13</td><td>1359.87</td><td>542.42</td></tr><tr><td>MEDIAN</td><td>6147.19</td><td>5259.90</td><td>5385.89</td><td>3487.73</td><td>1582.24</td></tr><tr><td>WALLCLOCK</td><td>5.81 H</td><td>8.03H</td><td>$\mathbf{{3.09}}\mathrm{H}$</td><td>6.47H</td><td>3.61H</td></tr><tr><td rowspan="4">WALKER2D-V3</td><td>MEAN</td><td>5723.30</td><td>4637.03</td><td>3855.60</td><td>4173.30</td><td>1588.51</td></tr><tr><td>STD</td><td>498.38</td><td>414.19</td><td>760.91</td><td>1153.97</td><td>641.26</td></tr><tr><td>MEDIAN</td><td>6087.36</td><td>4682.27</td><td>4138.82</td><td>4358.34</td><td>1253.77</td></tr><tr><td>WALLCLOCK</td><td>4.14H</td><td>7.54H</td><td>$\mathbf{{2.54H}}$</td><td>5.91H</td><td>3.42H</td></tr><tr><td rowspan="4">HOPPER-V3</td><td>MEAN</td><td>3717.14</td><td>3543.35</td><td>3426.26</td><td>3597.87</td><td>1293.66</td></tr><tr><td>STD</td><td>101.08</td><td>103.29</td><td>192.31</td><td>495.28</td><td>356.54</td></tr><tr><td>Median</td><td>3740.41</td><td>3580.16</td><td>3333.04</td><td>3749.87</td><td>1160.93</td></tr><tr><td>WALLCLOCK</td><td>${4.29}\mathrm{H}$</td><td>7.94H</td><td>$\mathbf{{2.30H}}$</td><td>5.82H</td><td>${3.35}\mathrm{H}$</td></tr><tr><td rowspan="4">HUMANOID-V3</td><td>MEAN</td><td>5337.20</td><td>5617.94</td><td>5319.09</td><td>215.79</td><td>815.96</td></tr><tr><td>STD</td><td>113.53</td><td>133.93</td><td>114.38</td><td>0.44</td><td>90.86</td></tr><tr><td>MEDIAN</td><td>5364.52</td><td>5588.50</td><td>5333.41</td><td>215.76</td><td>821.11</td></tr><tr><td>WALLCLOCK</td><td>7.74H</td><td>8.48H</td><td>$\mathbf{{4.51H}}$</td><td>9.25H</td><td>4.85H</td></tr><tr><td rowspan="4">DELAYED- HALFCHEETAH-V3</td><td>MEAN</td><td>6777.87</td><td>4763.14</td><td>4730.74</td><td>6276.42</td><td>2865.77</td></tr><tr><td>STD</td><td>596.49</td><td>758.29</td><td>806.42</td><td>857.72</td><td>658.37</td></tr><tr><td>MEDIAN</td><td>6857.65</td><td>4734.52</td><td>4469.37</td><td>6372.91</td><td>3095.51</td></tr><tr><td>WALLCLOCK</td><td>4.69 $\mathrm{H}$</td><td>5.73H</td><td>$\mathbf{{2.45H}}$</td><td>6.82H</td><td>${3.15}\mathrm{H}$</td></tr></table>
174
+
175
+ teacher networks. Their actions on $N$ sampled observations are recorded as demonstrations. Then, the center actor is treated as the student network to imitate offspring actors. To give different actors different importance, the following weighted imitation training objective is constructed:
176
+
177
+ $$
178
+ \mathcal{L}\left( {\pi }_{\theta }\right) = \mathop{\sum }\limits_{{k = 1}}^{N}\mathop{\sum }\limits_{{i = 1}}^{\lambda }{\omega }_{i}{\begin{Vmatrix}{\pi }_{\theta }\left( {s}_{k}\right) - {\widetilde{\pi }}_{i}\left( {s}_{k}\right) \end{Vmatrix}}^{2},\mathop{\sum }\limits_{{i = 1}}^{\lambda }{\omega }_{i} = 1 \tag{8}
179
+ $$
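
A minimal sketch of this distillation-based recombination baseline is given below (PyTorch-style). The rank-based choice of the weights ${\omega }_{i}$ is illustrative; eq. (8) only requires that they sum to one, and all other names are placeholders.

```python
import torch

def distillation_recombination(center_actor, offspring_actors, episodic_rewards,
                               states, optimizer, epochs=10):
    """Sketch of eq. (8): the center actor imitates the ranked offspring actors."""
    rewards = torch.tensor(episodic_rewards, dtype=torch.float32)
    ranks = torch.argsort(torch.argsort(rewards)).float() + 1.0  # 1 = worst offspring
    omegas = ranks / ranks.sum()                                  # weights sum to one

    # Offspring actions on the sampled observations serve as demonstrations.
    with torch.no_grad():
        demos = [actor(states) for actor in offspring_actors]

    for _ in range(epochs):
        preds = center_actor(states)
        loss = sum(w * ((preds - demo) ** 2).sum(dim=-1).mean()
                   for w, demo in zip(omegas, demos))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```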
180
+
181
+ Much to our surprise, although the distillation-based method seems to learn a bit faster in the early phase, it quickly falls into a local optimum and can hardly make its way out. This experiment showed that although naive linear recombination may break the behavior of the output network to some extent, this kind of behavior uncertainty may result in extra exploration, which is beneficial.
182
+
183
+ ### 5.3 Comparison to state-of-the-art RL and EA-RL methods
184
+
185
+ In this section, the performance of the proposed BEL is compared against pure RL methods including TD3 [2], SAC [3], and TD3-Ensemble, as well as EA-RL methods including CEM-RL [11] and an improved version of ERL called PDERL [12]. For TD3, CEM-RL, and PDERL, we used the code published by the original authors. For SAC, the stable-baselines3 library is used. Every algorithm is run on the same machine, and the results we obtained were close to those reported in the original papers. Five tasks from the MuJoCo continuous control benchmark are selected. Swimmer-v3 is excluded since we found that tuning the reward discount factor to 0.9999 makes all algorithms perform more or less the same, reaching approximately 350 reward. An additional DelayedHalfCheetah-v3 environment is constructed by delaying the reward signal for 20 time steps, making it a hard exploration task. Following the convention in the literature, the learning curves of all algorithms are aggregated over 10 repeated runs across one million time steps, and the evaluated policies are tested 10 times. For BEL, the population center policy is used for testing. Note that although BEL trains the population in a parallel fashion, for fair comparison the total time steps are aggregated over every policy interacting with the environment.
186
+
187
+ ![01963f45-086a-7423-aadd-df92cc2d6914_7_481_199_826_487_0.jpg](images/01963f45-086a-7423-aadd-df92cc2d6914_7_481_199_826_487_0.jpg)
188
+
189
+ Figure 4: Learning curves on 6 MuJoCo environments in one million time steps.
190
+
191
+ Sample efficiency It can be seen from fig. 4 that BEL turns out to be very competitive against the competing methods in terms of sample efficiency. On the one hand, it picks up reward signals faster than the other methods, indicating high sample efficiency. On the other hand, its final best performance outperforms the other methods on all tasks except Humanoid-v3.
192
+
193
+ BEL versus TD3-Ensemble Since multiple actor-critic pairs are trained in BEL, it is natural to ask whether the good performance of BEL simply comes from its ensemble nature. To answer this question, we tested the performance of TD3-Ensemble, where an equal number of actor-critic pairs are trained and all hyperparameters are kept as close as possible. From table 1, it is clear that BEL outperforms TD3-Ensemble on all tasks.
194
+
195
+ Stability As can be seen from fig. 4, BEL also generally has smaller standard deviations across runs, even compared to other population-based evolutionary methods with larger population sizes, which means BEL is very stable. Another phenomenon that suggests BEL's robustness appears in the Humanoid-v3 environment, where the naive TD3-Ensemble shares the same learning rate as BEL (which is larger than that of single-instance TD3) but fails to learn stably.
196
+
197
+ Computation efficiency Since all experiments are conducted on the same machine and entirely on CPUs, we also recorded the median wall-clock running time of all algorithms. TD3 is the fastest algorithm as it is also the most lightweight one. PDERL ranks second because not all policies in its population are trained; a large portion of the population is directly evaluated after perturbation. BEL ranks third among all algorithms and is generally faster than SAC and CEM-RL. We think BEL reaches a good balance between sample efficiency and computation overhead.
198
+
199
+ Limitations Though generally good performance can be expected from BEL, it still has the following limitations. First, as multiple networks are trained in parallel, a computation node with a multi-core CPU and relatively large RAM is required. Second, as can be seen from Humanoid-v3, where BEL does not outperform SAC, BRP and BTT may not scale very well as the action-space dimension grows. Further studies regarding the scalability of BEL need to be conducted.
200
+
201
+ ## 6 Conclusion
202
+
203
+ In this work, a novel population-based evolutionary training framework for off-policy RL algorithms called BEL is proposed. Exploratory and ablative studies show the effectiveness of BRP and BTT. Benchmark comparisons show that BEL outperforms state-of-the-art RL and EA-RL methods in terms of sample efficiency. The training pipeline is conceptually simple and we offer an efficient parallel implementation. Along with the improved stability and exploration ability, we believe BEL can serve as a competitive training method for real-world robot learning with off-policy RL algorithms.
204
+
205
+ References
206
+
207
+ [1] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. URL http://arxiv.org/abs/1509.02971.
208
+
209
+ [2] S. Fujimoto, H. van Hoof, and D. Meger. Addressing Function Approximation Error in Actor-Critic Methods. URL http://arxiv.org/abs/1802.09477.
210
+
211
+ [3] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. URL http://arxiv.org/abs/1801.01290.
212
+
213
+ [4] R. S. Sutton and A. Barto. Reinforcement Learning: An Introduction. Adaptive Computation and Machine Learning. The MIT Press, second edition edition. ISBN 978-0-262-03924-6.
214
+
215
+ [5] H. van Hasselt, Y. Doron, F. Strub, M. Hessel, N. Sonnerat, and J. Modayil. Deep Reinforcement Learning and the Deadly Triad. URL http://arxiv.org/abs/1812.02648.
216
+
217
+ [6] T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever. Evolution Strategies as a Scalable Alternative to Reinforcement Learning. URL https://arxiv.org/abs/1703.03864v2.
218
+
219
+ [7] F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune. Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning. URL http://arxiv.org/abs/1712.06567.
220
+
221
+ [8] H. Mania, A. Guy, and B. Recht. Simple random search provides a competitive approach to reinforcement learning. URL http://arxiv.org/abs/1803.07055.
222
+
223
+ [9] S. Khadka and K. Tumer. Evolution-Guided Policy Gradient in Reinforcement Learning. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc. URL https://proceedings.neurips.cc/paper/2018/hash/85fc37b18c57097425b52fc7afbb6969-Abstract.html.
224
+
225
+ [10] S. Khadka, S. Majumdar, T. Nassar, Z. Dwiel, E. Tumer, S. Miret, Y. Liu, and K. Tumer. Collaborative Evolutionary Reinforcement Learning. URL http://arxiv.org/abs/1905.00976.
226
+
227
+ [11] A. Pourchot, N. Perrin, and O. Sigaud. Importance mixing: Improving sample reuse in evolutionary policy search methods. URL http://arxiv.org/abs/1808.05832.
228
+
229
+ [12] C. Bodnar, B. Day, and P. Lió. Proximal Distilled Evolutionary Reinforcement Learning. 34 (04):3283-3290. ISSN 2374-3468, 2159-5399. doi:10.1609/aaai.v34i04.5728.
230
+
231
+ [13] K. Lee, B.-U. Lee, U. Shin, and I. S. Kweon. An Efficient Asynchronous Method for Integrating Evolutionary and Gradient-based Policy Search. 33:10124-10135. URL https://papers.nips.cc/paper/2020/hash/731309c4bb223491a9f67eac5214fb2e-Abstract.html.
232
+
233
+ [14] M. Plappert, R. Houthooft, P. Dhariwal, S. Sidor, R. Y. Chen, X. Chen, T. Asfour, P. Abbeel, and M. Andrychowicz. Parameter Space Noise for Exploration. URL http://arxiv.org/abs/1706.01905.
234
+
235
+ [15] J. Lehman, J. Chen, J. Clune, and K. O. Stanley. Safe Mutations for Deep and Recurrent Neural Networks through Output Gradients. URL http://arxiv.org/abs/1712.06563.
236
+
237
+ [16] N. Hansen. The CMA Evolution Strategy: A Tutorial. URL http://arxiv.org/abs/1604.00772.
238
+
239
+ [17] B. Recht. A Tour of Reinforcement Learning: The View from Continuous Control. 2(1): 253-279. doi:10.1146/annurev-control-053018-023825.
240
+
241
+ [18] D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, and J. Schmidhuber. Natural Evolution Strategies. URL http://arxiv.org/abs/1106.4487.
242
+
243
+ [19] I. Rechenberg. Evolutionsstrategie optimierung technischer systeme nach prinzipien der biologischen evolution. URL https://scholar.google.com/scholar_lookup?title=Evolutionsstrategie+Optimierung+technischer+Systeme+nach+Prinzipien+der+biologischen+Evolution&author=Rechenberg%2C+Ingo.&publication_year=1973.
244
+
245
+ [20] H.-P. Schwefel. Evolutionsstrategien für die numerische optimierung. In H.-P. Schwefel, editor, Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie: Mit einer vergleichenden Einführung in die Hill-Climbing- und Zufallsstrategie, Interdisciplinary Systems Research / Interdisziplinäre Systemforschung, pages 123-176. Birkhäuser. ISBN 978-3-0348-5927-1. doi:10.1007/978-3-0348-5927-1_5.
246
+
247
+ [21] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic Policy Gradient Algorithms. In International Conference on Machine Learning, pages 387-395. PMLR. URL http://proceedings.mlr.press/v32/silver14.html.
248
+
249
+ [22] J. Lehman, J. Chen, J. Clune, and K. O. Stanley. ES is more than just a traditional finite-difference approximator. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO '18, pages 450-457. Association for Computing Machinery. ISBN 978-1-4503- 5618-3. doi:10.1145/3205455.3205474.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/52uzsIGV32_/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,275 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § EFFICIENT AND STABLE OFF-POLICY TRAINING VIA BEHAVIOR-AWARE EVOLUTIONARY LEARNING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Applying reinforcement learning (RL) algorithms to real-world continuous control problems faces many challenges in terms of sample efficiency, stability, and exploration. Off-policy RL algorithms show great sample efficiency but can be unstable to train and require effective exploration techniques for sparse-reward environments. A simple yet effective approach to address these challenges is to train a population of policies and ensemble them in certain ways. In this work, a novel population-based evolutionary training framework inspired by evolution strategies (ES), called Behavior-aware Evolutionary Learning (BEL), is proposed. The main idea is to train a population of behaviorally diverse policies in parallel and conduct selection with simple linear recombination. BEL consists of two mechanisms called behavior-regularized perturbation (BRP) and behavior-targeted training (BTT) to accomplish stable and fine control of the population behavior divergence. Experimental studies showed that BEL not only has superior sample efficiency and stability compared to existing methods, but can also produce diverse agents in sparse-reward environments. Due to the parallel implementation, BEL also exhibits relatively good computation efficiency, making it a practical and competitive method to train policies for real-world robots.
12
+
13
+ Keywords: Continuous control, Reinforcement learning, Evolution strategies
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Recent advances in reinforcement learning (RL) have proven that off-policy deep reinforcement learning (DRL) algorithms possess great potential in solving continuous control problems, especially in terms of sample efficiency [1] [2] [3]. However, off-policy algorithms are also known to be unstable or brittle [4] [5], which to some extent hinders the application of these algorithms to real-world robots. On the other hand, environments with sparse rewards, which are common in real-world scenarios, also present challenges in terms of exploration.
18
+
19
+ Along with DRL algorithms, another line of direct policy search methods, evolutionary algorithms (EA), has also shown significant success as a result of the improved computation efficiency of modern hardware and clever implementations [6] [7] [8]. Different from DRL algorithms, which exploit the sequential structure of the Markov Decision Process (MDP), EA algorithms treat policy search as a black-box optimization problem and utilize a population of randomly perturbed policies to search for better policies. While being less sample efficient, EA methods tend to enjoy properties like improved stability, efficient parallelization and better diversity due to the utilization of the EA population and EA operations.
20
+
21
+ Naturally, combining these two paradigms to get the best of both worlds has attracted a lot of effort over the years [9] [10] [11] [12] [13]. The motivation behind these works is to inject gradient-trained DRL agents into the population and drive the population with policy gradient signals while enjoying the benefits of the EA population. Another perspective is that EA mutation can serve as parameter space noise and improve the exploration ability of RL agents [14]. Such combinations turned out to be very successful: the resulting hybrid algorithms can beat both their EA and DRL components.
22
+
23
+ However, we identified two important problems that remain unsolved by previous methods. The first problem is: how to randomly perturb the policy in a meaningful and controlled manner? A too-small perturbation results in no significant changes, while a too-large one can lead to divergent training. Inspired by previous works [15] [12], we propose to solve this problem with Behavior-Regularized Perturbation (BRP), which can randomly perturb a policy network within a specified behavior divergence range in an online fashion. The second problem is: although perturbed networks result in different policies, once the policies undergo the same RL training process and sample from the same replay buffer, they may end up being similar, reducing the overall diversity. Ideally, we would like a population that is not only diverse after random perturbation, but also diverse after RL training. To accomplish this goal, Behavior-Targeted Training (BTT) is proposed to assign a randomly sampled target behavior divergence and inject it into the actor training process. The final proposed training framework is called Behavior-aware Evolutionary Learning (BEL). In BEL, training is conducted in a generational fashion that closely resembles a traditional evolution strategy (ES). In each generation, policies are first randomly generated by applying BRP to a central mean policy. Then, those offspring policies are trained in parallel with BTT. Finally, all trained policies undergo a weighted linear combination [16] to form the new mean policy for the next generation.
24
+
25
+ § 2 RELATED WORKS
26
+
27
+ Model-free off-policy RL algorithms are a class of sample-efficient algorithms for continuous control tasks with relatively high-dimensional action spaces [1] [17]. Built upon the actor-critic (AC) paradigm, where an actor network and a critic network are trained simultaneously, the Twin Delayed Deep Deterministic policy gradient (TD3) algorithm [2] and the Soft Actor-Critic (SAC) algorithm [3] showed great success and quickly became the go-to algorithms for sample-efficient RL training.
28
+
29
+ Evolutionary algorithms have also gained attention due to the fact that they prove to be competitive alternatives to MDP-based methods [6] [7]. In [6], authors developed a simplified natural evolution strategies (NES) [18]. The resulting OpenAI ES offers massive scalability while matching the performance of MDP-based methods. In [7], it was shown that genetic algorithm (GA) was able to evolve networks with four million parameters and achieved competitive performance compared to gradient-based methods.
30
+
31
+ Combining evolutionary methods and policy gradient-based methods in order to benefit from the best of two worlds soon attracted much attention when [9] first proposed to evolve a population of agents with GA and periodically inject gradient information into the population. Their resulting algorithm ERL outperformed both GA and Deep Deterministic Policy Gradient (DDPG). In [11], authors managed to use a variant of ES called cross entropy method (CEM) to evolve the population half of which was composed of EA agents, and the other half was composed of TD3 agents. Their hybrid algorithm CEM-RL turned out to be very competitive and served as a strong baseline for derivative works. Later, [12] pointed out that traditional crossover and mutation operators widely used in GA can be detrimental in the sense that they could destroy learned behaviors. As a remedy, [12] proposed to conduct crossover with network distillation and mutation with SM-G-SUM [15] which proved to be able to keep learned network behaviors.
32
+
33
+ § 3 BACKGROUND
34
+
35
+ § 3.1 EVOLUTION STRATEGIES (ES)
36
+
37
+ Evolution strategies (ES) belongs to the gradient-free black-box optimization algorithm family. It was first proposed by Rechenberg [19], and later developed by Schwefel [20]. Mimicking the natural evolution process, ES randomly generates a population of solution vectors (usually with a Gaussian distribution) whose fitness value will be evaluated in a problem-specific manner (for example episodic reward). In its canonical form, ES can be classified into two major versions: the $\left( {\mu ,\lambda }\right) - {ES}$ , where $\mu$ parents of the next generation are selected from the current $\lambda$ offsprings, and the $\left( {\mu + \lambda }\right) - {ES}$ , where the selection pool contains both the current parents and offsprings. The selection operation is usually a simple weighted linear recombination of the population vectors according to their fitness ranks. In this work, we adopt the simplest $\left( {1,\lambda }\right) - {ES}$ scheme, and model the population with a uniform distribution in terms of behavior divergence.
38
+
39
+ § 3.2 TWIN DELAYED DEEP DETERMINISTIC POLICY GRADIENT (TD3)
40
+
41
+ As an RL algorithm, TD3 is built upon the Markov Decision Process (MDP) which is described by $< S,A,P,R,\gamma >$ . In this formulation, $S$ is the state space, $A$ is the action space, $P$ is the transition function, $R$ is the reward function and $\gamma$ is a discount factor [4]. The goal is to learn an optimal policy function $\pi$ to maximize the expected return $J\left( \theta \right) = {\mathbb{E}}_{s \sim {p}_{\pi },a \sim \pi }\left\lbrack {R}_{0}\right\rbrack$ . TD3 solves this problem by adopting the actor-critic deterministic policy gradient [21] [1] [2], where a Q-function ${Q}_{\phi }$ is learned through the Bellman equation:
42
+
43
+ $$
44
+ {Q}^{\pi }\left( {s,a}\right) = r + \gamma {\mathbb{E}}_{{s}^{\prime },{a}^{\prime }}\left\lbrack {{Q}^{\pi }\left( {{s}^{\prime },{a}^{\prime }}\right) }\right\rbrack ,\;{a}^{\prime } \sim \pi \left( {s}^{\prime }\right) \tag{1}
45
+ $$
46
+
47
+ And then the policy function ${\pi }_{\theta }$ is optimized by the deterministic policy gradient:
48
+
49
+ $$
50
+ {\nabla }_{\theta }J\left( \theta \right) = {\mathbb{E}}_{s \sim {p}_{\pi }}\left\lbrack {{\left. {\nabla }_{a}{Q}^{\pi }\left( s,a\right) \right| }_{a = \pi \left( s\right) }{\nabla }_{\theta }{\pi }_{\theta }\left( s\right) }\right\rbrack \tag{2}
51
+ $$
52
+
53
+ For implementation, both ${Q}_{\phi }$ and ${\pi }_{\theta }$ are optimized with Monte-Carlo estimation with the help of a replay buffer $D$ , the loss function of ${Q}_{\phi }$ and ${\pi }_{\theta }$ are defined as follows:
54
+
55
+ $$
56
+ {\mathcal{L}}_{{Q}_{\phi }}^{TD3} = \underset{\left( {s,a,r,{s}^{\prime }}\right) \sim \mathcal{D}}{\mathrm{E}}\left\lbrack {\left( {Q}_{\phi }\left( s,a\right) - \left( r + \gamma \mathop{\max }\limits_{{a}^{\prime }}{Q}_{\phi }\left( {s}^{\prime },{a}^{\prime }\right) \right) \right) }^{2}\right\rbrack \tag{3}
57
+ $$
58
+
59
+ $$
60
+ {\mathcal{L}}_{{\pi }_{\theta }}^{TD3} = - \underset{s \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\underset{a \sim {\pi }_{\theta }}{\mathbb{E}}{Q}_{\phi }\left( {s,a}\right) }\right\rbrack \tag{4}
61
+ $$
62
+
63
+ In TD3, three tricks are used to make the above learning process more stable and alleviate the overestimation bias. The first trick is to learn two $Q$ functions and use the smaller $Q$ -value to form the target $Q$ in eq. (3). The second trick is to delay the target network updates with regard to the $Q$ network updates. The third trick is to add noise to the target actions to smooth out $Q$ along changes in action [2].
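
As an illustration, a PyTorch-style sketch of the resulting target value is shown below; the noise and clipping constants are the common TD3 defaults rather than values confirmed by this paper, and trick two (delayed actor and target updates) lives in the outer training loop.

```python
import torch

def td3_target(q1_target, q2_target, actor_target, next_states, rewards, not_done,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """Clipped double-Q target with target policy smoothing (TD3 tricks 1 and 3)."""
    with torch.no_grad():
        next_actions = actor_target(next_states)
        # Trick 3: smooth the target policy with clipped Gaussian noise.
        noise = (torch.randn_like(next_actions) * noise_std).clamp(-noise_clip, noise_clip)
        next_actions = (next_actions + noise).clamp(-max_action, max_action)
        # Trick 1: take the smaller of the two target critics' estimates.
        q_next = torch.min(q1_target(next_states, next_actions),
                           q2_target(next_states, next_actions))
        return rewards + not_done * gamma * q_next
```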
64
+
65
+ § 4 BEHAVIOR-AWARE EVOLUTIONARY LEARNING (BEL)
66
+
67
+ § 4.1 BEL FRAMEWORK
68
+
69
+ The overall structure of BEL is outlined in fig. 1. In BEL, we maintain a center actor as the population center, and $\lambda$ actors as offsprings. In each generation, first, all offspring actors will be initialized around the center actor with Behavior-Regularized Perturbation smoothing (BRP), which will be introduced in detail in section 4.2. Then each offspring actor will undergo Behavior-Targeted Training (BTT) as will be described in section 4.3. After these two processes, all offspring actors will interact with the environment and save their experiences to the replay buffer. Finally, the population selection is conducted with a weighted linear recombination of network parameters according to episodic rewards of the trained offspring actors to form the center actor for next generation. This process is repeated until termination criterion is met.
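
For illustration, a minimal sketch of the selection step is given below. The text specifies a weighted linear recombination of network parameters according to episodic rewards; the rank-based weights used here are one plausible choice, and all helper names are placeholders.

```python
import torch

@torch.no_grad()
def recombine_center(center_actor, offspring_actors, episodic_rewards):
    """Form the next-generation center actor as a weighted linear combination
    of the trained offspring actors' parameters."""
    rewards = torch.tensor(episodic_rewards, dtype=torch.float32)
    ranks = torch.argsort(torch.argsort(rewards)).float() + 1.0  # 1 = worst offspring
    weights = ranks / ranks.sum()

    for name, center_param in center_actor.named_parameters():
        blended = sum(w * dict(actor.named_parameters())[name].data
                      for w, actor in zip(weights, offspring_actors))
        center_param.data.copy_(blended)
```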
70
+
71
+ § 4.2 BEHAVIOR-REGULARIZED PERTURBATION (BRP)
72
+
73
+ Similar to the SM-G-SUM mutation operator used in [12] and [15], BRP relies on the calculation of the so-called parameter sensitivity with regard to network outputs. Given an actor network ${\pi }_{\theta }$ and a batch of transitions $i$ , BRP approximately measures how the overall output will vary with regard to
74
+
75
+ < g r a p h i c s >
76
+
77
+ Figure 1: An illustration of the BEL framework.
78
+
79
+ small changes of the neural network’s weights $\theta$ through the aggregation of backward gradients of each output node $k$ on data batch $i$ . For each parameter in ${\pi }_{\theta }$ , its sensitivity sens is calculated by:
80
+
81
+ $$
82
+ \text{ sens } = \sqrt{\mathop{\sum }\limits_{k}{\left( \mathop{\sum }\limits_{i}\left| {\nabla }_{\theta }{\pi }_{\theta }{\left( {s}_{i}\right) }_{k}\right| \right) }^{2}} \tag{5}
83
+ $$
84
+
85
+ A large value of sens indicates that the corresponding parameter will lead to a large change of the action output and vice versa. Denote the overall sensitivity for all parameters as $\operatorname{Sens}$ , it is then used as the coefficient of the below linear transformation:
86
+
87
+ $$
88
+ \operatorname{Vec}\left( \widetilde{\pi }\right) = \operatorname{Vec}\left( {\pi }_{\theta }\right) + \frac{\delta }{\text{ Sens }} \tag{6}
89
+ $$
90
+
91
+ In eq. (6), $\operatorname{Vec}\left( {\pi }_{\theta }\right)$ means network parameters represented as a one-dimensional vector. ${\widetilde{\pi }}_{\theta }$ is the perturbed policy network and $\delta$ is a random vector that determines the perturbation magnitude and direction.
92
+
93
+ Unlike previous methods [22] [12] where $\delta$ is randomly sampled from a constant-scaled Gaussian distribution, BRP instead tries to adaptively search for a proper $\delta$ within a certain magnitude that can bound the behavior divergence of the perturbed network. This idea is similar to the parameter noise adaption method in [14]. BRP adopts the Euclidean norm as the behavior divergence measure: $d\left( {{\pi }_{\theta }\left( s\right) ,\widetilde{\pi }\left( s\right) }\right) = \sqrt{\frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}{\mathbb{E}}_{s}\left\lbrack {\left( {\pi }_{\theta }\left( s\right) - \widetilde{\pi }\left( s\right) \right) }^{2}\right\rbrack }$ . Given a behavior divergence ${\Delta }_{\max }^{BRP}$ upper bound, to generate one perturbed network $\widetilde{\pi }$ , BRP first randomly samples a divergence ${\Delta }_{i}$ , and then conduct a simple iterative linear search to find a proper $\delta$ , detailed procedure is summarized in algorithm 1. The final output is a set of $\lambda$ randomly perturbed policy networks following a uniform distribution in the behavior divergence space. Note that unlike previous implementations which only approximately calculated $\mathbf{S}$ , our implementation precisely calculated $\mathbf{S}$ with the help of Pytorch hooks, and generating five perturbed networks can be done within 0.2 seconds.
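
A simplified sketch of this search is shown below (PyTorch-style). The per-output backward pass used here is only an approximation of the sensitivity in eq. (5) (the paper computes the per-sample absolute gradients exactly via hooks), and the handling of the initial magnitude is simplified; all names are illustrative.

```python
import copy
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def brp_perturb(center_actor, states, delta_max, eps=1e-3, beta=0.5, max_iters=50):
    """Sketch of one BRP perturbation: scale a random direction (eq. 6) until the
    behavior divergence to the center policy matches a sampled target."""
    theta = parameters_to_vector(center_actor.parameters()).detach()

    # Approximate per-parameter sensitivity (cf. eq. 5): L2-aggregate the gradient
    # magnitude of each action dimension with respect to the parameters.
    sens_sq = torch.zeros_like(theta)
    actions = center_actor(states)
    for k in range(actions.shape[-1]):
        center_actor.zero_grad()
        actions[..., k].abs().sum().backward(retain_graph=True)
        grads = parameters_to_vector([p.grad for p in center_actor.parameters()])
        sens_sq += grads ** 2
    sens = torch.sqrt(sens_sq) + 1e-8

    target = torch.rand(()) * delta_max        # Delta_i ~ U[0, delta_max]
    delta = torch.randn_like(theta)            # random direction
    perturbed = copy.deepcopy(center_actor)

    for _ in range(max_iters):
        vector_to_parameters(theta + delta / sens, perturbed.parameters())  # eq. (6)
        with torch.no_grad():
            div = torch.sqrt(((perturbed(states) - center_actor(states)) ** 2).mean())
        if abs(div - target) <= eps:
            break
        delta = delta / beta if div < target else delta * beta
    return perturbed
```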
94
+
95
+ § 4.3 BEHAVIOR-TARGETED TRAINING (BTT)
96
+
97
+ BRP generates policy networks through random local perturbation, BTT on the other hand generates trained policies that are within a behavior divergence range to the center policy. Consider one offspring policy ${\widetilde{\pi }}_{i}$ generated by BRP, we would like its behavior divergence after training ${\Delta }_{i}^{\text{ trained }}$ to lie in the range defined by an upper bound: ${\Delta }_{i}^{\text{ trained }} \in \left\lbrack {0,{\Delta }_{\max }^{BTT}}\right\rbrack$ .
98
+
99
+ To achieve this goal, we gained inspiration from imitation learning. In each generation of policy gradient training, the actor network aims to optimize two objectives. The first objective is the traditional RL objective as in eq. (4). For the second objective, consider a batch of states $s$ sampled from the replay buffer, the actions of both of the center policy ${\pi }_{\theta }$ and the offspring policy ${\widetilde{\pi }}_{i}$ in those states are calculated as ${a}_{\theta } = {\pi }_{\theta }\left( s\right)$ and ${a}_{i} = {\widetilde{\pi }}_{i}\left( s\right)$ . Then as depicted in fig. 2(a), we construct a
100
+
101
+ Algorithm 1 Behavior-Regularized Perturbation
102
+
103
+ 1: Input: Population center policy ${\pi }_{\theta }$ , population size $\lambda$ , error bound $\epsilon$ , initial magnitude scalar
104
+
105
+ ${\delta }_{\text{ init }}$ , a batch of states $s$ , and $\beta \in \left\lbrack {0,1}\right\rbrack$
106
+
107
+ Calculate Sens for ${\pi }_{\theta }$ according to eq. (5)
108
+
109
+ for $i = 1$ to $\lambda$ do
110
+
111
+ Sample a target divergence ${\Delta }_{i} \sim {U}_{\left\lbrack 0,{\Delta }_{\max }^{BRP}\right\rbrack }$ , sample a random direction from $\delta \sim N\left( {0,1}\right)$
112
+
113
+ Get initial perturbed network ${\widetilde{\pi }}_{i}$ with Sens according to eq. (6)
114
+
115
+ while $\left| {d\left( {{\pi }_{\theta }\left( s\right) ,\widetilde{{\pi }_{i}}\left( s\right) }\right) - {\Delta }_{i}}\right| > \epsilon$ do
116
+
117
+ if $d\left( {{\pi }_{\theta }\left( s\right) ,{\widetilde{\pi }}_{i}\left( s\right) }\right) < {\Delta }_{i}$ then
118
+
119
+ $\delta = \frac{\delta }{\beta }$
120
+
121
+ else
122
+
123
+ $\delta = \delta * \beta$
124
+
125
+ end if
126
+
127
+ Get perturbed network ${\widetilde{\pi }}_{i}$ with Sens and $\delta$
128
+
129
+ end while
130
+
131
+ end for
132
+
133
+ Output: Perturbed policies $\left\{ {{\widetilde{\pi }}_{i} \mid i = 1,\ldots ,\lambda }\right\}$
134
+
135
+ "behavior potential well" with a one dimensional Gaussian distribution to force the negative log-likelihood of the Euclidean distance between ${a}_{\theta }$ and ${a}_{i}$ to stay close to the bottom of the Gaussian whose mean is defined by a sampled and fixed ${\Delta }_{i}^{\text{ target }}$ , standard deviation is defined by a predefined ${\sigma }_{BTT}$ . This process results in the following training objective for BTT:
136
+
137
+ $$
138
+ {L}_{{\widetilde{\pi }}_{i}}^{BTT} = {L}_{{\widetilde{\pi }}_{i}}^{TD3} - \alpha \ln \left\lbrack \frac{1}{{\sigma }_{BTT}\sqrt{2\pi }} \exp \left( - \frac{{\left( d\left( {\pi }_{\theta }\left( s\right) ,{\widetilde{\pi }}_{i}\left( s\right) \right) - {\Delta }_{i}^{\text{target}}\right) }^{2}}{2{\sigma }_{BTT}^{2}} \right) \right\rbrack \tag{7}
139
+ $$
140
+
141
+ Following eq. (7), the policy network will try to simultaneously follow the policy gradient and stay inside the behavior potential well so as to roughly keep a ${\Delta }_{i}^{\text{target}}$ divergence to the center policy. $\alpha$ is a hyperparameter balancing the two objectives. To determine ${\Delta }_{i}^{\text{target}}$ , we simply sample from a uniform distribution, as in BRP: ${\Delta }_{i}^{\text{target}} \sim {U}_{\left\lbrack 0,{\Delta }_{\max }^{BTT}\right\rbrack }$ . Since ${\sigma }_{BTT}$ directly controls the steepness of the Gaussian, a larger ${\sigma }_{BTT}$ means less restriction over the policy's divergence and vice versa.
142
+
143
+ § 5 EXPERIMENTS
144
+
145
+ § 5.1 EXPLORATORY STUDIES
146
+
147
+ As previous studies have shown [11] [14], perturbing Tanh-activated networks is easier than perturbing ReLU-activated networks. Since perturbing networks with BRP is straightforward and network-architecture agnostic, we conducted a comparative study of how these two types of networks respond to BRP. Specifically, we randomly sampled directions and recorded the behavior divergence changes along those directions. As shown in fig. 2(b) and fig. 2(c), where the x-axis is the percentage of positive signs in one direction, the y-axis is the magnitude along that direction, and the color scale measures the behavior divergence (the brighter the larger), randomly perturbing Tanh-activated networks clearly has a larger chance of inducing significant behavior changes, which explains why Tanh-activated networks are generally favored in perturbation-based methods.
148
+
149
+ To verify that the behavior divergences of actors trained by BTT are uniformly distributed (since ${\Delta }_{i}^{\text{target}}$ is sampled from a uniform distribution), we trained two BEL instances with $\alpha = {0.0}$ (without BTT) and $\alpha = {1.0}$ (with BTT). From fig. 2(d), it is obvious that without BTT the trained policies are quite concentrated, whereas when BTT is applied the behavior divergences of the population consistently follow the uniform distribution. To further verify that BTT can lead to diverse behaviors, we plotted the state visitation map on the DelayedHalfCheetah-v3 environment to visualize how the offspring policies explore different states. As shown in fig. 2(e), one generation of BEL's BTT-trained population visits very different states, while the naively trained population without BTT does not show the same level of diversity.
150
+
151
+ < g r a p h i c s >
152
+
153
+ Figure 2: (a) BTT illustration. The circles represent actors. Due to the log-likelihood training objective, actors will try to adapt their behaviors around their Gaussian centers. (b) BRP for a Tanh-activated network. (c) BRP for a ReLU-activated network. (d) Trained offspring behavior divergence illustration. (e) State visitation density map of five actors trained with BTT (first row) versus without BTT (second row) after one generation.
154
+
155
+ < g r a p h i c s >
156
+
157
+ Figure 3: (a) BRP ablation. max delta $= 0$ corresponds to no BRP (b) BTT ablation. alpha $= 0$ corresponds to no BTT, larger alpha means more constrained behavior. (c) Recombination ablation.
158
+
159
+ § 5.2 ABLATIVE STUDIES
160
+
161
+ To verify the effectiveness of BRP, we tested different ${\Delta }_{\max }^{BRP}$ settings on the Walker2d-v3 environment. From fig. 3(a), it is noticeable that BRP not only significantly speeds up the learning process but also helps avoid local optima. In fact, we find that when BRP is applied, critic networks tend to incur consistently larger training loss throughout training. This phenomenon indicates that BRP indeed introduces another level of behavior uncertainty, which forces the critics to make better predictions.
162
+
163
+ An ablation study on the DelayedHalfCheetah-v3 environment is conducted to show that BTT is indeed helpful for exploration. DelayedHalfCheetah-v3 is a modified HalfCheetah-v3 environment where the reward is manually delayed for 20 time steps, making it a difficult sparse-reward environment. The proportion of the log-likelihood objective is tuned with $\alpha$ . We observe from fig. 3(b) that when $\alpha$ is set to zero, which means no BTT in the training objective, BEL cannot explore effectively. On the other hand, when $\alpha$ is too large, actors may also lose performance since their behaviors are over-constrained.
164
+
165
+ As [12] [15] pointed out, many operators in EA are designed for black-box optimization, and can be potentially harmful for neural networks. An experiment comparing weighted linear recombination and distillation-based recombination was designed on the DelayedHalfCheetah-V3 environment, where all parts of BEL are kept the same except for the recombination phase. In this phase, offspring actors are ranked according to their latest episodic rewards, and then, $\lambda$ offspring actors are treated as
166
+
167
+ Table 1: Numerical results for final best mean reward of different algorithms on selected tasks
168
+
169
+ | Task | Statistic | BEL (Ours) | SAC | TD3 | CEM-RL | PDERL |
+ | --- | --- | --- | --- | --- | --- | --- |
+ | HalfCheetah-v3 | Mean | 12725.39 | 10482.39 | 10408.62 | 10636.94 | 6917.24 |
+ | | Std | 202.89 | 1253.81 | 1093.64 | 2131.36 | 444.95 |
+ | | Median | 12751.99 | 11058.84 | 10810.48 | 11323.23 | 7026.15 |
+ | | Wallclock | 4.50 H | 5.34 H | **2.31 H** | 5.52 H | 3.20 H |
+ | Ant-v3 | Mean | 6082.41 | 5208.65 | 5090.81 | 3455.95 | 1609.40 |
+ | | Std | 166.28 | 282.64 | 651.13 | 1359.87 | 542.42 |
+ | | Median | 6147.19 | 5259.90 | 5385.89 | 3487.73 | 1582.24 |
+ | | Wallclock | 5.81 H | 8.03 H | **3.09 H** | 6.47 H | 3.61 H |
+ | Walker2d-v3 | Mean | 5723.30 | 4637.03 | 3855.60 | 4173.30 | 1588.51 |
+ | | Std | 498.38 | 414.19 | 760.91 | 1153.97 | 641.26 |
+ | | Median | 6087.36 | 4682.27 | 4138.82 | 4358.34 | 1253.77 |
+ | | Wallclock | 4.14 H | 7.54 H | **2.54 H** | 5.91 H | 3.42 H |
+ | Hopper-v3 | Mean | 3717.14 | 3543.35 | 3426.26 | 3597.87 | 1293.66 |
+ | | Std | 101.08 | 103.29 | 192.31 | 495.28 | 356.54 |
+ | | Median | 3740.41 | 3580.16 | 3333.04 | 3749.87 | 1160.93 |
+ | | Wallclock | 4.29 H | 7.94 H | **2.30 H** | 5.82 H | 3.35 H |
+ | Humanoid-v3 | Mean | 5337.20 | 5617.94 | 5319.09 | 215.79 | 815.96 |
+ | | Std | 113.53 | 133.93 | 114.38 | 0.44 | 90.86 |
+ | | Median | 5364.52 | 5588.50 | 5333.41 | 215.76 | 821.11 |
+ | | Wallclock | 7.74 H | 8.48 H | **4.51 H** | 9.25 H | 4.85 H |
+ | DelayedHalfCheetah-v3 | Mean | 6777.87 | 4763.14 | 4730.74 | 6276.42 | 2865.77 |
+ | | Std | 596.49 | 758.29 | 806.42 | 857.72 | 658.37 |
+ | | Median | 6857.65 | 4734.52 | 4469.37 | 6372.91 | 3095.51 |
+ | | Wallclock | 4.69 H | 5.73 H | **2.45 H** | 6.82 H | 3.15 H |
246
+
247
+ teacher networks. Their actions on $N$ sampled observations are recorded as demonstrations. Then, the center actor is treated as the student network to imitate offspring actors. To give different actors different importance, the following weighted imitation training objective is constructed:
248
+
249
+ $$
250
+ \mathcal{L}\left( {\pi }_{\theta }\right) = \mathop{\sum }\limits_{{k = 1}}^{N}\mathop{\sum }\limits_{{i = 1}}^{\lambda }{\omega }_{i}{\begin{Vmatrix}{\pi }_{\theta }\left( {s}_{k}\right) - {\widetilde{\pi }}_{i}\left( {s}_{k}\right) \end{Vmatrix}}^{2},\mathop{\sum }\limits_{{i = 1}}^{\lambda }{\omega }_{i} = 1 \tag{8}
251
+ $$
252
+
253
+ Much to our surprise, although the distillation-based method seems to learn a bit faster in the early phase, it quickly falls into a local optimum and can hardly make its way out. This experiment showed that although naive linear recombination may break the behavior of the output network to some extent, this kind of behavior uncertainty may result in extra exploration, which is beneficial.
254
+
255
+ § 5.3 COMPARISON TO STATE-OF-THE-ART RL AND EA-RL METHODS
256
+
257
+ In this section, the performance of the proposed BEL is compared against pure RL methods including TD3 [2], SAC [3], and TD3-Ensemble, as well as EA-RL methods including CEM-RL [11] and an improved version of ERL called PDERL [12]. For TD3, CEM-RL, and PDERL, we used the code published by the original authors. For SAC, the stable-baselines3 library is used. Every algorithm is run on the same machine, and the results we obtained were close to those reported in the original papers. Five tasks from the MuJoCo continuous control benchmark are selected. Swimmer-v3 is excluded since we found that tuning the reward discount factor to 0.9999 makes all algorithms perform more or less the same, reaching approximately 350 reward. An additional DelayedHalfCheetah-v3 environment is constructed by delaying the reward signal for 20 time steps, making it a hard exploration task. Following the convention in the literature, the learning curves of all algorithms are aggregated over 10 repeated runs across one million time steps, and the evaluated policies are tested 10 times. For BEL, the population center policy is used for testing. Note that although BEL trains the population in a parallel fashion, for fair comparison the total time steps are aggregated over every policy interacting with the environment.
258
+
259
+ < g r a p h i c s >
260
+
261
+ Figure 4: Learning curves on 6 MuJoCo environments in one million time steps.
262
+
263
+ Sample efficiency It can be seen from fig. 4 that BEL turns out to be very competitive against comparing methods in terms of sample efficiency. On the one hand, it can pick up signals faster than other methods, indicating its high sample efficiency. On the other hand, its final best performance outperforms other methods except on Humanoid-v3.
264
+
265
+ BEL versus TD3-Ensemble Since we trained multiple actor-critic pairs in BEL, it is natural to question if the good performance of BEL comes from the ensemble nature. To answer this question, we tested the performance of TD3-ensemble where equal numbers of actor-critic pairs are trained, and all hyper parameters are kept as close as possible. From table 1, it is clear that BEL outperforms TD3-ensemble on all tasks.
266
+
267
+ Stability As can be seen from fig. 4, BEL also generally has smaller standard deviations across runs, even compared to other population-based evolutionary methods with larger population sizes, which means BEL is very stable. Another phenomenon that suggests BEL's robustness appears in the Humanoid-v3 environment, where the naive TD3-Ensemble shares the same learning rate as BEL (which is larger than that of single-instance TD3) but fails to learn stably.
268
+
269
+ Computation efficiency Since all experiments are conducted on the same machine and all on CPUs, we also recorded the median wall-clock running time of all algorithms. TD3 is the fastest algorithm as it is also the most light-weight one. PDERL ranks the second because not all policies in its population are trained, a great portion of its population are directly evaluated after perturbation. BEL ranks the third among all algorithms, and is generally faster than SAC and CEM-RL. We think BEL reaches a good balance between sample-efficiency and computation overhead.
270
+
271
+ Limitations Though generally good performance can be expected from BEL, it still has the following limitations. First, as multiple networks are trained in parallel, a computation node with a multi-core CPU and relatively large RAM is required. Second, as can be seen from the Humanoid-v3 where BEL does not outperform SAC, it may indicate BRP and BTT do not scale very well as action space dimension grows. Further studies regarding the scalability of BEL need to be conducted.
272
+
273
+ § 6 CONCLUSION
274
+
275
+ In this work, a novel population-based evolutionary training framework for off-policy RL algorithms called BEL is proposed. Exploratory and ablative studies show the effectiveness of BRP and BTT. Benchmark comparisons against other methods show BEL outperforms state-of-the-art RL and EA-RL methods in terms of sample efficiency. The training pipeline is conceptually simple and we offer efficient parallel implementation. Along with the improved stability and exploration ability, we believe BEL can serve as a competitive training method for real-world robot learning with off-policy RL algorithms.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/5GJ-_KMLASa/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,249 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Learning the Dynamics of Compliant Tool-Environment Interaction for Visuo-Tactile Contact Servoing
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Many manipulation tasks require the robot to control the contact between a grasped compliant tool and the environment, e.g. scraping a frying pan with a spatula. However, modeling tool-environment interaction is difficult, especially when the tool is compliant, and the robot cannot be expected to have the full geometry and physical properties (e.g., mass, stiffness, and friction) of all the tools it must use. We propose a framework that learns to predict the effects of a robot's actions on the contact between the tool and the environment given visuo-tactile perception. Key to our framework is a novel contact feature representation that consists of a binary contact value, the line of contact, and an end-effector wrench. We propose a method to learn the dynamics of these contact features from real world data that does not require predicting the geometry of the compliant tool. We then propose a controller that uses this dynamics model for visuo-tactile contact servoing and show that it is effective at performing scraping tasks with a spatula, even in scenarios where precise contact needs to be made to avoid obstacles.
12
+
13
+ Keywords: Contact-Rich Manipulation, Multi-Modal Dynamics Learning
14
+
15
+ ## 1 Introduction
16
+
17
+ ![01963fb7-0814-7552-8dff-869fb5fbe3da_0_312_1409_1174_373_0.jpg](images/01963fb7-0814-7552-8dff-869fb5fbe3da_0_312_1409_1174_373_0.jpg)
18
+
19
+ Figure 1: We present a method for extrinsic contact servoing, i.e., controlling contact between a compliant tool and the environment. Our method is able to complete the requested contact trajectory, avoiding contact with surface obstacles, and successfully scrape the target object. Note that to do this the spatula must be tilted so that only a corner of it is in contact.
20
+
21
+ Many manipulation tasks require the robot to control the contact between a grasped tool and the environment. The ability to reason over and control this extrinsic contact is crucial to enabling helpful robots that can scrape a frying pan with a spatula, erase or wipe a surface [1], screw a bottle cap onto a bottle [2], perform peg-in-hole assemblies [3, 4], and perform many other tasks. In this work, we seek to address the problem of controlling the extrinsic contact between a grasped compliant tool (e.g. a spatula) and the environment. In general, the robot cannot expect to have the full geometry and physical properties (e.g., mass, friction, stiffness) of all the tools it must use or the geometries of the environments it must manipulate in. Instead, the robot must utilize multimodal sensory observations, such as pointclouds and tactile feedback, to act on the environment.
22
+
23
+ In recent years, learning-based methods have become increasingly popular to address the complexities of robotic manipulation, including for contact-rich tasks [5]. These methods can be loosely grouped into model-free methods, that directly learn a policy [3, 2, 6], and model-based methods, that learn system dynamics $\left\lbrack {7,8,9}\right\rbrack$ . By focusing on modeling system dynamics, model-based methods can plan to reach new goals without retraining, and are often more data-efficient [9]. Therefore, we propose learning the dynamics of our system to solve the extrinsic contact servoing task.
24
+
25
+ It is not obvious which representation to use for these dynamics. Fully recovering tool or environment geometries from visual data $\left\lbrack {{10},{11}}\right\rbrack$ and tactile feedback $\left\lbrack {12}\right\rbrack$ has been widely explored, with recent extensions to compliant geometries [13]; however, even if the system can be fully identified, contact models to resolve interactions can have limited fidelity [14]. On the other hand, learned dynamics representations can be difficult to interpret and require demonstrations or observations from the desired state to specify goals $\left\lbrack {7,{15}}\right\rbrack$ . Instead, we propose a novel contact feature representation for our learning method that focuses on tool-environment interaction and bypasses explicitly modeling the whole system. We represent the contact configuration as 1) a binary contact mode (indicating if the system is in contact); 2) a contact geometry (as a line in 3D space); and 3) an end-effector wrench.
26
+
27
+ We propose a learning architecture to model the dynamics of the proposed contact representation from raw sensory observations over candidate action trajectories. We propose structuring the model as a latent space dynamics model with a decoder that recovers the contact state. We also propose an impedance action offset term in the dynamics that allows us to accurately propagate robot poses, despite robot impedance. To provide labels to our model, we collect self-supervised data on a 7DoF Franka Emika Panda, using sensor data to automatically label the contact state.
28
+
29
+ We validate our proposed method by completing various desired contact trajectories on the real robot system. We first show that our method can track diverse desired contact trajectories in the absence of obstacles. Next, we demonstrate that we can utilize extrinsic contact servoing to scrape a target object from the table, while handling occlusions and avoiding contact with obstacles.
30
+
31
+ In summary, we make the following contributions:
32
+
33
+ - We present a framework for modeling compliant tool-environment contact interactions by learning contact feature dynamics.
34
+
35
+ - We propose a learned model architecture to capture the dynamics of contact features, trained in a supervised fashion using real world self-supervised data.
36
+
37
+ - We design and demonstrate a controller using the contact feature dynamics to realize diverse goal trajectories, including in the presence of obstacles.
38
+
39
+ ## 2 Related Work
40
+
41
+ Existing research has investigated the task of recovering contact locations. Manuelli et al. [16] localize point contacts on a rigid robot with known geometry by employing a particle filtering approach to update a set of candidate contact locations based on force torque sensing. Kim et al. [4] and Ma et al. [17] model contact between a grasped rigid object and the environment by assuming stationary line contacts and modeling the deformation of a GelSlim gripper. The estimated line contact is then used in a Reinforcement Learning (RL) policy. Neither of these methods extends to compliant tools and neither models the dynamics of the contact configuration.
42
+
43
+ Other works explore tactile servoing methods, where contact at the sensor is driven to a desired configuration. Li et al. [18] use a large tactile pad and define contact configuration features of objects pressed against the sensor. They manually construct a feedback controller based on these features and use it to drive contacts to desired configurations. Sutanto et al. [19] use a smaller profile tactile sensor and learn the dynamics of a learned latent space. They then employ a MPC scheme to drive contacts to desired configurations on the sensor. Both of these works assume contact is happening at the sensing location. We, on the other hand, seek to servo extrinsic contacts, where we do not get direct sensing at the point of contact.
44
+
45
+ ## 3 Problem Formulation
46
+
47
+ We parameterize our contact feature as a binary contact indicator ${c}^{b} \in \{ 0,1\}$ , used to indicate whether the tool is in contact, a contact line ${}^{1}{\mathbf{c}}^{l} \in {\mathbb{R}}^{2 \times 3}$ representing the contact geometry between the tool and the environment, and an end effector wrench ${\mathbf{c}}^{w} \in {\mathbb{R}}^{6}$ . The geometry ${\mathbf{c}}^{l}$ is only active when the tool is in contact ( ${c}^{b} = 1$ ). The contact representation allows extrinsic contact goals to be expressed as desired contact trajectories $G = \left\lbrack {{\mathbf{g}}_{1},{\mathbf{g}}_{2},\ldots ,{\mathbf{g}}_{L}}\right\rbrack$ , where each ${\mathbf{g}}_{i} \in {\mathbb{R}}^{2 \times 3}$ is a desired contact line to reach. We assume that contact should be maintained throughout the task.
48
+
49
+ We formulate extrinsic contact servoing as a model predictive planning problem, given observations of the current state of the system ${\mathbf{o}}_{0}$ . For a given horizon $T$ , we select the next $T$ desired contact lines $\left\lbrack {{\mathbf{g}}_{i + 1},\ldots ,{\mathbf{g}}_{i + T}}\right\rbrack \subseteq G$ to be our current contact goal sequence. The planning problem is:
50
+
51
+ $$
52
+ \mathop{\min }\limits_{{\mathbf{a}}_{0 : T - 1}}\mathop{\sum }\limits_{{t = 1}}^{T}d\left( {{\mathbf{c}}_{t}^{l},{\mathbf{g}}_{i + t}}\right) \tag{1}
53
+ $$
54
+
55
+ $$
56
+ \text{s.t.}{c}_{t}^{b} = 1,\forall t \in \left\lbrack {1, T}\right\rbrack \tag{2}
57
+ $$
58
+
59
+ $$
60
+ \left\{ {{c}_{0 : T}^{b},{\mathbf{c}}_{0 : T}^{l},{\mathbf{c}}_{0 : T}^{w}}\right\} = g\left( {{\mathbf{o}}_{0},{\mathbf{a}}_{0 : T - 1}}\right) \tag{3}
61
+ $$
62
+
63
+ Here $g$ is a model describing the contact feature dynamics. The binary constraint ensures that the tool remains in contact, while the cost function $d$ measures the distance between two contact lines as the average Euclidean distance between their corresponding endpoints. Finally, if $d\left( {{\mathbf{c}}_{1}^{l},{\mathbf{g}}_{i + 1}}\right) < \epsilon$ we increment $i$, thus moving to the next sequence of desired contact lines for the next round of planning.
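+
+ To make the cost $d$ and the goal-advancement rule concrete, the following is a minimal Python sketch (not the authors' implementation); the function names and the $\epsilon$ value are illustrative assumptions, and contact lines are stored as $2 \times 3$ arrays of endpoint coordinates.
+
+ ```python
+ import numpy as np
+
+ def contact_line_distance(c_line, g_line):
+     """Average Euclidean distance between corresponding endpoints of two
+     contact lines, each given as a (2, 3) array of 3D endpoints."""
+     return np.linalg.norm(c_line - g_line, axis=1).mean()
+
+ def maybe_advance_goal(c_line, goals, i, eps=0.005):
+     """Move to the next desired contact line once the current contact line
+     is within eps of g_{i+1}; eps is an illustrative threshold."""
+     if i + 1 < len(goals) and contact_line_distance(c_line, goals[i + 1]) < eps:
+         return i + 1
+     return i
+ ```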
64
+
65
+ ## 4 Method
66
+
67
+ ### 4.1 Contact Feature Dynamics Model
68
+
69
+ To solve our constrained optimization Eq. 1, we require a model $g$ which can map from raw observations ${\mathbf{o}}_{0}$ and a proposed action trajectory ${\mathbf{a}}_{0 : T - 1}$ to the resulting contact states $\left\{ {{c}_{0 : T}^{b},{\mathbf{c}}_{0 : T}^{l},{\mathbf{c}}_{0 : T}^{w}}\right\}$ . We propose modeling the contact feature dynamics as a deep neural network.
70
+
71
+ We assume access to a pointcloud ${\mathbf{v}}_{0}$ and input wrench ${\mathbf{h}}_{0}$ measured at the robot’s wrist as our observations, ${\mathbf{o}}_{0} = \left( {{\mathbf{v}}_{0},{\mathbf{h}}_{0}}\right)$ . Note that end effector wrench is both an input to our method and part of the contact state; predicting future wrench aids the representation learning and provides expected wrenches for planning.
72
+
73
+ We perform all learning in the local end effector frame. We transform the pointcloud to the end effector frame ${}^{E{E}_{0}}{\mathbf{v}}_{0}$ and clip to a ${0.5}{m}^{3}$ bounding box region around the end effector that contains the contact event. We similarly predict our contact lines in the current end effector frame, ${}^{E{E}_{t}}{\mathbf{c}}_{t}^{l},\forall t \in \left\lbrack {1, T}\right\rbrack$ . Learning in the end effector frame provides invariance in the visual domain to translations and rotations of the end effector and removes distractors that do not contribute to the contact state, such as the rest of the robot arm or the scene background.
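+
+ As an illustration of this preprocessing step, the sketch below (our own, with an assumed box half-extent) transforms a world-frame pointcloud into the end effector frame and clips it to an axis-aligned box around the end effector.
+
+ ```python
+ import numpy as np
+
+ def pointcloud_to_ee_frame(points_w, T_w_ee, half_extent=0.25):
+     """Express a world-frame pointcloud (N, 3) in the end effector frame and
+     keep only points inside an axis-aligned box around the end effector.
+     T_w_ee is the 4x4 end effector pose in the world frame; half_extent is
+     an illustrative value for the bounding box size."""
+     T_ee_w = np.linalg.inv(T_w_ee)
+     pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])  # homogeneous coordinates
+     pts_ee = (T_ee_w @ pts_h.T).T[:, :3]                        # world -> EE frame
+     keep = np.all(np.abs(pts_ee) <= half_extent, axis=1)        # clip to the box
+     return pts_ee[keep]
+ ```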
74
+
75
+ Our contact feature dynamics model (Figure 2) has three components: an encoder $e$ which maps from raw observations to a learned latent space, a decoder $d$ which maps from the latent space to the contact state, and a dynamics model $f$ which captures dynamics in the latent space. We parameterize the models by a set of learned weights $\theta$. We start by embedding the current observations into the latent space with our encoder ${\widehat{\mathbf{z}}}_{0} = e\left( {{\mathbf{v}}_{0},{\mathbf{h}}_{0}}\right)$. We unroll actions in the latent space as the contact state alone has insufficient contextual information (e.g. end-effector pose and local geometry information) to predict the next contact state. Because we predict the $t$-th contact state in the current end effector frame $E{E}_{t}$, an important consideration when designing our dynamics model is being able to accurately recover this frame. Due to the contact-rich nature of our task, the low-level controller that executes ${\mathbf{a}}_{t}$ is an impedance controller, meaning the realized action may differ from the commanded action. To account for this, we predict an additional term from our dynamics model, $\Delta {\widehat{\mathbf{a}}}_{t + 1}$, which is an ${SE}\left( 3\right)$ transformation that predicts the offset between the commanded and realized next end effector pose. Thus, our dynamics model predicts ${\widehat{\mathbf{z}}}_{t + 1},\Delta {\widehat{\mathbf{a}}}_{t + 1} = f\left( {{\widehat{\mathbf{z}}}_{t},{\mathbf{a}}_{t}}\right)$. This allows us to construct the following recursive estimate of our end effector frame: ${}^{W}{\widehat{T}}_{E{E}_{t + 1}} = {}^{W}{\widehat{T}}_{E{E}_{t}}T\left( {\mathbf{a}}_{t}\right) T\left( {\Delta {\widehat{\mathbf{a}}}_{t + 1}}\right)$, where ${}^{W}{\widehat{T}}_{E{E}_{t}}$ is the ${SE}\left( 3\right)$ transformation describing the pose of the end effector at time $t$. We know the initial transform ${}^{W}{\widehat{T}}_{E{E}_{0}}$ from our robot proprioception, $T\left( {\mathbf{a}}_{t}\right)$ provides the transformation for the action command, and $T\left( {\Delta {\widehat{\mathbf{a}}}_{t + 1}}\right)$ for the predicted offset term. To enforce valid ${SE}\left( 3\right)$ predictions we predict rotations in the axis-angle representation [20].
76
+
77
+ ---
78
+
79
+ ${}^{1}$ In this paper we only consider line contacts between the tool and environment, but we believe our method is straightforward to extend to patch contacts by using a richer contact descriptor, e.g. the convex hull of a set of points.
80
+
81
+ ---
82
+
83
+ ![01963fb7-0814-7552-8dff-869fb5fbe3da_3_319_213_1160_287_0.jpg](images/01963fb7-0814-7552-8dff-869fb5fbe3da_3_319_213_1160_287_0.jpg)
84
+
85
+ Figure 2: Our proposed contact feature dynamics model. Our architecture embeds raw observations into a latent space where dynamics can be unrolled. We then decode the contact state from the latent space. To account for our impedance controller, we also predict an action offset term which "learns the impedance" in order to relate predicted contact geometries to the world frame.
86
+
87
+ Finally, we recover our contact state estimates with our decoder $d$ given the latent state ${\widehat{\mathbf{z}}}_{t}$ : ${\widehat{c}}_{t}^{b},{}^{E{E}_{t}}{\widehat{\mathbf{c}}}_{t}^{l},{\widehat{\mathbf{c}}}_{t}^{w} = d\left( {\widehat{\mathbf{z}}}_{t}\right)$ . We can then recover the predicted contact line in the world frame by composing with the estimate of the end effector frame transformation ${}^{W}{\widehat{T}}_{E{E}_{t}}$ .
88
+
89
+ An overview of the model architectures is shown in Fig. 2 and full architecture details can be found in Appendix A.
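+
+ The following sketch shows schematically how such a model could be unrolled at planning time; it is our own illustration, not the released implementation. `encoder`, `dynamics`, and `decoder` stand in for the learned networks, actions and offsets are assumed to be 6D (translation plus axis-angle), and the axis-angle conversion uses SciPy.
+
+ ```python
+ import numpy as np
+ from scipy.spatial.transform import Rotation as R
+
+ def pose_from_6d(a):
+     """4x4 SE(3) transform from a 6D vector: translation (3) + axis-angle (3)."""
+     a = np.asarray(a)
+     T = np.eye(4)
+     T[:3, :3] = R.from_rotvec(a[3:]).as_matrix()
+     T[:3, 3] = a[:3]
+     return T
+
+ def rollout(encoder, dynamics, decoder, v0, h0, T_w_ee0, actions):
+     """Unroll the contact feature dynamics over a candidate action sequence.
+     Returns per-step (contact prob., contact line in EE frame, wrench, EE pose)."""
+     z = encoder(v0, h0)                              # embed observations into the latent space
+     T_w_ee = np.array(T_w_ee0)                       # initial pose from proprioception
+     preds = []
+     for a in actions:
+         z, delta_a = dynamics(z, a)                  # latent step + impedance offset
+         T_w_ee = T_w_ee @ pose_from_6d(a) @ pose_from_6d(delta_a)
+         c_b, c_line_ee, c_w = decoder(z)             # decode the contact state
+         preds.append((c_b, c_line_ee, c_w, T_w_ee))
+     return preds
+ ```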
90
+
91
+ #### 4.1.1 Training Loss
92
+
93
+ We train our model on rollouts of the system, where a single example is a sequence ${\left\lbrack {\mathbf{v}}_{t},{\mathbf{h}}_{t},{\mathbf{a}}_{t},\Delta {\mathbf{a}}_{t},{\mathbf{c}}_{t}^{l},{\mathbf{c}}_{t}^{w},{c}_{t}^{b}\right\rbrack }_{t = 0}^{T}$ . We define the loss over the example as:
94
+
95
+ $$
96
+ {\mathcal{L}}_{\theta } = \left( {\mathop{\sum }\limits_{{t = 0}}^{T}{BCE}\left( {{\widehat{c}}_{t}^{b},{c}_{t}^{b}}\right) + \alpha \cdot {c}_{t}^{b} \cdot {MSE}\left( {{\widehat{\mathbf{c}}}_{t}^{l},{\mathbf{c}}_{t}^{l}}\right) + \beta \cdot {MSE}\left( {{\widehat{\mathbf{c}}}_{t}^{w},{\mathbf{c}}_{t}^{w}}\right) }\right) \tag{4}
97
+ $$
98
+
99
+ $$
100
+ + \left( {\mathop{\sum }\limits_{{t = 1}}^{T}\rho \cdot \operatorname{MSE}\left( {\Delta {\widehat{\mathbf{a}}}_{t},\Delta {\mathbf{a}}_{t}}\right) + \gamma \cdot \operatorname{MSE}\left( {{\widehat{\mathbf{z}}}_{t}, e\left( {{\mathbf{v}}_{t},{\mathbf{h}}_{t}}\right) }\right) }\right) \tag{5}
101
+ $$
102
+
103
+ Here BCE is the Binary Cross Entropy classification loss and MSE is the Mean Square Error regression loss. $\alpha ,\beta ,\rho$ and $\gamma$ are loss weighting terms. The first four loss terms are prediction losses over the contact mode, contact geometry, end effector wrench, and impedance offset transformation. The final loss term is a latent consistency loss, which encourages latent rollouts to match the latent state yielded by encoding future observations.
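+
+ A schematic PyTorch-style rendering of Eq. 4-5 is given below; it is a sketch rather than the exact training code, and it assumes predictions and labels are gathered into dictionaries of per-step tensors.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def contact_dynamics_loss(pred, target, alpha=100.0, beta=0.1, rho=0.1, gamma=0.1):
+     """Rollout loss of Eq. 4-5. pred/target are dicts of per-step tensors:
+     'c_b' (contact prob./label), 'c_line', 'c_w', 'delta_a'; pred also holds
+     'z' (rolled-out latent) and 'z_enc' (latent from encoding that step's observation)."""
+     num_steps = len(target["c_b"])
+     loss = 0.0
+     for t in range(num_steps):
+         in_contact = target["c_b"][t]
+         loss = loss + F.binary_cross_entropy(pred["c_b"][t], in_contact)
+         loss = loss + alpha * in_contact * F.mse_loss(pred["c_line"][t], target["c_line"][t])
+         loss = loss + beta * F.mse_loss(pred["c_w"][t], target["c_w"][t])
+         if t >= 1:  # offset and latent-consistency terms only apply to predicted steps
+             loss = loss + rho * F.mse_loss(pred["delta_a"][t], target["delta_a"][t])
+             loss = loss + gamma * F.mse_loss(pred["z"][t], pred["z_enc"][t])
+     return loss
+ ```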
104
+
105
+ ### 4.2 Extrinsic Contact Servoing Controller
106
+
107
+ We propose to solve our planning problem using Model Predictive Path Integral (MPPI), which has been shown to be effective for continuous control tasks where sampling is cheap and parallelizable (e.g., neural network representations) [21]. We convert our binary constraint to a penalty, penalizing a trajectory if it yields actions that lead out of contact. Pairing this with the contact line prediction loss yields the following final cost function:
108
+
109
+
110
+
111
+ $$
112
+ \mathop{\sum }\limits_{{t = 1}}^{T}d\left( {{}^{W}{\widehat{\mathbf{c}}}_{t}^{l},{\mathbf{g}}_{i + t}}\right) + \phi \cdot \left\{ \begin{array}{ll} \left| {{\widehat{c}}_{t}^{b} - \psi }\right| & \text{ if }{\widehat{c}}_{t}^{b} < \psi \\ 0 & \text{ o.w. } \end{array}\right. \tag{6}
113
+ $$
114
+
115
+ The constraint is violated if the likelihood of binary contact is below the classification threshold $\psi$ , in which case we penalize by the distance to the threshold. $\phi$ weights the penalty against the contact line loss. With this cost function we apply MPPI to yield the next action and execute it on the robot.
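+
+ A minimal sketch of this trajectory cost (Eq. 6) is shown below; it reuses the `contact_line_distance` helper from the earlier sketch, and the default threshold and weight simply mirror the values reported in Sec. 5.3.
+
+ ```python
+ def servo_cost(step_preds, goals, psi=0.45, phi=0.05):
+     """Cost of one sampled action trajectory for MPPI: contact line tracking
+     error plus a hinge penalty whenever the predicted contact probability
+     falls below the classification threshold psi."""
+     cost = 0.0
+     for (c_b, c_line_w), g in zip(step_preds, goals):
+         cost += contact_line_distance(c_line_w, g)  # tracking term
+         if c_b < psi:
+             cost += phi * abs(c_b - psi)            # out-of-contact penalty
+     return cost
+ ```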
116
+
117
+ ### 4.3 Extrinsic Contact Dynamics Labeling
118
+
119
+ Our contact dynamics training loss in Eq. 4 requires ground truth contact state labels $\left( {{\mathbf{c}}_{t}^{l},{c}_{t}^{b},{\mathbf{c}}_{t}^{w}}\right)$ at time $t$. As accurate simulation of contact-rich interactions is challenging, we propose a method of data acquisition directly in the real world. To generate contact line labels, we use a high-resolution, low-frequency scanner (a Photoneo PhoXi 3D Scanner) to obtain high-quality scans of the contact interaction. Using these scans, we generate contact line labels by filtering points just above the table, clipped to the area around the end effector. We then cluster these points to remove noisy points on the tabletop and generate the contact line ${\mathbf{c}}_{t}^{l}$ by selecting the two furthest points in the cluster. See Appendix B for examples of contact labels. We use a force torque sensor to identify the contact state wrench ${\mathbf{c}}_{t}^{w}$ and threshold the wrench to identify binary contact ${c}_{t}^{b}$ automatically. This setup readily transfers to new robots and tools, so long as a force torque sensor is available on the robot wrist.
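+
+ The sketch below illustrates one way such a labeling step could look, assuming the scan has already been clipped to the region around the end effector; the band width and clustering parameters are illustrative, and DBSCAN stands in for the (unspecified) clustering method.
+
+ ```python
+ import numpy as np
+ from sklearn.cluster import DBSCAN
+
+ def contact_line_label(scan_points, table_height, band=0.005, cluster_eps=0.01):
+     """Label a contact line from a high-resolution scan: keep points in a thin
+     band just above the table, cluster away stray tabletop points, then take
+     the two furthest points of the largest cluster as the line endpoints."""
+     near = scan_points[np.abs(scan_points[:, 2] - table_height) < band]
+     labels = DBSCAN(eps=cluster_eps, min_samples=5).fit_predict(near)
+     largest = np.bincount(labels[labels >= 0]).argmax()            # biggest cluster id
+     cluster = near[labels == largest]
+     dists = np.linalg.norm(cluster[:, None, :] - cluster[None, :, :], axis=-1)
+     i, j = np.unravel_index(dists.argmax(), dists.shape)           # furthest point pair
+     return np.stack([cluster[i], cluster[j]])                      # (2, 3) contact line
+ ```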
120
+
121
+ ## 5 Results
122
+
123
+ Our experiments seek to answer three questions: First, can we accurately model the contact feature dynamics when there are no objects on the table? Second, can we perform extrinsic contact servoing in this environment? Third, can our controller be applied to an object scraping task, where it must handle visual occlusions and reaction forces arising from contact with the target object?
124
+
125
+ ### 5.1 Experimental Setup
126
+
127
+ We test our method on a Franka Emika Panda with a rigidly-mounted compliant spatula at the end effector (Fig. 1). For our observations ${\mathbf{o}}_{0}$, we use pointclouds ${\mathbf{v}}_{0}$ from an Intel Realsense D435 sensor ${}^{2}$ and mount an ATI Gamma Force/Torque sensor between the end effector and tool. We use the last four wrench values received after the previous action completed as the tactile input ${\mathbf{h}}_{0}$. To collect our dataset, we use a random action policy with a heuristic to encourage contact between the tool and the table. No other objects are on the tabletop during data collection to allow proper data supervision, as detailed in Sec. 4.3. We collect 489 random trajectories of data (22005 rollout sequences) to train and test the model. We use 80/10/10 train/validation/test splits of the trajectories.
128
+
129
+ ### 5.2 Modeling Contact Feature Dynamics
130
+
131
+ We first investigate the ability of our model to capture the contact feature dynamics exhibited in our dataset. Due to the novelty of our representation, to the authors' knowledge, there are no existing visuo-tactile extrinsic contact feature learning methods that are directly comparable. Thus we focus our comparisons on ablations to understand the importance of the components of our proposed neural network model. We train three variations. First is the full model, as described in Sec. 4.1, hereafter called "Full Model." Second, to understand the importance of modeling the impedance of the robot, we ablate the impedance offset action prediction, thus we propagate the end effector frame only with the commanded action. We call this method "No Impedance." Finally, we investigate our model trained only on visual input data, called "Vision-Only," to demonstrate the drop in performance when we do not use force data.
132
+
133
+ We train our contact feature dynamics model on our dataset with a rollout horizon of $T = 3$. All methods are trained with the Adam optimizer [22] until convergence on the validation set. We set $\alpha = {100.0}$, $\beta = \gamma = \rho = {0.1}$ in our loss term in Eq. 4 to balance the scale of the terms.
134
+
135
+ ---
136
+
137
+ ${}^{2}$ We don’t use the high-fidelity Photoneo scan as it is a very low-frequency scanner.
138
+
139
+ ---
140
+
141
+ ![01963fb7-0814-7552-8dff-869fb5fbe3da_5_321_213_1123_330_0.jpg](images/01963fb7-0814-7552-8dff-869fb5fbe3da_5_321_213_1123_330_0.jpg)
142
+
143
+ Figure 3: Contact Feature Dynamics Performance: without our impedance offset term, contact line quickly drifts; without tactile inputs it is difficult to accurately model end effector wrenches.
144
+
145
+ ![01963fb7-0814-7552-8dff-869fb5fbe3da_5_325_657_1146_265_0.jpg](images/01963fb7-0814-7552-8dff-869fb5fbe3da_5_325_657_1146_265_0.jpg)
146
+
147
+ Figure 4: Qualitative Scraped Areas: our controller (blue) is able to closely match desired contact trajectories (red).
148
+
149
+ We compare the prediction performance of the models on the test split of the dataset (2250 examples) in Fig. 3 (see Appendix C for torque prediction results, which are very similar to the force results). Our results show that we can effectively learn contact feature dynamics, with the Full Model achieving around ${90}\%$ accuracy, about $5\mathrm{\;{mm}}$ contact line error, and less than a Newton of contact force error. Fig. 3b shows that without the impedance offset term, the contact line estimate drifts significantly due to the difference between commanded and realized actions, which makes the estimate of the end effector pose increasingly inaccurate. The Full Model shows a slight improvement over the Vision-Only method throughout. The Vision-Only model performs especially poorly in contact force estimation (Fig. 3c), as it does not receive contact wrenches as input.
150
+
151
+ ### 5.3 Extrinsic Contact Servoing
152
+
153
+ Next we investigate how our proposed controller performs when following specified contact trajectories. Methods that learn latent dynamics for model predictive planning in an unsupervised fashion [7, 9] could be applied to our system to learn the dynamics and potentially plan with them; however, specifying contact trajectory goals in the learned latent spaces would require significant new capabilities for these models. Similarly, methods that recover tool/environment geometries [13, 10] from sensory data would also require significant changes to predict our contact dynamics. We leave these investigations to future work, and focus here on the performance of our proposed methodology.
154
+
155
+ Obstacle-Free: We start by attempting to servo along four different contact trajectories in the obstacle-free environment. The desired contact trajectories are shown in Fig. 4, and explore translation of contact as well as cases where the robot must tilt the tool to achieve a contact smaller than the width of the tool. We use the same labeling technique introduced in Sec. 4.3 to get ground truth contact trajectories executed by the controller.
156
+
157
+ We use the controller described in Sec. 4.2, with $\psi = {0.45}$, $\phi = {0.05}$. All planning is done with the Full Model. To investigate the planning performance, we run the controller ten times per trajectory. We use the Intersection over Union (IoU) of the desired and swept contact areas as our performance metric. We construct the goal contact area by sweeping the space between the specified goal contact lines, and the realized swept area by assuming that the space between two consecutive contact states was swept out whenever both states were in contact.
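+
+ As an illustration, the swept-area IoU could be computed from the recorded contact lines roughly as follows (a sketch using Shapely; endpoint ordering is assumed consistent between consecutive lines, which the real pipeline would need to enforce).
+
+ ```python
+ from shapely.geometry import Polygon
+ from shapely.ops import unary_union
+
+ def swept_area(contact_lines, in_contact):
+     """Union of the quadrilaterals swept between consecutive in-contact lines.
+     Each contact line is a (2, 3) array; only the x-y coordinates are used."""
+     quads = []
+     for l0, l1, c0, c1 in zip(contact_lines[:-1], contact_lines[1:],
+                               in_contact[:-1], in_contact[1:]):
+         if c0 and c1:
+             quads.append(Polygon([l0[0][:2], l0[1][:2], l1[1][:2], l1[0][:2]]))
+     return unary_union(quads)
+
+ def swept_iou(realized, goal):
+     """IoU of the realized and goal swept contact areas."""
+     return realized.intersection(goal).area / realized.union(goal).area
+ ```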
158
+
159
+ ![01963fb7-0814-7552-8dff-869fb5fbe3da_6_311_203_1173_383_0.jpg](images/01963fb7-0814-7552-8dff-869fb5fbe3da_6_311_203_1173_383_0.jpg)
160
+
161
+ Figure 6: Example of extrinsic contact servoing execution for the "Straight" target obstacle scrape experiment. Our method is able to accurately servo along the desired contact trajectory and successfully scrape the target.
162
+
163
+ We show qualitative examples of controller-realized scrapes compared to the goal scrapes in Fig. 4. We see that the controller is able to closely match the desired swept areas, including in the difficult tilting problems. Fig. 5 shows the average IoU scores over the ten controller rollouts on each desired contact trajectory. For the two trajectories focused on contact line translation, we achieve high IoU (near ${80}\%$). For the tilting case, we see a drop in performance, likely because contact transitions and smaller contact geometries are more difficult to control.
164
+
165
+ With Obstacles: We next examine our method's robustness to visual occlusions and reaction forces arising from contact with a target object to be scraped. This task is common in construction, cooking, and cleaning. A deformable and slightly adhesive material (Play-dough) is pressed onto the surface and a contact trajectory is specified through the object. In one case, the target object is alone on the tabletop (Fig. 6), and thus we specify a contact trajectory using the full width of the tool. In the second scenario, obstacles are on the table near the target object (Fig. 1), thus we must specify a contact trajectory that avoids them.
166
+
167
+ ![01963fb7-0814-7552-8dff-869fb5fbe3da_6_926_1018_489_373_0.jpg](images/01963fb7-0814-7552-8dff-869fb5fbe3da_6_926_1018_489_373_0.jpg)
168
+
169
+ Figure 5: Extrinsic Contact Servoing IoU: our controller realizes diverse contact goal trajectories with high overlap. We see reduced performance for the challenging case of contact trajectories that require tilting.
170
+
171
+ Besides running our Full Model in these scenarios, we investigated enhancements of the method to aid its performance in the presence of visual occlusions and object reaction forces. First, we applied data augmentation to our training dataset, randomly generating ellipsoids in the pointcloud and using a hidden point removal algorithm [23] to provide corresponding occlusions in the original point cloud. See Appendix B for examples of augmented inputs. We call this method "Full Model + Aug." Second, we investigate using the difference between the predicted and observed wrenches $\Delta \mathbf{w} = {\widehat{\mathbf{c}}}_{t}^{w} - {\mathbf{c}}_{t}^{w}$ to derive an action offset that compensates for the extra wrench experienced by the robot. From $\Delta \mathbf{w}$ we derive an action that counteracts this wrench offset, ${\widehat{\mathbf{a}}}_{t} = \frac{\Delta \mathbf{w}}{{\mathbf{k}}_{p}}$, where ${\mathbf{k}}_{p}$ are the pose gains of the impedance controller. The offset action is composed with the original action from the controller. We call this method "Full Model + Aug + Wrench Offset."
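+
+ A minimal sketch of this wrench-offset action (our own rendering, assuming per-axis gains) is:
+
+ ```python
+ import numpy as np
+
+ def wrench_offset_action(w_pred, w_obs, k_p):
+     """Pose offset that compensates the unmodeled reaction wrench from the
+     target object, using the impedance controller's per-axis pose gains k_p."""
+     delta_w = np.asarray(w_pred) - np.asarray(w_obs)
+     return delta_w / np.asarray(k_p)   # composed with the planned action before execution
+ ```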
172
+
173
+ We use two metrics. First, we measure the approximate mass of the target object to be scraped before and after scraping and determine the percentage of mass successfully removed. Second, we compare the $2\mathrm{D}$ footprint of the material before and after scraping and report the percentage of the footprint successfully removed. The second is a more challenging metric, since even a slightly wrong scrape will leave residue. The quantitative results over 5 runs of each method in each experiment setup are shown in Fig. 7. Examples of scrape executions are shown in Fig. 1 and Fig. 6. See Appendix C for more examples of scrape results.
174
+
175
+ ![01963fb7-0814-7552-8dff-869fb5fbe3da_7_331_226_1129_357_0.jpg](images/01963fb7-0814-7552-8dff-869fb5fbe3da_7_331_226_1129_357_0.jpg)
176
+
177
+ Figure 7: Target Scraping Results: Our Full Model and variations for addressing visual occlusions and reaction forces arising from contact with the target object perform comparably on both metrics over 5 trials on each experiment.
178
+
179
+ We see that in each case, all methods were able to remove over 95% of the material's mass on average and about ${40} - {60}\%$ of the material's footprint from the tabletop. Surprisingly, we do not see a consistent improvement from training our model with visual occlusions or from adding action offsets. The Full Model's robustness to occlusions here could be due to the fact that we use a single tool in these experiments, so in most cases it may be sufficient to capture the location of the table with respect to the end effector in order to estimate the contact line. Even with visual occlusions near the tool contact, our method can likely still recover the relative pose of the tabletop from the surrounding points. The lack of clear improvement from the wrench offset action may be because replanning at every step, as our MPPI controller does, is already sufficient.
180
+
181
+ ## 6 Limitations and Conclusion
182
+
183
+ Limitations: A common failure mode for our method is controlling contact when the tool is tilted, where the method is more likely to yield actions that take the robot out of contact. This could, in part, be due to data imbalance, as it is difficult to fully cover the space of desired contact configurations. In future work, we are interested in utilizing online learning [9] or curiosity [24] to more effectively cover the space of contacts in our dataset.
184
+
185
+ There are cases where tool-to-environment contact is not well represented as a contact line. Consider, for instance, the task of removing a nail with a hammer or putting the cap on a bottle. Extending our contact feature dynamics to these tasks will require expanding our representation learning method to cover more diverse contact specifications and more complex contact modes. One approach to handling these more complex interactions is to leverage collocated tactile sensing as the robot makes contact with the environment [25, 26]. The resulting tactile cues can provide sufficient information to characterize these richer interactions.
186
+
187
+ Finally, our method relies upon supervision. For future contact-rich tasks of interest, the need for labels could become more costly. We hope to investigate how we can remove reliance upon supervision by exploring few/zero shot generalization [27, 28] and domain randomization techniques [29].
188
+
189
+ Conclusion: Our approach simplifies contact-rich interactions for compliant tool manipulation by avoiding the need for full system identification, while maintaining interpretability and accuracy by explicitly modeling the contact state of the system and how it evolves. In the future, we wish to investigate our method's applicability to other tasks where full state estimation is difficult but the contact state is crucial, such as wiping with a cloth. Additionally, we wish to investigate our method's ability to generalize to multiple tools and handle more severe occlusions and forceful interactions.
190
+
191
+ References
192
+
193
+ [1] R. Martín-Martín, M. A. Lee, R. Gardner, S. Savarese, J. Bohg, and A. Garg. Variable impedance control in end-effector space: An action space for reinforcement learning in contact-rich tasks. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1010-1017. IEEE, 2019.
194
+
195
+ [2] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334-1373, 2016.
196
+
197
+ [3] M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg. Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8943-8950. IEEE, 2019.
198
+
199
+ [4] S. Kim and A. Rodriguez. Active extrinsic contact sensing: Application to general peg-in-hole insertion. arXiv preprint arXiv:2110.03555, 2021.
200
+
201
+ [5] O. Kroemer, S. Niekum, and G. D. Konidaris. A review of robot learning for manipulation: Challenges, representations, and algorithms. Journal of machine learning research, 22(30), 2021.
202
+
203
+ [6] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International journal of robotics research, 37(4-5):421-436, 2018.
204
+
205
+ [7] W. Yan, A. Vangipuram, P. Abbeel, and L. Pinto. Learning predictive representations for deformable objects using contrastive estimation. In Conference on Robot Learning, pages 564-574. PMLR, 2021.
206
+
207
+ [8] P. Mitrano, D. McConachie, and D. Berenson. Learning where to trust unreliable models in an unstructured world for deformable object manipulation. Science Robotics, 6(54):eabd8170, 2021.
208
+
209
+ [9] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2555-2565. PMLR, 09-15 Jun 2019. URL https://proceedings.mlr.press/v97/hafner19a.html.
210
+
211
+ [10] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4460-4470, 2019.
212
+
213
+ [11] C. B. Choy, D. Xu, J. Gwak, K. Chen, and S. Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In European conference on computer vision, pages 628-644. Springer, 2016.
214
+
215
+ [12] D. Watkins-Valls, J. Varley, and P. Allen. Multi-modal geometric learning for grasping and manipulation. In 2019 International conference on robotics and automation (ICRA), pages 7339-7345. IEEE, 2019.
216
+
217
+ [13] Y. Wi, P. Florence, A. Zeng, and N. Fazeli. Virdo: Visio-tactile implicit representations of deformable objects. arXiv preprint arXiv:2202.00868, 2022.
218
+
219
+ [14] N. Fazeli, S. Zapolsky, E. Drumwright, and A. Rodriguez. Fundamental limitations in performance and interpretability of common planar rigid-body contact models. In Robotics Research, pages 555-571. Springer, 2020.
220
+
221
+ [15] L. Manuelli, Y. Li, P. Florence, and R. Tedrake. Keypoints into the future: Self-supervised correspondence in model-based reinforcement learning. arXiv preprint arXiv:2009.05085, 2020.
222
+
223
+ [16] L. Manuelli and R. Tedrake. Localizing external contact using proprioceptive sensors: The contact particle filter. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5062-5069. IEEE, 2016.
224
+
225
+ [17] D. Ma, S. Dong, and A. Rodriguez. Extrinsic contact sensing with relative-motion tracking from distributed tactile measurements. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 11262-11268. IEEE, 2021.
226
+
227
+ [18] Q. Li, C. Schürmann, R. Haschke, and H. J. Ritter. A control framework for tactile servoing. In Robotics: Science and systems. Citeseer, 2013.
228
+
229
+ [19] G. Sutanto, N. Ratliff, B. Sundaralingam, Y. Chebotar, Z. Su, A. Handa, and D. Fox. Learning latent space dynamics for tactile servoing. In 2019 International Conference on Robotics and Automation (ICRA), pages 3622-3628. IEEE, 2019.
230
+
231
+ [20] A. Byravan and D. Fox. Se3-nets: Learning rigid body motion using deep neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 173-180, 2017. doi:10.1109/ICRA.2017.7989023.
232
+
233
+ [21] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou. Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1714-1721, 2017. doi: 10.1109/ICRA.2017.7989202.
234
+
235
+ [22] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
236
+
237
+ [23] S. Katz, A. Tal, and R. Basri. Direct visibility of point sets. In ACM SIGGRAPH 2007 papers, pages 24-es. 2007.
238
+
239
+ [24] S. Rajeswar, C. Ibrahim, N. Surya, F. Golemo, D. Vazquez, A. Courville, and P. O. Pinheiro. Haptics-based curiosity for sparse-reward tasks. In Conference on Robot Learning, pages 395- 405. PMLR, 2022.
240
+
241
+ [25] I. Taylor, S. Dong, and A. Rodriguez. GelSlim 3.0: High-resolution measurement of shape, force and slip in a compact tactile-sensing finger. arXiv preprint arXiv:2103.12269, 2021.
242
+
243
+ [26] A. Alspach, K. Hashimoto, N. Kuppuswamy, and R. Tedrake. Soft-bubble: A highly compliant dense geometry tactile sensor for robot manipulation. In 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft), pages 597-604. IEEE, 2019.
244
+
245
+ [27] S. Gidaris and N. Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4367- 4375, 2018.
246
+
247
+ [28] F. Zhao, J. Zhao, S. Yan, and J. Feng. Dynamic conditional networks for few-shot learning. In Proceedings of the European conference on computer vision (ECCV), pages 19-35, 2018.
248
+
249
+ [29] P. Mitrano and D. Berenson. Data augmentation for manipulation. arXiv preprint arXiv:2205.02886, 2022.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/5GJ-_KMLASa/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,181 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § LEARNING THE DYNAMICS OF COMPLIANT TOOL-ENVIRONMENT INTERACTION FOR VISUO-TACTILE CONTACT SERVOING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Many manipulation tasks require the robot to control the contact between a grasped compliant tool and the environment, e.g. scraping a frying pan with a spatula. However, modeling tool-environment interaction is difficult, especially when the tool is compliant, and the robot cannot be expected to have the full geometry and physical properties (e.g., mass, stiffness, and friction) of all the tools it must use. We propose a framework that learns to predict the effects of a robot's actions on the contact between the tool and the environment given visuo-tactile perception. Key to our framework is a novel contact feature representation that consists of a binary contact value, the line of contact, and an end-effector wrench. We propose a method to learn the dynamics of these contact features from real world data that does not require predicting the geometry of the compliant tool. We then propose a controller that uses this dynamics model for visuo-tactile contact servoing and show that it is effective at performing scraping tasks with a spatula, even in scenarios where precise contact needs to be made to avoid obstacles.
12
+
13
+ Keywords: Contact-Rich Manipulation, Multi-Modal Dynamics Learning
14
+
15
+ § 17 1 INTRODUCTION
16
+
17
+ < g r a p h i c s >
18
+
19
+ Figure 1: We present a method for extrinsic contact servoing, i.e., controlling contact between a compliant tool and the environment. Our method is able to complete the requested contact trajectory, avoiding contact with surface obstacles, and successfully scrape the target object. Note that to do this the spatula must be tilted so that only a corner of it is in contact.
20
+
21
+ Many manipulation tasks require the robot to control the contact between a grasped tool and the environment. The ability to reason over and control this extrinsic contact is crucial to enabling helpful robots that can scrape a frying pan with a spatula, eraser or wipe a surface [1], screw a bottle cap onto a bottle [2], perform peg-in-hole assemblies [3, 4], and perform many other tasks. In this work, we seek to address the problem of controlling the extrinsic contact between a grasped compliant tool (e.g. a spatula) and the environment. In general, the robot cannot expect to have the full geometry and physical properties (e.g., mass, friction, stiffness) of all the tools it must use or the geometries of the environments it must manipulate in. Instead, the robot must utilize multimodal sensory observations, such as pointclouds and tactile feedback, to act on the environment.
22
+
23
+ In recent years, learning-based methods have become increasingly popular to address the complexities of robotic manipulation, including for contact-rich tasks [5]. These methods can be loosely grouped into model-free methods, that directly learn a policy [3, 2, 6], and model-based methods, that learn system dynamics $\left\lbrack {7,8,9}\right\rbrack$ . By focusing on modeling system dynamics, model-based methods can plan to reach new goals without retraining, and are often more data-efficient [9]. Therefore, we propose learning the dynamics of our system to solve the extrinsic contact servoing task.
24
+
25
+ It is not obvious which representation to use for these dynamics. Fully recovering tool or environment geometries from visual data $\left\lbrack {{10},{11}}\right\rbrack$ and tactile feedback $\left\lbrack {12}\right\rbrack$ has been widely explored, with recent extensions to compliant geometries [13]; however, even if the system can be fully identified, contact models to resolve interactions can have limited fidelity [14]. On the other hand, learned dynamics representations can be difficult to interpret and require demonstrations or observations from the desired state to specify goals $\left\lbrack {7,{15}}\right\rbrack$ . Instead, we propose a novel contact feature representation for our learning method that focuses on tool-environment interaction and bypasses explicitly modeling the whole system. We represent the contact configuration as 1) a binary contact mode (indicating if the system is in contact); 2) a contact geometry (as a line in 3D space); and 3) an end-effector wrench.
26
+
27
+ We propose a learning architecture to model the dynamics of the proposed contact representation from raw sensory observations over candidate action trajectories. We propose structuring the model as a latent space dynamics model with a decoder that recovers the contact state. We also propose an impedance action offset term in the dynamics that allows us to accurately propagate robot poses, despite robot impedance. To provide labels to our model, we collect self-supervised data on a 7DoF Franka Emika Panda, using sensor data to automatically label the contact state.
28
+
29
+ We validate our proposed method by completing various desired contact trajectories on the real robot system. We first show that our method can track diverse desired contact trajectories in the absence of obstacles. Next, we demonstrate that we can utilize extrinsic contact servoing to scrape a target object from the table, while handling occlusions and avoiding contact with obstacles.
30
+
31
+ In summary, we make the following contributions:
32
+
33
+ * We present a framework for modeling compliant tool-environment contact interactions by learning contact feature dynamics.
34
+
35
+ * We propose a learned model architecture to capture the dynamics of contact features, trained in a supervised fashion using real world self-supervised data.
36
+
37
+ * We design and demonstrate a controller using the contact feature dynamics to realize diverse goal trajectories, including in the presence of obstacles.
38
+
39
+ § 2 RELATED WORK
40
+
41
+ Existing research has investigated the task of recovering contact locations. Manuelli et al. [16] localize point contacts on a rigid robot with known geometry by employing a particle filtering approach to update a set of candidate contact locations based on force torque sensing. Kim et al. [4] and Ma et al. [17] model contact between a grasped rigid object and the environment by assuming stationary line contacts and modeling the deformation of a GelSlim gripper. The estimated line contact is then used in a Reinforcement Learning (RL) policy. Neither of these methods extends to compliant tools and neither models the dynamics of the contact configuration.
42
+
43
+ Other works explore tactile servoing methods, where contact at the sensor is driven to a desired configuration. Li et al. [18] use a large tactile pad and define contact configuration features of objects pressed against the sensor. They manually construct a feedback controller based on these features and use it to drive contacts to desired configurations. Sutanto et al. [19] use a smaller profile tactile sensor and learn the dynamics of a learned latent space. They then employ a MPC scheme to drive contacts to desired configurations on the sensor. Both of these works assume contact is happening at the sensing location. We, on the other hand, seek to servo extrinsic contacts, where we do not get direct sensing at the point of contact.
44
+
45
+ § 3 PROBLEM FORMULATION
46
+
47
+ We parameterize our contact feature as a binary contact indicator ${c}^{b} \in \{ 0,1\}$ indicating whether the tool is in contact, a contact line ${}^{1}{\mathbf{c}}^{l} \in {\mathbb{R}}^{2 \times 3}$ representing the contact geometry between the tool and the environment, and an end effector wrench ${\mathbf{c}}^{w} \in {\mathbb{R}}^{6}$. The geometry ${\mathbf{c}}^{l}$ is only defined when the tool is in contact (${c}^{b} = 1$). This contact representation allows extrinsic contact goals to be expressed as desired contact trajectories $G = \left\lbrack {{\mathbf{g}}_{1},{\mathbf{g}}_{2},\ldots ,{\mathbf{g}}_{L}}\right\rbrack$, where each ${\mathbf{g}}_{i} \in {\mathbb{R}}^{2 \times 3}$ is a desired contact line to reach. We assume that contact should be maintained throughout the task.
48
+
49
+ We formulate extrinsic contact servoing as a model predictive planning problem, given observations of the current state of the system ${\mathbf{o}}_{0}$ . For a given horizon $T$ , we select the next $T$ desired contact lines $\left\lbrack {{\mathbf{g}}_{i + 1},\ldots ,{\mathbf{g}}_{i + T}}\right\rbrack \subseteq G$ to be our current contact goal sequence. The planning problem is:
50
+
51
+ $$
52
+ \mathop{\min }\limits_{{\mathbf{a}}_{0 : T - 1}}\mathop{\sum }\limits_{{t = 1}}^{T}d\left( {{\mathbf{c}}_{t}^{l},{\mathbf{g}}_{i + t}}\right) \tag{1}
53
+ $$
54
+
55
+ $$
56
+ \text{ s.t. }{c}_{t}^{b} = 1,\forall t \in \left\lbrack {1,T}\right\rbrack \tag{2}
57
+ $$
58
+
59
+ $$
60
+ \left\{ {{c}_{0 : T}^{b},{\mathbf{c}}_{0 : T}^{l},{\mathbf{c}}_{0 : T}^{w}}\right\} = g\left( {{\mathbf{o}}_{0},{\mathbf{a}}_{0 : T - 1}}\right) \tag{3}
61
+ $$
62
+
63
+ Here $g$ is a model describing the contact feature dynamics. The binary constraint ensures that the tool remains in contact, while the cost function $d$ measures the distance between two contact lines as the average Euclidean distance between their corresponding endpoints. Finally, if $d\left( {{\mathbf{c}}_{1}^{l},{\mathbf{g}}_{i + 1}}\right) < \epsilon$ we increment $i$, thus moving to the next sequence of desired contact lines for the next round of planning.
64
+
65
+ § 4 METHOD
66
+
67
+ § 4.1 CONTACT FEATURE DYNAMICS MODEL
68
+
69
+ To solve our constrained optimization Eq. 1, we require a model $g$ which can map from raw observations ${\mathbf{o}}_{0}$ and a proposed action trajectory ${\mathbf{a}}_{0 : T - 1}$ to the resulting contact states $\left\{ {{c}_{0 : T}^{b},{\mathbf{c}}_{0 : T}^{l},{\mathbf{c}}_{0 : T}^{w}}\right\}$ . We propose modeling the contact feature dynamics as a deep neural network.
70
+
71
+ We assume access to a pointcloud ${\mathbf{v}}_{0}$ and input wrench ${\mathbf{h}}_{0}$ measured at the robot’s wrist as our observations, ${\mathbf{o}}_{0} = \left( {{\mathbf{v}}_{0},{\mathbf{h}}_{0}}\right)$ . Note that end effector wrench is both an input to our method and part of the contact state; predicting future wrench aids the representation learning and provides expected wrenches for planning.
72
+
73
+ We perform all learning in the local end effector frame. We transform the pointcloud to the end effector frame ${}^{E{E}_{0}}{\mathbf{v}}_{0}$ and clip to a ${0.5}{m}^{3}$ bounding box region around the end effector that contains the contact event. We similarly predict our contact lines in the current end effector frame, ${}^{E{E}_{t}}{\mathbf{c}}_{t}^{l},\forall t \in \left\lbrack {1,T}\right\rbrack$ . Learning in the end effector frame provides invariance in the visual domain to translations and rotations of the end effector and removes distractors that do not contribute to the contact state, such as the rest of the robot arm or the scene background.
74
+
75
+ Our contact feature dynamics model (Figure 2) has three components: an encoder $e$ which maps from raw observations to a learned latent space, a decoder $d$ which maps from the latent space to the contact state, and a dynamics model $f$ which captures dynamics in the latent space. We parameterize the models by a set of learned weights $\theta$. We start by embedding the current observations into the latent space with our encoder ${\widehat{\mathbf{z}}}_{0} = e\left( {{\mathbf{v}}_{0},{\mathbf{h}}_{0}}\right)$. We unroll actions in the latent space as the contact state alone has insufficient contextual information (e.g. end-effector pose and local geometry information) to predict the next contact state. Because we predict the $t$-th contact state in the current end effector frame $E{E}_{t}$, an important consideration when designing our dynamics model is being able to accurately recover this frame. Due to the contact-rich nature of our task, the low-level controller that executes ${\mathbf{a}}_{t}$ is an impedance controller, meaning the realized action may differ from the commanded action. To account for this, we predict an additional term from our dynamics model, $\Delta {\widehat{\mathbf{a}}}_{t + 1}$, which is an ${SE}\left( 3\right)$ transformation that predicts the offset between the commanded and realized next end effector pose. Thus, our dynamics model predicts ${\widehat{\mathbf{z}}}_{t + 1},\Delta {\widehat{\mathbf{a}}}_{t + 1} = f\left( {{\widehat{\mathbf{z}}}_{t},{\mathbf{a}}_{t}}\right)$. This allows us to construct the following recursive estimate of our end effector frame: ${}^{W}{\widehat{T}}_{E{E}_{t + 1}} = {}^{W}{\widehat{T}}_{E{E}_{t}}T\left( {\mathbf{a}}_{t}\right) T\left( {\Delta {\widehat{\mathbf{a}}}_{t + 1}}\right)$, where ${}^{W}{\widehat{T}}_{E{E}_{t}}$ is the ${SE}\left( 3\right)$ transformation describing the pose of the end effector at time $t$. We know the initial transform ${}^{W}{\widehat{T}}_{E{E}_{0}}$ from our robot proprioception, $T\left( {\mathbf{a}}_{t}\right)$ provides the transformation for the action command, and $T\left( {\Delta {\widehat{\mathbf{a}}}_{t + 1}}\right)$ for the predicted offset term. To enforce valid ${SE}\left( 3\right)$ predictions we predict rotations in the axis-angle representation [20].
76
+
77
+ ${}^{1}$ In this paper we only consider line contacts between the tool and environment, but we believe our method is straightforward to extend to patch contacts by using a richer contact descriptor, e.g. the convex hull of a set of points.
78
+
79
+ < g r a p h i c s >
80
+
81
+ Figure 2: Our proposed contact feature dynamics model. Our architecture embeds raw observations into a latent space where dynamics can be unrolled. We then decode the contact state from the latent space. To account for our impedance controller, we also predict an action offset term which "learns the impedance" in order to relate predicted contact geometries to the world frame.
82
+
83
+ Finally, we recover our contact state estimates with our decoder $d$ given the latent state ${\widehat{\mathbf{z}}}_{t}$ : ${\widehat{c}}_{t}^{b},{}^{E{E}_{t}}{\widehat{\mathbf{c}}}_{t}^{l},{\widehat{\mathbf{c}}}_{t}^{w} = d\left( {\widehat{\mathbf{z}}}_{t}\right)$ . We can then recover the predicted contact line in the world frame by composing with the estimate of the end effector frame transformation ${}^{W}{\widehat{T}}_{E{E}_{t}}$ .
84
+
85
+ An overview of the model architectures is shown in Fig. 2 and full architecture details can be found in Appendix A.
86
+
87
+ § 4.1.1 TRAINING LOSS
88
+
89
+ We train our model on rollouts of the system, where a single example is a sequence ${\left\lbrack {\mathbf{v}}_{t},{\mathbf{h}}_{t},{\mathbf{a}}_{t},\Delta {\mathbf{a}}_{t},{\mathbf{c}}_{t}^{l},{\mathbf{c}}_{t}^{w},{c}_{t}^{b}\right\rbrack }_{t = 0}^{T}$ . We define the loss over the example as:
90
+
91
+ $$
92
+ {\mathcal{L}}_{\theta } = \left( {\mathop{\sum }\limits_{{t = 0}}^{T}{BCE}\left( {{\widehat{c}}_{t}^{b},{c}_{t}^{b}}\right) + \alpha \cdot {c}_{t}^{b} \cdot {MSE}\left( {{\widehat{\mathbf{c}}}_{t}^{l},{\mathbf{c}}_{t}^{l}}\right) + \beta \cdot {MSE}\left( {{\widehat{\mathbf{c}}}_{t}^{w},{\mathbf{c}}_{t}^{w}}\right) }\right) \tag{4}
93
+ $$
94
+
95
+ $$
96
+ + \left( {\mathop{\sum }\limits_{{t = 1}}^{T}\rho \cdot \operatorname{MSE}\left( {\Delta {\widehat{\mathbf{a}}}_{t},\Delta {\mathbf{a}}_{t}}\right) + \gamma \cdot \operatorname{MSE}\left( {{\widehat{\mathbf{z}}}_{t},e\left( {{\mathbf{v}}_{t},{\mathbf{h}}_{t}}\right) }\right) }\right) \tag{5}
97
+ $$
98
+
99
+ Here BCE is the Binary Cross Entropy classification loss and MSE is the Mean Square Error regression loss. $\alpha ,\beta ,\rho$ and $\gamma$ are loss weighting terms. The first four loss terms are prediction losses over the contact mode, contact geometry, end effector wrench, and impedance offset transformation. The final loss term is a latent consistency loss, which encourages latent rollouts to match the latent state yielded by encoding future observations.
100
+
101
+ § 4.2 EXTRINSIC CONTACT SERVOING CONTROLLER
102
+
103
+ We propose to solve our planning problem using Model Predictive Path Integral (MPPI), which has been shown to be effective for continuous control tasks where sampling is cheap and parallelizable (e.g., neural network representations) [21]. We convert our binary constraint to a penalty, penalizing a trajectory if it yields actions that lead out of contact. Pairing this with the contact line prediction loss yields the following final cost function:
104
+
105
+
106
+
107
+ $$
108
+ \mathop{\sum }\limits_{{t = 1}}^{T}d\left( {{}^{W}{\widehat{\mathbf{c}}}_{t}^{l},{\mathbf{g}}_{i + t}}\right) + \phi \cdot \left\{ \begin{array}{ll} \left| {{\widehat{c}}_{t}^{b} - \psi }\right| & \text{ if }{\widehat{c}}_{t}^{b} < \psi \\ 0 & \text{ o.w. } \end{array}\right. \tag{6}
109
+ $$
110
+
111
+ The constraint is violated if the likelihood of binary contact is below the classification threshold $\psi$ , in which case we penalize by the distance to the threshold. $\phi$ weights the penalty against the contact line loss. With this cost function we apply MPPI to yield the next action and execute it on the robot.
112
+
113
+ § 4.3 EXTRINSIC CONTACT DYNAMICS LABELING
114
+
115
+ Our contact dynamics training loss in Eq. 4 requires ground truth contact state labels $\left( {{\mathbf{c}}_{t}^{l},{c}_{t}^{b},{\mathbf{c}}_{t}^{w}}\right)$ at time $t$ . As accurate simulation of contactful interactions is challenging, we propose a method of data acquisition directly in the real world. To generate contact line labels, we use a high resolution, low frequency scanner, a Photoneo PhoXi 3D Scanner, to generate high quality scans of the contact interaction. Using these scans, we generate contact line labels by filtering points just above the table, clipped to the area around the end effector. We then cluster these points to remove noisy points on the tabletop and generate the contact line ${\mathbf{c}}_{t}^{l}$ by selecting the two furthest points in the cluster. See Appendix B for examples of contact labels. We use a force torque sensor to identify the contact state wrench ${\mathbf{c}}_{t}^{w}$ and threshold the wrench to identify binary contact ${c}_{t}^{b}$ automatically. This setup is flexible to new robots and tools, so long as a force torque sensor is available on the robot wrist.
116
+
117
+ § 5 RESULTS
118
+
119
+ Our experiments seek to answer three questions: First, can we accurately model the contact feature dynamics when there are no objects on the table? Second, can we perform extrinsic contact servoing in this environment? Third, can our controller be applied to an object scraping task, where it must handle visual occlusions and reaction forces arising from contact with the target object?
120
+
121
+ § 5.1 EXPERIMENTAL SETUP
122
+
123
+ We test our method on a Franka Emika Panda with a rigidly-mounted compliant spatula at the end effector (Fig. 1). For our observations ${\mathbf{o}}_{0}$, we use pointclouds ${\mathbf{v}}_{0}$ from an Intel Realsense D435 sensor ${}^{2}$ and mount an ATI Gamma Force/Torque sensor between the end effector and tool. We use the last four wrench values received after the previous action completed as the tactile input ${\mathbf{h}}_{0}$. To collect our dataset, we use a random action policy with a heuristic to encourage contact between the tool and the table. No other objects are on the tabletop during data collection to allow proper data supervision, as detailed in Sec. 4.3. We collect 489 random trajectories of data (22005 rollout sequences) to train and test the model. We use 80/10/10 train/validation/test splits of the trajectories.
124
+
125
+ § 5.2 MODELING CONTACT FEATURE DYNAMICS
126
+
127
+ We first investigate the ability of our model to capture the contact feature dynamics exhibited in our dataset. Due to the novelty of our representation, to the authors' knowledge, there are no existing visuo-tactile extrinsic contact feature learning methods that are directly comparable. Thus we focus our comparisons on ablations to understand the importance of the components of our proposed neural network model. We train three variations. First is the full model, as described in Sec. 4.1, hereafter called "Full Model." Second, to understand the importance of modeling the impedance of the robot, we ablate the impedance offset action prediction, thus we propagate the end effector frame only with the commanded action. We call this method "No Impedance." Finally, we investigate our model trained only on visual input data, called "Vision-Only," to demonstrate the drop in performance when we do not use force data.
128
+
129
+ We train our contact feature dynamics on our dataset with a rollout horizon of $T = 3$ . All methods are trained with the Adam optimizer [22] until convergence on the validation set. We set $\alpha =$ ${100.0},\beta = \gamma = \rho = {0.1}$ in our loss term in Eq. 4 to balance the scale of the terms.
130
+
131
+ ${}^{2}$ We don’t use the high-fidelity Photoneo scan as it is a very low-frequency scanner.
132
+
133
+ < g r a p h i c s >
134
+
135
+ Figure 3: Contact Feature Dynamics Performance: without our impedance offset term, contact line quickly drifts; without tactile inputs it is difficult to accurately model end effector wrenches.
136
+
137
+ < g r a p h i c s >
138
+
139
+ Figure 4: Qualitative Scraped Areas: our controller (blue) is able to closely match desired contact trajectories (red).
140
+
141
+ We compare the prediction performance of the models on the test split of the dataset (2250 examples) in Fig. 3 (see Appendix C for torque prediction results, which are very similar to the force results). Our results show that we can effectively learn contact feature dynamics, with the Full Model achieving around ${90}\%$ accuracy, about $5\mathrm{\;{mm}}$ contact line error, and less than a Newton of contact force error. Fig. 3b shows that without the impedance offset term, the contact line estimate drifts significantly due to the difference between commanded and realized actions, which makes the estimate of the end effector pose increasingly inaccurate. The Full Model shows a slight improvement over the Vision-Only method throughout. The Vision-Only model performs especially poorly in contact force estimation (Fig. 3c), as it does not receive contact wrenches as input.
142
+
143
+ § 5.3 EXTRINSIC CONTACT SERVOING
144
+
145
+ Next we investigate how our proposed controller performs when following specified contact trajectories. Methods that learn latent dynamics for model predictive planning in an unsupervised fashion [7, 9] could be applied to our system to learn the dynamics and potentially plan with them; however, specifying contact trajectory goals in the learned latent spaces would require significant new capabilities for these models. Similarly, methods that recover tool/environment geometries [13, 10] from sensory data would also require significant changes to predict our contact dynamics. We leave these investigations to future work, and focus here on the performance of our proposed methodology.
146
+
147
+ Obstacle-Free: We start by attempting to servo along four different contact trajectories in the obstacle-free environment. The desired contact trajectories are shown in Fig. 4, and explore translation of contact as well as cases where the robot must tilt the tool to achieve a contact smaller than the width of the tool. We use the same labeling technique introduced in Sec. 4.3 to get ground truth contact trajectories executed by the controller.
148
+
149
+ We use the controller described in Sec. 4.2, with $\psi = {0.45},\phi = {0.05}$ . All planning is done with the Full Model. To investigate the planning performance, we run the controller ten times per trajectory. We use the Intersection over Union (IoU) of the desired and swept contact areas as our performance metric. We construct the goal contact area by sweeping the space between the specified goal contact lines and the realized controller swept area by assuming that the space between two consecutive contact states was swept out if the two states were both in contact.
150
+
151
+ < g r a p h i c s >
152
+
153
+ Figure 6: Example of extrinsic contact servoing execution for the "Straight" target obstacle scrape experiment. Our method is able to accurately servo along the desired contact trajectory and successfully scrape the target.
154
+
155
+ We show qualitative examples of controller realized scrapes compared to the goal scrapes in Fig. 4. We see that the controller is able to closely match the desired swept areas, including in the difficult tilting problems. Fig. 5 shows the average IoU scores over the ten controller rollouts on each desired contact trajectory. For the two trajectories focused on contact line translation, we achieve high IoU (near ${80}\%$ ). For the tilting case, we see a drop in performance, likely because it is more difficult to control transitions and smaller contact shapes.
156
+
157
+ With Obstacles: We next examine our method's robustness to visual occlusions and reaction forces arising from contact with a target object to be scraped. This task is common in construction, cooking, and cleaning. A deformable and slightly adhesive material (Play-dough) is pressed onto the surface and a contact trajectory is specified through the object. In one case, the target object is alone on the tabletop (Fig. 6), and thus we specify a contact trajectory using the full width of the tool. In the second scenario, obstacles are on the table near the target object (Fig. 1), thus we must specify a contact trajectory that avoids them.
158
+
159
+ < g r a p h i c s >
160
+
161
+ Figure 5: Extrinsic Contact Servoing IoU: our controller realizes diverse contact goal trajectories with high overlap. We see reduced performance for the challenging case of contact trajectories that require tilting.
162
+
163
+ Besides running our Full Model in these scenarios, we investigated enhancements of the method to aid its performance in the presence of visual occlusions and object reaction forces. First, we applied data augmentation to our training dataset, randomly generating ellipsoids in the pointcloud and using a hidden point removal algorithm [23] to provide corresponding occlusions in the original point cloud. See Appendix B for examples of augmented inputs. We call this method "Full Model + Aug." Second, we investigate using the difference between the predicted and observed wrenches $\Delta \mathbf{w} = {\widehat{\mathbf{c}}}_{t}^{w} - {\mathbf{c}}_{t}^{w}$ to derive an action offset that compensates for the extra wrench experienced by the robot. From $\Delta \mathbf{w}$ we derive an action that counteracts this wrench offset, ${\widehat{\mathbf{a}}}_{t} = \frac{\Delta \mathbf{w}}{{\mathbf{k}}_{p}}$, where ${\mathbf{k}}_{p}$ are the pose gains of the impedance controller. The offset action is composed with the original action from the controller. We call this method "Full Model + Aug + Wrench Offset."
164
+
165
+ We use two metrics. First, we measure the approximate mass of the target object to be scraped before and after scraping and determine the percentage of mass successfully removed. Second, we compare the $2\mathrm{D}$ footprint of the material before and after scraping and report the percentage of the footprint successfully removed. The second is a more challenging metric, since even a slightly wrong scrape will leave residue. The quantitative results over 5 runs of each method in each experiment setup are shown in Fig. 7. Examples of scrape executions are shown in Fig. 1 and Fig. 6. See Appendix C for more examples of scrape results.
166
+
167
168
+
169
+ Figure 7: Target Scraping Results: Our Full Model and variations for addressing visual occlusions and reaction forces arising from contact with the target object perform comparably on both metrics over 5 trials on each experiment.
170
+
171
+ We see that in each case, all methods were able to remove over 95% of the material mass on average and about ${40} - {60}\%$ of the material's footprint from the tabletop. Surprisingly, we do not see a consistent improvement from training our model with visual occlusions or from adding action offsets. The Full Model's robustness to occlusions here could be due to the fact that we use a single tool in these experiments, so in most cases it may be sufficient to capture the location of the table with respect to the end effector in order to estimate the contact line. Even with visual occlusions near the tool contact, our method can likely still recover the relative pose of the tabletop from the surrounding points. The lack of clear improvement from the wrench offset action may be due to the fact that replanning alone is sufficient, as we do at every step with our MPPI controller.
172
+
173
+ § 6 LIMITATIONS AND CONCLUSION
174
+
175
+ Limitations: A common failure mode for our method is in controlling contact when the tool is tilted, where it is more likely for the method to yield actions that take the robot out of contact. This could, in part, be due to data imbalance. It is difficult to fully cover the space of desired contact configurations. In future work, we are interested in utilizing online learning [9] or curiosity [24] to more effectively cover the space of contacts in our dataset.
176
+
177
+ There are cases where tool-to-environment contact is not well represented as a contact line. Consider, for instance, the task of removing a nail with a hammer or putting the cap on a bottle. Extending our contact feature dynamics to these settings will require expanding our representation learning method to handle more diverse contact specifications and more complex contact modes. One approach to handling these more complex interactions is to leverage collocated tactile sensing as the robot makes contact with the environment [25, 26]. The resulting tactile cues can provide sufficient information to characterize these richer interactions.
178
+
179
+ Finally, our method relies upon supervision. For future contact-rich tasks of interest, obtaining labels could become more costly. We hope to investigate how we can remove this reliance upon supervision by exploring few-/zero-shot generalization [27, 28] and domain randomization techniques [29].
180
+
181
+ Conclusion: Our approach simplifies contact-rich interactions for compliant tool manipulation by avoiding the need for full system identification, while maintaining interpretability and accuracy by explicitly modeling the contact state of the system and how it evolves. In the future, we wish to investigate our method's applicability to other tasks where full state estimation is difficult but the contact state is crucial, such as wiping with a cloth. Additionally, we wish to investigate our method's ability to generalize to multiple tools and to handle more severe occlusions and forceful interactions.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/6BIffCl6gsM/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,255 @@
1
+ # Efficient Tactile Simulation with Differentiability for Robotic Manipulation
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Efficient simulation of tactile sensors can unlock new opportunities for learning tactile-based manipulation policies in simulation and then transferring the learned policy to real systems, but fast and reliable simulators for dense tactile normal and shear force fields are still under-explored. We present a novel approach for efficiently simulating both the normal and shear tactile force field covering the entire contact surface with an arbitrary tactile sensor spatial layout. Our simulator also provides analytical gradients of the tactile forces to accelerate policy learning. We conduct extensive simulation experiments to showcase our approach and demonstrate successful zero-shot sim-to-real transfer for a high-precision peg-insertion task with high-resolution vision-based GelSlim tactile sensors.
12
+
13
+ Keywords: Tactile Simulation, Tactile Manipulation, Sim-to-Real
14
+
15
+ ## 1 Introduction
16
+
17
+ Just as humans heavily rely on rich and precise tactile cues for dexterous grasping and in-hand manipulation tasks, robots can also utilize tactile cues as an important source of sensing for interacting with the surrounding environment, especially when visual information is unavailable or occluded. With the recent development of various tactile sensors capable of generating dense normal or shear load information [1, 2, 3, 4], researchers have been exploring how to leverage this important mode of information for robotic manipulation tasks. With the dense tactile normal load field, the static spatial relation between the object and the robot manipulators can easily be inferred, which is useful for tasks such as edge following [5], pose estimation [6], and object reconstruction and recognition [7, 8]. On the other hand, the dense tactile shear force feedback more readily gives rich information about the dynamic tangential motions between the object and the manipulators, and thus can be utilized in tasks such as stable grasping [9], precise insertion [10, 11], and slip detection [12, 13, 14, 15]. However, most tactile manipulation work still requires a significant amount of human effort on real hardware systems for collecting data, cleverly building automatic resetting mechanisms, and carefully designing the learning strategy [10]. Such manual work can be time-consuming, expensive, and, more importantly, unsafe during policy exploration.
18
+
19
+ Due to its capability to replicate the real world with high fidelity and low cost, physics-based simulation has become a powerful recipe for learning robotic control policies [16, 17, 18, 19]. Previous work has demonstrated that policies can be efficiently learned in simulation and successfully transferred to real robots via proper sim-to-real techniques [20, 21, 22]. Despite the prevalence of simulation and the importance of tactile sensing in robotics, efficient physics-based simulation of dense tactile normal and shear force fields for robotic applications is still under-explored. Most popular simulators [18, 19] only support force-torque sensors which are attached to each robot link, producing contact force values at only a few points on each body. Although one can acquire a dense tactile force field by attaching many small cuboids to the robot body and querying the force sensor on each small cuboid in simulation [23, 24], the obtained tactile force values are usually sparse and are unable to match the uniform force distribution on a real elastic tactile sensor such as GelSlim [1]. While researchers have also tried simulating realistic tactile feedback via purely geometric methods [5, 25], such methods typically only compute the normal tactile force and cannot simulate the tactile effects in shear directions. On the other hand, the tactile shear forces have been successfully simulated via the finite element method (FEM) [26, 27, 28] or data-driven approaches [29], but these simulators suffer from expensive computation costs and cannot easily be used for data-hungry policy-learning approaches such as reinforcement learning (RL).
20
+
21
+ We present a novel tactile simulator that can efficiently and reliably simulate both normal and shear tactile force fields covering the entire contact surface. We build upon rigid body dynamics formulation and develop a fast penalty-based tactile model which can run at 1000 frames/s on a single core of Intel i7-9700 CPU. Our tactile model can reasonably approximate the soft contact nature of soft tactile sensor material such as the elastomer used in GelSlim [30], generate dense tactile force fields (e.g., the dense marker array on GelSlim), and is compatible with arbitrary tactile sensor spatial layout (i.e., flat plane, hemisphere, etc.). Furthermore, our compact tactile formulation is differentiable, which allows the simulator to provide fast analytical gradients for the entire dynamics chain. We conduct extensive experiments in simulation to demonstrate the capabilities of our tactile simulator, including policy learning with reinforcement learning algorithms and gradient-based algorithms. We also conduct a zero-shot sim-to-real experiment for a high-precision tactile-based peg-insertion task, demonstrating that our simulator provides realistic tactile simulation.
22
+
23
+ ## 2 Related Work
24
+
25
+ While there have been many physics-based simulators developed to simulate various types of robots, efficiently and reliably simulating dense tactile sensing fields is less explored. As mentioned above, most robotics simulators such as MuJoCo [18] and PyBullet [19] only support force-torque sensors that are attached to each robot link. While it is possible to augment these simulators with high-resolution tactile forces, they become computationally cumbersome. In order to acquire more realistic and dense tactile forces, Narang et al. [26, 27] and Ding et al. [28] use soft materials to model the tactile sensors and apply the finite element method (FEM) to simulate the deformation and force fields of the tactile sensors. Despite the high fidelity of the simulated tactile feedback, these simulators suffer from expensive computation costs and are primarily used to collect supervised tactile datasets rather than to learn policies, which are typically data-hungry and require fast simulation. Vision-based tactile sensors produce high-resolution tactile feedback. To simulate vision-based sensors, Wang et al. [25] and Church et al. [5] use PyBullet [19] and render the intersecting part between the object and the tactile manipulator as depth images, from which tactile information is generated. However, such purely geometry-based approaches cannot simulate the tactile effects in the shear directions, such as the marker displacements of GelSlim. Si and Yuan [29] compute the marker displacement field by presenting a superposition method to approximate the FEM dynamics. While they are able to simulate the tactile shear effects, the speed of the simulation is still slow, and no control tasks are demonstrated. Bi et al. [31] build an efficient simulation specialized for a tactile-based pole swing-up task with a customized vision-based tactile sensor, but the proposed technique is not readily extensible to other task types and tactile sensor types. Similarly to our work, Habib et al. [32] use a spring-mass-damper model to simulate tactile normal forces, and Moisio et al. [33] use a soft bristle deflection model for simulating the tactile forces. However, no control tasks are demonstrated with these simulators, and no gradient information is available. In contrast to these previous works, we present a generic simulator with analytical gradients for tactile forces by leveraging penalty-based rigid body dynamics, and we demonstrate that our simulation is efficient enough for policy learning and that the simulated tactile force field can be successfully used for a sim-to-real task on the high-resolution vision-based GelSlim sensor.
26
+
27
+ ## 3 Method
28
+
29
+ We now present our approach to simulate tactile forces for real-world tactile sensors. In §3.1, we introduce our flexible representation for tactile sensors. In §3.2-3.3, we present our penalty-based tactile model for simulation and derive the analytical gradients of the dynamics. In §3.4, we describe our intermediate tactile signal representation for the sim-to-real transfer of the policies.
30
+
31
+ ### 3.1 Tactile Sensor Representation
32
+
33
+ ![01963f8d-a143-78e4-9385-245f69466b19_2_1181_195_300_207_0.jpg](images/01963f8d-a143-78e4-9385-245f69466b19_2_1181_195_300_207_0.jpg)
34
+
35
+ Figure 1: Tactile Sensor Representation.
36
+
37
+ Each tactile point $i$ on a sensor pad is represented by a tuple $\left\langle {{\mathbf{B}}_{i},{\mathbf{E}}_{i},{\xi }_{i}}\right\rangle$ as shown in Fig. 1. ${\mathbf{B}}_{i}$ is the rigid body the tactile point is attached to, and ${\mathbf{E}}_{i} \in \mathrm{{SE}}\left( 3\right)$ is the position/orientation of the point in the local coordinate frame of the body, with the ${\mathbf{x}}_{i}$ and ${\mathbf{y}}_{i}$ axes in the shear-direction plane and the ${\mathbf{z}}_{i}$ axis along the normal tactile direction. (These axes are the same for all points for a planar sensor pad.) Finally, ${\xi }_{i}$ are the simulation parameters of the penalty-based tactile model, which will be introduced later in $§{3.2}$ . Our representation for tactile points is flexible, allowing us to specify any number of points in arbitrary geometry layouts on a robot, and each tactile sensor can have its individual configuration parameters.
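+ As a sketch, the per-point tuple $\left\langle {{\mathbf{B}}_{i},{\mathbf{E}}_{i},{\xi }_{i}}\right\rangle$ could be mirrored by a small data structure like the one below (field names, the 4x4 homogeneous transform for ${\mathbf{E}}_{i}$, and the default parameter values are our assumptions, not the paper's):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TactilePoint:
    body_name: str                 # B_i: rigid body the point is attached to
    local_pose: np.ndarray         # E_i: 4x4 transform in the body frame (z = contact normal)
    params: dict = field(default_factory=lambda: {
        "k_n": 1e4,   # contact stiffness (placeholder value)
        "k_d": 1e2,   # contact damping coefficient (placeholder)
        "k_t": 1e3,   # friction stiffness (placeholder)
        "mu": 0.5,    # coefficient of friction (placeholder)
    })                             # xi_i: per-point simulation parameters
```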
38
+
39
+ ### 3.2 Penalty-based Tactile Model
40
+
41
+ We use a penalty-based tactile model to characterize the force on each tactile point. For each point $\left\langle {{\mathbf{B}}_{i},{\mathbf{E}}_{i},{\xi }_{i}}\right\rangle$ , we use the following contact model [34] to obtain the contact force at the tactile point’s location represented in the local coordinate frame ${\mathbf{B}}_{i}$ . (For brevity, we drop the subscript $i$ .)
42
+
43
+ $$
44
+ {\mathbf{f}}_{n} = \left( {-{k}_{n} + {k}_{d}\dot{d}}\right) d\mathbf{n},\;{\mathbf{f}}_{t} = - \frac{{\mathbf{v}}_{t}}{\begin{Vmatrix}{\mathbf{v}}_{t}\end{Vmatrix}}\min \left( {{k}_{t}\begin{Vmatrix}{\mathbf{v}}_{t}\end{Vmatrix},\mu \begin{Vmatrix}{\mathbf{f}}_{n}\end{Vmatrix}}\right) , \tag{1}
45
+ $$
46
+
47
+ where ${\mathbf{f}}_{n}$ is the contact force at the tactile point along the contact normal direction $\mathbf{n}$ , and ${\mathbf{f}}_{t}$ is the contact friction force in the plane tangential to the contact normal direction. The scalar $d$ (nonpositive) is the penetration depth between the point and the collision object, and $\dot{d}$ is its time derivative. The vector ${\mathbf{v}}_{t}$ is the relative velocity at the contact point along the contact tangential direction. Scalars ${k}_{n},{k}_{d},{k}_{t},\mu$ are contact stiffness, contact damping coefficient, friction stiffness, and coefficient of friction respectively, and they together form the simulation parameters $\xi$ of the tactile point: i.e., for the ${i}^{th}$ tactile point, ${\xi }_{i} = \left\{ {{k}_{n}^{i},{k}_{d}^{i},{k}_{t}^{i},{\mu }^{i}}\right\}$ . After the frictional contact force is computed for each point as $\mathbf{f} = {\mathbf{f}}_{n} + {\mathbf{f}}_{t}$ , we transform this force into the local coordinate frame of the tactile point to acquire the desired shear and normal tactile force magnitudes:
48
+
49
+ $$
50
+ {T}_{sx} = {\mathbf{f}}^{\top }\mathbf{x},\;{T}_{sy} = {\mathbf{f}}^{\top }\mathbf{y},\;{T}_{n} = {\mathbf{f}}^{\top }\mathbf{z}, \tag{2}
51
+ $$
52
+
53
+ where $\mathbf{x},\mathbf{y},\mathbf{z}$ are the axes of frame $\mathbf{E}$ .
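+ A minimal NumPy sketch of Eqs. 1-2 under the stated conventions ($d \leq 0$, all vectors expressed in a common frame; the small epsilon guarding the zero-velocity case is our addition):

```python
import numpy as np

def tactile_force(d, d_dot, v_t, n, x, y, z, k_n, k_d, k_t, mu, eps=1e-9):
    """Penalty-based contact force at one tactile point (Eq. 1),
    projected onto the point's local axes (Eq. 2)."""
    f_n = (-k_n + k_d * d_dot) * d * n                         # normal force
    v_norm = np.linalg.norm(v_t)
    if v_norm > eps:
        f_t = -(v_t / v_norm) * min(k_t * v_norm, mu * np.linalg.norm(f_n))
    else:
        f_t = np.zeros_like(f_n)                               # no tangential motion
    f = f_n + f_t
    return f @ x, f @ y, f @ z                                 # T_sx, T_sy, T_n
```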
54
+
55
+ Our penalty-based tactile model can be integrated into any simulator as long as the required values, such as the world-frame location of the tactile points, the contact normal, the collision penetration depth and its time derivatives, can be acquired from the simulator. We implement our tactile model in C++ and integrate it into differentiable RedMax (DiffRedMax) [34, 35] since DiffRedMax is open-source and readily provides all the required information for our computation, and more importantly, its differentiability allows us to make our tactile simulation differentiable with a moderate amount of modifications to its backward gradients computation.
56
+
57
+ ### 3.3 Differentiable Tactile Simulation
58
+
59
+ Since we use an implicit time integration scheme for forward dynamics, the core step of gradients computation is to differentiate through the nonlinear equations of motion. We start by formulating a finite-horizon tactile-based policy optimization problem:
60
+
61
+ $$
62
+ \mathop{\operatorname{minimize}}\limits_{\mathbf{\theta }}\mathcal{L} = \mathop{\sum }\limits_{{t = 1}}^{H}{\mathcal{L}}_{t}\left( {{\mathbf{u}}_{t},{\mathbf{q}}_{t},{\mathbf{v}}_{t}\left( {\mathbf{q}}_{t}\right) }\right) \tag{3a}
63
+ $$
64
+
65
+ $$
66
+ \text{s.t.}\;g\left( {{\mathbf{q}}_{t - 1},{\dot{\mathbf{q}}}_{t - 1},{\mathbf{u}}_{t},{\mathbf{q}}_{t}}\right) = 0\;\text{(Equations of Motion)} \tag{3b}
67
+ $$
68
+
69
+
70
+
71
+ $$
72
+ {\mathbf{u}}_{t} = {\pi }_{\theta }\left( {{\widetilde{\mathbf{q}}}_{t - 1},{\widetilde{\mathbf{v}}}_{t - 1}\left( {\mathbf{q}}_{t - 1}\right) ,{T}_{t - 1}\left( {{\mathbf{q}}_{t - 1},{\dot{\mathbf{q}}}_{t - 1}}\right) }\right) .\;\text{(Policy Execution)} \tag{3c}
73
+ $$
74
+
75
+ Here, $H$ is the task horizon, ${\mathcal{L}}_{t}$ is a step-wise task-dependent reward function, $\mathbf{u}$ is the action (e.g., joint torque), $\mathbf{q}$ is the simulation state (i.e., joint angles), and $\mathbf{v}$ is the derived auxiliary simulation variables (e.g., fingertip positions) which themselves are a function of $\mathbf{q}$ . Eq. 3b describes the nonlinear equations of motion ( $§$ A.1). Eq. 3c represents the inference of the control policy ${\pi }_{\theta }$ to obtain the desired action given the partial observation of the simulation state $\widetilde{\mathbf{q}}$ , partial observation of the simulation computed variables $\widetilde{\mathbf{v}}$ , and the tactile force values $T$ from Eq. 2. We compute the gradients $d\mathcal{L}/{d\theta } = \left( {d\mathcal{L}/d{\mathbf{u}}_{t}}\right) \left( {d{\mathbf{u}}_{t}/{d\theta }}\right)$ for policy optimization. We embed our simulator as a differentiable layer into the PyTorch computation graph and use reverse mode differentiation to backward differentiate through dynamics time integration. The first gradient, $d\mathcal{L}/d{\mathbf{u}}_{t}$ , which includes the tactile derivatives, is derived analytically, as shown below. The second gradient, $d{\mathbf{u}}_{t}/{d\theta }$ , is computed by PyTorch’s auto-differentiation.
76
+
77
+ ![01963f8d-a143-78e4-9385-245f69466b19_3_313_209_1172_347_0.jpg](images/01963f8d-a143-78e4-9385-245f69466b19_3_313_209_1172_347_0.jpg)
78
+
79
+ Figure 2: Sim-to-Real Pipeline for Insertion Task (§4.5). Gray Box: During training, we convert the tactile force output from the simulator into the normalized flow map representation (shaded in green). Yellow Box: When executing the policy on a real system, we convert the sensor output into the same normalized flow map. This intermediate representation is then treated as the observation input to a neural network policy to output the pose adjustment for the next attempt. Here we only visualize the tactile output from one tactile sensor pad.
80
+
81
+ At each time step $t$ , we derive $\partial \mathcal{L}/\partial {\mathbf{u}}_{t}$ given the analytically computed gradients with respect to the system states, auxiliary variables, and tactile forces $\left( {\partial \mathcal{L}/\partial {\mathbf{q}}_{t},\partial \mathcal{L}/\partial {\mathbf{v}}_{t},\partial \mathcal{L}/\partial {T}_{t}}\right.$ ; see $§$ A.2-A.3):
82
+
83
+ $$
84
+ \frac{d\mathcal{L}}{d{\mathbf{u}}_{t}} = \underset{a}{\underbrace{\frac{\partial {\mathcal{L}}_{t}}{\partial {\mathbf{u}}_{t}}}} + \underset{b}{\underbrace{\left( \frac{\partial \mathcal{L}}{\partial {\mathbf{q}}_{t}} + \frac{\partial \mathcal{L}}{\partial {\mathbf{v}}_{t}}\frac{\partial {\mathbf{v}}_{t}}{\partial {\mathbf{q}}_{t}} + \frac{\partial \mathcal{L}}{\partial {T}_{t}}\left( \frac{\partial {T}_{t}}{\partial {\mathbf{q}}_{t}} + \frac{\partial {T}_{t}}{\partial {\dot{\mathbf{q}}}_{t}}\frac{\partial {\dot{\mathbf{q}}}_{t}}{\partial {\mathbf{q}}_{t}}\right) \right) }}\underset{-{A}^{-1}D}{\underbrace{\frac{\partial {\mathbf{q}}_{t}}{\partial {\mathbf{u}}_{t}}}}. \tag{4}
85
+ $$
86
+
87
+ The right-most derivative can be computed by applying the implicit function theorem on Eq. 3b, which gives us $\partial {\mathbf{q}}_{t}/\partial {\mathbf{u}}_{t} = - {\left( \partial g/\partial {\mathbf{q}}_{t}\right) }^{-1}\left( {\partial g/\partial {\mathbf{u}}_{t}}\right)$ . Writing this as $\partial {\mathbf{q}}_{t}/\partial {\mathbf{u}}_{t} = - {A}^{-1}D$ and combining with Eq. 4, we first solve the linear system ${A}^{\top }\mathbf{c} = - {\mathbf{b}}^{\top }$ for $\mathbf{c}$ , and then we compute the final gradient as $d\mathcal{L}/d{\mathbf{u}}_{t} = \mathbf{a} + \mathbf{c} \cdot D$ using the adjoint approach.
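+ A sketch of this adjoint step, with dense NumPy arrays standing in for the simulator's internal matrices ($\mathbf{a}$, $\mathbf{b}$, $A$, $D$ as in Eq. 4):

```python
import numpy as np

def action_gradient(a: np.ndarray, b: np.ndarray,
                    A: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Adjoint evaluation of dL/du_t = a + c @ D, where A^T c = -b^T."""
    c = np.linalg.solve(A.T, -b)   # one linear solve instead of forming A^{-1} D
    return a + c @ D
```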
88
+
89
+ ### 3.4 Normalized Tactile Flow Map for Sim-to-Real
90
+
91
+ We use the GelSlim 3.0 sensor [4], which utilizes small markers to track motions in the shear direction, to demonstrate the sim-to-real capability (§4.5). There is an unavoidable sim-to-real gap between our simulator emulating tactile forces and the physical GelSlim sensor that relies on imaging. In this section, we demonstrate how we overcome this gap by constructing a common intermediate tactile representation for policy input observation. We assume that the stiffness of the sensor along different shear directions is isotropic and that there exists a linear relationship between the displacement and the contact shear forces $\left( {T}_{sx}\right.$ and ${T}_{sy}$ from Eq. 2) at each tactile point. We connect these two different sensor output formats via a unitless normalized tactile flow map representation.
92
+
93
+ Specifically, we use the raw tactile sensor images from the past $k$ steps from the $n$ tactile sensor pads on a real robot as our policy observation ${T}_{\text{image }}^{\{ 1 : k,1 : n\} }$ . As shown in Fig. 2, we first detect and identify the marker positions in each image, and obtain the marker displacement field ${T}_{\text{displacement }}^{\{ 1 : k,1 : n\} } \in {\mathbb{R}}^{r \times c \times 2}$ by subtracting the marker positions in the rest configuration from their positions in the deformed configuration, where $r$ and $c$ are the rows and columns of the tactile marker array on each sensor pad, with each marker giving us the $x$ and $y$ displacement information. Then we normalize the displacement field so that the maximal length of the marker displacement across all tactile points and images is of unit length; i.e.,
94
+
95
+ $$
96
+ {T}_{\text{normalized }}^{\{ 1 : k,1 : n\} } = \frac{{T}_{\text{displacement }}^{\{ 1 : k,1 : n\} }}{\max \left( {\mathop{\max }\limits_{{k, n, r, c}}\left( \begin{Vmatrix}{{T}_{\text{displacement }}^{\{ k, n\} }\left( {r, c}\right) }\end{Vmatrix}\right) ,\epsilon }\right) }, \tag{5}
97
+ $$
98
+
99
+ where $\epsilon$ ensures that the output is zero when there is no displacement of the markers (i.e., no contact). We concatenate these flow maps into a single tensor ${T}_{\text{normalized }} \in {\mathbb{R}}^{k \times n \times r \times c \times 2}$ , which is our normalized tactile flow map representation. For the tactile shear forces $\left\{ {{T}_{sx}^{\{ 1 : k,1 : n\} },{T}_{sy}^{\{ 1 : k,1 : n\} }}\right\}$ acquired from the simulation, we conduct the same normalization process as in Eq. 5.
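+ A short sketch of the normalization in Eq. 5 (the $(k, n, r, c, 2)$ tensor layout follows the text; the concrete epsilon value is an assumption):

```python
import numpy as np

def normalize_flow(displacement: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Scale a (k, n, r, c, 2) marker-displacement tensor so that the largest
    displacement vector across all frames and sensor pads has unit length."""
    magnitudes = np.linalg.norm(displacement, axis=-1)   # (k, n, r, c)
    scale = max(float(magnitudes.max()), eps)            # denominator of Eq. 5
    return displacement / scale
```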
100
+
101
+ ![01963f8d-a143-78e4-9385-245f69466b19_4_315_205_1167_344_0.jpg](images/01963f8d-a143-78e4-9385-245f69466b19_4_315_205_1167_344_0.jpg)
102
+
103
+ Figure 3: (I) Ball rolling experiment: The tactile sensors are installed on the lower surface of the pad. The depth map of the tactile normal forces is shown in (b). The tactile force field is shown in (c) with the arrow denoting the shear forces and the color denoting the magnitude of the normal force. (II) Stable Grasp Task: The bar composed of 11 blocks with random densities (the deeper the color, the heavier the block). (a) An unsuccessful grasp results in rotational patterns in the tactile force field and (b) a successful grasp requires the gripper to adjust the grasp location to the center of mass of the bar. (III) D'Claw Cap Opening Task: The tactile sensors (white dots) are installed at the three hemisphere fingertips of the hand. We map each tactile point at one fingertip onto a 2D image plane and visualize the tactile forces field of three fingertips on the right.
104
+
105
+ Intuitively speaking, the normalized tactile flow map provides the directional information about the relative motion of the markers induced by the contact forces, and it also keeps the relative tactile load magnitude relationships among different sensors and different time steps so as to preserve the meaningful spatial and temporal information about the contact. For our sim-to-real experiments, we only use the shear directional information from the sensor, but the same technique can also be applied to the normal directional information via normalizing the depth map of the contact surface reconstructed from the GelSlim image [4] across different frames and different sensor pads.
106
+
107
+ ## 4 Experiments
108
+
109
+ We conduct extensive tactile-based experiments to demonstrate the capability of our approach. ${}^{1}$ We investigate the following questions: (§4.1, §4.2) Can our simulator reliably simulate the high-resolution tactile force field at a high speed for RL algorithms? (§4.3) Does the differentiability of our simulator provide advantages in policy learning? (§4.4) Is our tactile sensor representation flexible enough for sensors with arbitrary geometrical layouts? (§4.5) How does our simulated tactile force field compare to the tactile feedback from real sensors, and does our normalized tactile flow map representation help to transfer the policies learned in the simulator to a real robot?
110
+
111
+ ### 4.1 Speed and Reliability: High-Resolution Tactile Ball Rolling Experiment
112
+
113
+ We design a ball rolling experiment to show the efficacy of the tactile force field generated by our simulator and to test the simulation speed. The simulation setup is shown in Fig. 3(I). A high-resolution tactile pad $\left( {{200} \times {200}\text{markers}}\right)$ touches the marble ball and moves it around. The simulation step size $h = 5\mathrm{\;{ms}}$ , and we compute the tactile force field every 5 steps (i.e., ${40}\mathrm{\;{Hz}}$ ). Fig. 3(I) also shows the normal tactile force (represented by a depth map) and the tactile shear forces acquired from our simulator. For this example, our simulation runs at 1050 frames per second (FPS) on a single core of Intel Core i7-9700K CPU. The simulation speed can be further accelerated by simply parallelizing it across multiple CPU cores, as we do in the RL experiments.
114
+
115
+ ### 4.2 RL Training: Tactile-Based Stable Grasp Task
116
+
117
+ Our tactile simulator provides the shear force information on the contact surfaces, which is critical for many manipulation tasks. Inspired by the setup in [9], we show the usage of shear force information for control and the effectiveness of our tactile simulator in a parallel-jaw bar grasping task. As shown in Figure 3(II), the task requires a WSG-50 parallel-jaw gripper to stably grasp a bar with unknown mass distribution in fewer than 10 attempts. The gripper has two tactile sensors with a tactile marker resolution of ${13} \times {10}$. The bar is composed of 11 blocks, where the density of each block is randomized. The total mass of the bar ranges over $\left\lbrack {{45},{110}}\right\rbrack \mathrm{g}$. We consider a grasp to be a failure if the bar tilts more than 0.02 rad after the gripper grasps it.
118
+
119
+ ---
120
+
121
+ ${}^{1}$ See also the supplementary video.
122
+
123
+ ---
124
+
125
+ ![01963f8d-a143-78e4-9385-245f69466b19_5_305_236_683_304_0.jpg](images/01963f8d-a143-78e4-9385-245f69466b19_5_305_236_683_304_0.jpg)
126
+
127
+ Figure 4: Tactile-based box pushing task. (a) The goal of the gripper policy is to use its tactile feedback to push a box to a randomized target position/orientation. A time-varying external force is randomly applied to the box during the task. (b) The training curve for each policy variation is averaged over five independent runs with different random seeds.
128
+
129
+ <table><tr><td/><td>Pos. Error (m)</td><td>Ori. Error (°)</td></tr><tr><td>GD-Privileged</td><td>$0.037 \pm 0.002$</td><td>$0.043 \pm 0.003$</td></tr><tr><td>GD-Reduced</td><td>$0.126 \pm 0.009$</td><td>$0.255 \pm 0.021$</td></tr><tr><td>PPO-Tactile</td><td>$0.123 \pm 0.034$</td><td>$0.241 \pm 0.123$</td></tr><tr><td>GD-Tactile</td><td>$0.058 \pm 0.003$</td><td>$0.074 \pm 0.020$</td></tr></table>
130
+
131
+ Table 1: Metrics comparison on the box pushing task. We compute the final position/orientation errors of the best policy in each run and average the metrics over five runs for each policy variation. GD-Privileged gives a reference for the best achievable metrics; without the privileged state information of the box, our GD-Tactile achieves much lower position and orientation errors than the other two variations.
132
+
133
+ We use RL to train a control policy that determines the grasp location. The initial grasp location is the geometric center of the bar. Based on the tactile sensor readings (the only observation input to the policy), the policy outputs a delta change in the grasping location. The policy is a shallow CNN (Conv-ReLU-MaxPool-Conv-ReLU-FC-FC) that takes as input the two tactile sensor readings. We train the policies with PPO [36] using 32 parallel environments with ${20}\mathrm{K}$ environment steps in total. We train the policies with 3 different random seeds and test them 320 times. The success rate is ${93.2} \pm {1.6}\%$ . The average number of attempts taken to stably grasp the bars is 2.1, meaning the policy can compute the correct grasping location after a single failed attempt in most cases.
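+ A hedged PyTorch sketch of such a policy head (channel counts, kernel sizes, hidden width, and stacking the two ${13} \times {10}$ sensor readings into a 4-channel input are our assumptions; the paper only specifies the Conv-ReLU-MaxPool-Conv-ReLU-FC-FC structure):

```python
import torch
import torch.nn as nn

class GraspPolicy(nn.Module):
    """Shallow Conv-ReLU-MaxPool-Conv-ReLU-FC-FC policy over stacked tactile readings."""
    def __init__(self, in_channels: int = 4, action_dim: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 5, 64), nn.ReLU(),   # 13x10 input -> 6x5 after pooling
            nn.Linear(64, action_dim),              # delta change in grasp location
        )

    def forward(self, tactile: torch.Tensor) -> torch.Tensor:
        # tactile: (batch, 4, 13, 10) = two sensor pads x two shear channels
        return self.head(self.features(tactile))
```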
134
+
135
+ ### 4.3 Differentiability: Tactile-Based Box Pushing Task
136
+
137
+ In this experiment, we design a box pushing task similar to [5] to demonstrate how we can leverage the provided analytical gradients to help learn tactile-based control policies better and faster.
138
+
139
+ Task Specification As shown in Fig. 4(a), the task here is to use the same WSG-50 parallel jaw gripper as $\$ {4.2}$ (with only one finger kept) to push the box to a randomly sampled goal location and orientation. The initial position of the box is randomly disturbed. A random external force is applied continually on the box, which changes every ${0.25}\mathrm{\;s}$ . More details of the task are in $§\mathrm{B}{.3}$ .
140
+
141
+ Comparing Policy Learning Algorithms We train the control policies through four different combinations of learning algorithms and observation spaces. GD-Privileged: This variation uses the gradient-based optimizer Adam with the analytical policy gradients computed from our differentiable simulation. The policy observation contains all the privileged state information of the gripper, the box, and the goal. This policy provides an upper-bound performance reference. GD-Reduced: Similar to GD-Privileged, except that the observation space only contains the state information that can be acquired on a real system, such as the gripper state and the goal. GD-Tactile: In addition to the state information used in GD-Reduced, we also include the tactile sensor readings in the policy input. The policy is trained using the analytical policy gradients. PPO-Tactile: Similar to GD-Tactile, but the policy is trained with PPO.
142
+
143
+ All the policies are trained to maximize the same reward function (§B.3). We run each variation five times with different random seeds and plot the training curves averaged over the five runs in Fig. 4(b). We also randomly sample 300 goal poses, measure the final position and orientation errors between the box and the goal pose for the best policy from each run, and report the average metrics across five seeds in Table 1. The results show that when neither the state information of the box nor tactile information is available (i.e., GD-Reduced), the policy cannot reliably push the box to the target location, since the gripper has no way to tell when the box slips out of its control due to the random initial box position perturbation and the random external forces. With tactile feedback (i.e., PPO-Tactile and GD-Tactile), the gripper can keep itself in contact with the box and push it to the goal effectively. However, the high-dimensional tactile observation space results in higher computational cost for PPO, which relies on stochastic samples to estimate the policy gradients. In contrast, with the help of our differentiable tactile simulation, GD-Tactile makes use of the analytical policy gradients, leading to faster policy learning and better policy performance.
144
+
145
+ ### 4.4 Flexibility: Tactile Sensor Simulation on Curved Surfaces
146
+
147
+ To demonstrate that our method supports tactile sensors on curved surfaces, we train a D'Claw [37] tri-finger hand to open a cap on a bottle. We put the tactile sensors on the three rounded fingertips as shown in Fig. 3(III). The sensor layout on each fingertip is a hemisphere, and we use 302 evenly-spaced tactile markers. We build a coordinate mapping to project the marker positions on the 3D surface into a ${20} \times {20}$ 2D array (with some empty values around the boundary). Fig. 3(III) shows that our simulation can produce reliable and realistic tactile sensor readings on a rounded fingertip.
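+ One way such a mapping could be sketched (equirectangular binning by elevation and azimuth; the projection choice and grid size are illustrative, the paper does not specify them):

```python
import numpy as np

def hemisphere_to_grid(points: np.ndarray, rows: int = 20, cols: int = 20):
    """Map unit-hemisphere marker positions (N, 3), z >= 0, to (row, col)
    indices of a rows x cols 2D array via elevation/azimuth binning."""
    azimuth = np.arctan2(points[:, 1], points[:, 0])             # [-pi, pi)
    elevation = np.arccos(np.clip(points[:, 2], 0.0, 1.0))       # [0, pi/2]
    r = np.clip((elevation / (np.pi / 2) * rows).astype(int), 0, rows - 1)
    c = np.clip(((azimuth + np.pi) / (2 * np.pi) * cols).astype(int), 0, cols - 1)
    return r, c
```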
148
+
149
+ The task is to open a cap using the D'Claw hand. The position and the radius of the cap are randomized and unknown. There is also unknown random damping between the cap and the bottle. The task is considered a success if the cap is rotated by ${45}^{ \circ }$. The only observations that the policy receives are the angles of each joint, the fingertip positions, and the tactile sensor readings. This task is similar to how we open caps using only proprioceptive data and tactile feedback on the fingers, without knowing the exact size and location of the cap. The policy outputs the delta change in the joint angles. We again use PPO to train the policy (a shallow CNN) using 32 parallel environments. To show that the tactile sensors are useful in this task, we also train a baseline policy (a simple MLP) that only takes as input the joint angles and fingertip positions. With tactile sensor readings, policies learn significantly faster and achieve an 87.3% success rate, while policies without tactile sensor information only achieve a 59.7% success rate. More details about the task setup and results are provided in §B.4.
150
+
151
+ ### 4.5 Zero-Shot Sim-to-Real: Tactile RL Insertion Task
152
+
153
+ In this experiment, we show the quality of the simulated tactile feedback when compared to the tactile sensing obtained from the real system, and demonstrate how to do zero-shot transfer for the policies learned in simulation to the real robot via our normalized tactile flow map representation.
154
+
155
+ Task Specification We experiment on the tactile-RL insertion task similar to [10]. In this task, a gripper (same as in $\$ {4.2}$ ) is controlled to insert a cuboid object into a rectangle-shaped hole with a random initial pose misalignment. The insertion process is modeled as an episodic policy that iterates between open-loop insertion attempts followed by insertion pose adjustments (shown in Fig. 2). The robot has up to 15 pose correction attempts, and the robot only has access to tactile feedback from the sensors installed on both gripper fingers. For the real robot system, we use a 6-DoF ABB IRB 120 robot arm with a WSG-50 parallel jaw gripper. On each side of the gripper finger, we mount a GelSlim 3.0 tactile sensor that captures the tactile interaction between the fingers and the grasped object as a high resolution tactile image. More details are in $§\mathrm{B}{.5}$ .
156
+
157
+ This task is more challenging than the stable grasp task (§4.2) because it not only needs to recognize the rotational pattern of the tactile field when the object contacts the front/back edges of the hole, but also needs to leverage the different magnitude relationship of two sensors' outputs to tell whether the object hits the left or the right hole edge (Fig. 5). It becomes even more challenging when the object hits the hole at four corners, because the robot must recognize nuances in the tactile pattern to decide the insertion pose adjustment. Therefore, this task requires a high-quality simulated tactile force field in order to transfer the learned policy to the real system successfully.
158
+
159
+ Policy Learning via RL We train the control policies with PPO [36] for three types of misalignments. Rotation: The object is initialized at the hole's center and has a random rotation misalignment around the vertical axis. The action of the policy is the angle adjustment for the next insertion attempt. Translation: The object has a randomly initialized translation misalignment relative to the hole and no rotation misalignment. The action space in this case is two-dimensional, for the translational correction on the $x - y$ plane. Rotation & Translation: The object has both rotation and translation initial misalignments, and the action space of the policy is three-dimensional.
160
+
161
+ ![01963f8d-a143-78e4-9385-245f69466b19_7_308_146_1176_223_0.jpg](images/01963f8d-a143-78e4-9385-245f69466b19_7_308_146_1176_223_0.jpg)
162
+
163
+ Figure 5: Comparison of the normalized tactile flow maps. The flow maps in the top blue boxes are from simulation (with noise added), while the flow maps in the bottom red boxes are produced from the real GelSlim sensor. In each box, the two flow maps (left and right) are for two tactile pads on the two gripper fingers.
164
+
165
+ During training, we convert the simulated tactile force field into our normalized tactile flow map representation (§3.4), and treat the resulting tactile flow map as a ${13} \times {10}$ flow "image" with 4 channels ( 2 sensors and 2 shear components of tactile forces). We model the policy by a convolutional RNN to leverage more information from previous attempts. For better sim-to-real performance, we also apply the domain randomization technique [38] on contact parameters, tactile sensor parameters, grasp forces, grasp height and tactile readings, to increase the robustness of the learned policies. More details of policy learning are provided in $§\mathrm{B}.5$ .
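+ A sketch of how the per-step observation could be assembled from the two finger pads' normalized shear flows (the channel ordering and the additive-noise magnitude used for randomization are illustrative assumptions):

```python
import numpy as np

def make_flow_image(flow_left: np.ndarray, flow_right: np.ndarray,
                    noise_std: float = 0.02) -> np.ndarray:
    """Stack two (13, 10, 2) normalized shear-flow maps into a (4, 13, 10)
    observation 'image' and add Gaussian noise as a simple randomization."""
    obs = np.concatenate([flow_left, flow_right], axis=-1)   # (13, 10, 4)
    obs = np.transpose(obs, (2, 0, 1))                       # channels first
    return obs + np.random.normal(0.0, noise_std, size=obs.shape)
```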
166
+
167
+ Experiment Results We first qualitatively compare the normalized tactile flow maps generated by simulation and by real GelSlim sensors. We plot the normalized tactile flow maps at four representative contact configurations (i.e., object contacts at different edges of the hole) [10] in Fig. 5, which shows that our simulation is able to produce highly realistic patterns in those contact configurations. We then deploy the policies learned in simulation on real hardware and quantitatively test their zero-shot performance by conducting 100 insertion experiments under different initial pose misalignments. As reported in Table 2, our zero-shot policy transfer achieves ${100}\%$ success rates on the Rotation and Translation tasks. We also calculate the average number of pose corrections over the successful experiments: 1.53 for the Rotation task and 2.33 for the Translation task, which means that the policy is able to successfully infer the pose misalignment after just one or two failed attempts in most experiments. Given that the policies are trained purely with simulated tactile data, the high success rates indicate that our simulation produces normalized tactile flow maps with highly realistic tactile patterns and magnitudes, helping the gripper infer the exact adjustment. For the challenging Rotation & Translation task, our zero-shot transferred policy also achieves an 83% success rate with 4.81 pose corrections on average. For comparison, Dong et al. [10] achieve an ${89.6}\%$ success rate with an average of 5.42 pose adjustments for the cuboid object, using a policy trained directly on the real hardware from a pre-trained policy and with a carefully designed task curriculum. Our policy is trained from scratch only in simulation, without observing any real-world data.
168
+
169
+ <table><tr><td>TASK</td><td>Success</td><td>ATTEMPTS</td></tr><tr><td>$R$</td><td>100%</td><td>1.53</td></tr><tr><td>$T$</td><td>100%</td><td>2.33</td></tr><tr><td>$R\& T$</td><td>83%</td><td>4.81</td></tr></table>
170
+
171
+ Table 2: Zero-shot sim-to-real performance of the tactile RL insertion policies.
172
+
173
+ ## 5 Limitations and Future Work
174
+
175
+ We presented an efficient differentiable simulator that can handle dense tactile force fields with both normal and shear components. When the tactile pad is very soft (e.g., TacTip), its dynamics cannot be well approximated by our penalty-based approach. An interesting direction to explore is how to efficiently simulate such soft tactile sensors. We demonstrated with the box pushing task (§4.3) the potential advantage of differentiable simulators. However, how to effectively leverage analytical gradients for more complex tactile-based tasks is still an open question and it may require more advanced policy learning algorithms [39]. Furthermore, in the sim-to-real experiment (§4.5), the zero-shot success rate of Rotation & Translation is not perfect. This is probably due to some intricacies of the real hardware that are difficult to model in our simulation. We believe that further fine-tuning the learned policies with a few shots on the real hardware will likely lead to improved performance.
176
+
177
+ References
178
+
179
+ [1] E. Donlon, S. Dong, M. Liu, J. Li, E. Adelson, and A. Rodriguez. Gelslim: A high-resolution, compact, robust, and calibrated tactile-sensing finger. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1927-1934. IEEE, 2018.
180
+
181
+ [2] S. Sundaram, P. Kellnhofer, Y. Li, J.-Y. Zhu, A. Torralba, and W. Matusik. Learning the signatures of the human grasp using a scalable tactile glove. Nature, 569(7758):698-702, 2019.
182
+
183
+ [3] M. Lambeta, P.-W. Chou, S. Tian, B. Yang, B. Maloon, V. R. Most, D. Stroud, R. Santos, A. Byagowi, G. Kammerer, et al. Digit: A novel design for a low-cost compact high-resolution tactile sensor with application to in-hand manipulation. IEEE Robotics and Automation Letters, 5(3):3838-3845, 2020.
184
+
185
+ [4] I. Taylor, S. Dong, and A. Rodriguez. GelSlim 3.0: High-resolution measurement of shape, force and slip in a compact tactile-sensing finger. arXiv preprint arXiv:2103.12269, 2021.
186
+
187
+ [5] A. Church, J. Lloyd, R. Hadsell, and N. F. Lepora. Optical tactile sim-to-real policy transfer via real-to-sim tactile image translation. arXiv preprint arXiv:2106.08796, 2021.
188
+
189
+ [6] M. Bauza, A. Bronars, and A. Rodriguez. Tac2pose: Tactile object pose estimation from the first touch. arXiv preprint arXiv:2204.11701, 2022.
190
+
191
+ [7] E. Smith, R. Calandra, A. Romero, G. Gkioxari, D. Meger, J. Malik, and M. Drozdzal. 3d shape reconstruction from vision and touch. Advances in Neural Information Processing Systems, 33: 14193-14206, 2020.
192
+
193
+ [8] S. Suresh, Z. Si, J. G. Mangelson, W. Yuan, and M. Kaess. Shapemap 3-d: Efficient shape mapping through dense touch and vision.
194
+
195
+ [9] R. Kolamuri, Z. Si, Y. Zhang, A. Agarwal, and W. Yuan. Improving grasp stability with rotation measurement from tactile sensing. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6809-6816. IEEE, 2021.
196
+
197
+ [10] S. Dong, D. K. Jha, D. Romeres, S. Kim, D. Nikovski, and A. Rodriguez. Tactile-rl for insertion: Generalization to objects of unknown geometry. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6437-6443. IEEE, 2021.
198
+
199
+ [11] S. Kim and A. Rodriguez. Active extrinsic contact sensing: Application to general peg-in-hole insertion. arXiv preprint arXiv:2110.03555, 2021.
200
+
201
+ [12] W. Yuan, R. Li, M. A. Srinivasan, and E. H. Adelson. Measurement of shear and slip with a gelsight tactile sensor. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 304-311. IEEE, 2015.
202
+
203
+ [13] S. Dong, W. Yuan, and E. H. Adelson. Improved gelsight tactile sensor for measuring geometry and slip. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 137-144. IEEE, 2017.
204
+
205
+ [14] F. Veiga, H. Van Hoof, J. Peters, and T. Hermans. Stabilizing novel objects by learning to predict tactile slip. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5065-5072. IEEE, 2015.
206
+
207
+ [15] M. Li, Y. Bekiroglu, D. Kragic, and A. Billard. Learning of grasp adaptation through experience and tactile sensing. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3339-3346. IEEE, 2014.
208
+
209
+ [16] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym, 2016.
210
+
211
+ [17] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, et al. Isaac gym: High performance gpu based physics simulation for robot learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
212
+
213
+ [18] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026- 5033. IEEE, 2012.
214
+
215
+ [19] E. Coumans and Y. Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. 2016.
216
+
217
+ [20] O. M. Andrychowicz, B. Baker, M. Chociej, R. Józefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, and W. Zaremba. Learning dexterous in-hand manipulation. The International Journal of Robotics Research, 39(1):3-20, 2020.
218
+
219
+ [21] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun, and M. Hutter. Learning agile and dynamic motor skills for legged robots. Science Robotics, 4(26), 2019.
220
+
221
+ [22] J. Tan, T. Zhang, E. Coumans, A. Iscen, Y. Bai, D. Hafner, S. Bohez, and V. Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. In Robotics: Science and Systems, 2018.
222
+
223
+ [23] Z. Ding, Y.-Y. Tsai, W. W. Lee, and B. Huang. Sim-to-real transfer for robotic manipulation with tactile sensory. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6778-6785. IEEE, 2021.
224
+
225
+ [24] A. Melnik, L. Lach, M. Plappert, T. Korthals, R. Haschke, and H. Ritter. Using tactile sensing to improve the sample efficiency and performance of deep deterministic policy gradients for simulated in-hand manipulation tasks. Frontiers in Robotics and AI, page 57, 2021.
226
+
227
+ [25] S. Wang, M. Lambeta, P.-W. Chou, and R. Calandra. Tacto: A fast, flexible, and open-source simulator for high-resolution vision-based tactile sensors. IEEE Robotics and Automation Letters, 7(2):3930-3937, 2022.
228
+
229
+ [26] Y. S. Narang, K. Van Wyk, A. Mousavian, and D. Fox. Interpreting and predicting tactile signals via a physics-based and data-driven framework. arXiv preprint arXiv:2006.03777, 2020.
230
+
231
+ [27] Y. Narang, B. Sundaralingam, M. Macklin, A. Mousavian, and D. Fox. Sim-to-real for robotic tactile sensing via physics-based simulation and learned latent projections. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6444-6451. IEEE, 2021.
232
+
233
+ [28] Z. Ding, N. F. Lepora, and E. Johns. Sim-to-real transfer for optical tactile sensing. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 1639-1645. IEEE, 2020.
234
+
235
+ [29] Z. Si and W. Yuan. Taxim: An example-based simulation model for gelsight tactile sensors. IEEE Robotics and Automation Letters, 2022.
236
+
237
+ [30] D. Ma, E. Donlon, S. Dong, and A. Rodriguez. Dense tactile force estimation using gelslim and inverse fem. In 2019 International Conference on Robotics and Automation (ICRA), pages 5418-5424. IEEE, 2019.
238
+
239
+ [31] T. Bi, C. Sferrazza, and R. D'Andrea. Zero-shot sim-to-real transfer of tactile control policies for aggressive swing-up manipulation. IEEE Robotics and Automation Letters, 6(3):5761- 5768, 2021.
240
+
241
+ [32] A. Habib, I. Ranatunga, K. Shook, and D. O. Popa. Skinsim: A simulation environment for multimodal robot skin. In 2014 IEEE International Conference on Automation Science and Engineering (CASE), pages 1226-1231. IEEE, 2014.
242
+
243
+ [33] S. Moisio, B. León, P. Korkealaakso, and A. Morales. Model of tactile sensors using soft contacts and its application in robot grasping simulation. Robotics and Autonomous Systems, 61(1):1-12, 2013.
244
+
245
+ [34] J. Xu, T. Chen, L. Zlokapa, M. Foshey, W. Matusik, S. Sueda, and P. Agrawal. An End-to-End Differentiable Framework for Contact-Aware Robot Design. In Proceedings of Robotics: Science and Systems, Virtual, July 2021. doi:10.15607/RSS.2021.XVII.008.
246
+
247
+ [35] Y. Wang, N. J. Weidner, M. A. Baxter, Y. Hwang, D. M. Kaufman, and S. Sueda. REDMAX: Efficient & flexible approach for articulated dynamics. ACM Trans. Graph., 38(4), July 2019. ISSN 0730-0301. doi:10.1145/3306346.3322952. URL https://doi.org/10.1145/3306346.3322952.
248
+
249
+ [36] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
250
+
251
+ [37] M. Ahn, H. Zhu, K. Hartikainen, H. Ponte, A. Gupta, S. Levine, and V. Kumar. ROBEL: Robotics benchmarks for learning with low-cost robots. arXiv preprint arXiv:1909.11639, 2019.
252
+
253
+ [38] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 23-30. IEEE, 2017.
254
+
255
+ [39] J. Xu, V. Makoviychuk, Y. Narang, F. Ramos, W. Matusik, A. Garg, and M. Macklin. Accelerated policy learning with parallel differentiable simulation. In International Conference on Learning Representations, 2021.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/6BIffCl6gsM/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,200 @@
1
+ § EFFICIENT TACTILE SIMULATION WITH DIFFERENTIABILITY FOR ROBOTIC MANIPULATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Efficient simulation of tactile sensors can unlock new opportunities for learning tactile-based manipulation policies in simulation and then transferring the learned policy to real systems, but fast and reliable simulators for dense tactile normal and shear force fields are still under-explored. We present a novel approach for efficiently simulating both the normal and shear tactile force field covering the entire contact surface with an arbitrary tactile sensor spatial layout. Our simulator also provides analytical gradients of the tactile forces to accelerate policy learning. We conduct extensive simulation experiments to showcase our approach and demonstrate successful zero-shot sim-to-real transfer for a high-precision peg-insertion task with high-resolution vision-based GelSlim tactile sensors.
12
+
13
+ Keywords: Tactile Simulation, Tactile Manipulation, Sim-to-Real
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Just as humans heavily rely on rich and precise tactile cues for dexterous grasping and in-hand manipulation tasks, robots can also utilize tactile cues as an important source of sensing for interacting with the surrounding environment, especially when visual information is unavailable or occluded. With the recent development of various tactile sensors capable of generating dense normal or shear load information [1, 2, 3, 4], researchers have been exploring how to leverage this important mode of information for robotic manipulation tasks. With the dense tactile normal load field, the static spatial relation between the object and the robot manipulators can easily be inferred, which is useful for tasks such as edge following [5], pose estimation [6], and object reconstruction and recognition [7, 8]. On the other hand, the dense tactile shear force feedback more readily gives rich information about the dynamic tangential motions between the object and the manipulators, and thus can be utilized in tasks such as stable grasping [9], precise insertion [10, 11], and slip detection [12, 13, 14, 15]. However, most tactile manipulation work still requires a significant amount of human effort on real hardware systems for collecting data, cleverly building automatic resetting mechanisms, and carefully designing the learning strategy [10]. Such manual work can be time-consuming, expensive, and, more importantly, unsafe during policy exploration.
18
+
19
+ Due to its capability to replicate the real world with high fidelity and low cost, physics-based simulation has become a powerful recipe for learning robotic control policies [16, 17, 18, 19]. Previous work has demonstrated that policies can be efficiently learned in simulation and successfully transferred to real robots via proper sim-to-real techniques [20, 21, 22]. Despite the prevalence of simulation and the importance of tactile sensing in robotics, efficient physics-based simulation of dense tactile normal and shear force fields for robotic applications is still under-explored. Most popular simulators [18, 19] only support force-torque sensors which are attached to each robot link, producing contact force values at only a few points on each body. Although one can acquire a dense tactile force field by attaching many small cuboids to the robot body and querying the force sensor on each small cuboid in simulation [23, 24], the obtained tactile force values are usually sparse and are unable to match the uniform force distribution on a real elastic tactile sensor such as GelSlim [1]. While researchers have also tried simulating realistic tactile feedback via purely geometric methods [5, 25], such methods typically only compute the normal tactile force and cannot simulate the tactile effects in shear directions. On the other hand, the tactile shear forces have been successfully simulated via the finite element method (FEM) [26, 27, 28] or data-driven approaches [29], but these simulators suffer from expensive computation costs and cannot easily be used for data-hungry policy-learning approaches such as reinforcement learning (RL).
20
+
21
+ We present a novel tactile simulator that can efficiently and reliably simulate both normal and shear tactile force fields covering the entire contact surface. We build upon a rigid body dynamics formulation and develop a fast penalty-based tactile model which can run at 1000 frames/s on a single core of an Intel i7-9700 CPU. Our tactile model reasonably approximates the soft contact behavior of soft tactile sensor materials such as the elastomer used in GelSlim [30], generates dense tactile force fields (e.g., the dense marker array on GelSlim), and is compatible with arbitrary tactile sensor spatial layouts (e.g., flat planes, hemispheres). Furthermore, our compact tactile formulation is differentiable, which allows the simulator to provide fast analytical gradients for the entire dynamics chain. We conduct extensive experiments in simulation to demonstrate the capabilities of our tactile simulator, including policy learning with reinforcement learning algorithms and gradient-based algorithms. We also conduct a zero-shot sim-to-real experiment for a high-precision tactile-based peg-insertion task, demonstrating that our simulator provides realistic tactile simulation.
22
+
23
+ § 2 RELATED WORK
24
+
25
+ While there have been many physics-based simulators developed to simulate various types of robots, efficiently and reliably simulating dense tactile sensing fields is less explored. As mentioned above, most robotics simulators such as MuJoCo [18] and PyBullet [19] only support force-torque sensors that are attached to each robot link. While it is possible to augment these simulators with high-resolution tactile forces, they become computationally cumbersome. In order to acquire more realistic and dense tactile forces, Narang et al. [26, 27] and Ding et al. [28] use soft materials to model the tactile sensors and apply the finite element method (FEM) to simulate the deformation and force fields of the tactile sensors. Despite the high fidelity of the simulated tactile feedback, these simulators suffer from expensive computation costs and are primarily used to collect supervised tactile datasets rather than to learn policies, which are typically data-hungry and require fast simulation. Vision-based tactile sensors produce high-resolution tactile feedback. To simulate vision-based sensors, Wang et al. [25] and Church et al. [5] use PyBullet [19] and render the intersecting part between the object and the tactile manipulator as depth images, from which tactile information is generated. However, such purely geometry-based approaches cannot simulate the tactile effects in the shear directions, such as the marker displacements of GelSlim. Si and Yuan [29] present a superposition method that approximates the FEM dynamics to compute the marker displacement field. While they are able to simulate the tactile shear effects, the simulation is still slow, and no control tasks are demonstrated. Bi et al. [31] build an efficient simulation specialized for a tactile-based pole swing-up task with a customized vision-based tactile sensor, but the proposed technique is not readily extensible to other types of tasks and tactile sensors. Similarly to our work, Habib et al. [32] use a spring-mass-damper model to simulate tactile normal forces, and Moisio et al. [33] use a soft bristle deflection model for simulating the tactile forces. However, no control tasks are demonstrated to be learnable with these simulators, and no gradient information is available. In contrast to these previous works, we present a generic simulator with analytical gradients for tactile forces by leveraging penalty-based rigid body dynamics, and we demonstrate that our simulation is efficient enough for policy learning and that the simulated tactile force field can be successfully used for a sim-to-real task with the high-resolution vision-based GelSlim sensor.
26
+
27
+ § 3 METHOD
28
+
29
+ We now present our approach to simulate tactile forces for real-world tactile sensors. In §3.1, we introduce our flexible representation for tactile sensors. In §3.2-3.3, we present our penalty-based tactile model for simulation and derive the analytical gradients of the dynamics. In §3.4, we describe our intermediate tactile signal representation for the sim-to-real transfer of the policies.
30
+
31
+ § 3.1 TACTILE SENSOR REPRESENTATION
32
+
33
34
+
35
+ Figure 1: Tactile Sensor Representation.
36
+
37
+ Each tactile point $i$ on a sensor pad is represented by a tuple $\left\langle {{\mathbf{B}}_{i},{\mathbf{E}}_{i},{\xi }_{i}}\right\rangle$ as shown in Fig. 1. ${\mathbf{B}}_{i}$ is the rigid body the tactile point is attached to, and ${\mathbf{E}}_{i} \in \mathrm{{SE}}\left( 3\right)$ is the position/orientation of the point in the local coordinate frame of the body, with the ${\mathbf{x}}_{i}$ and ${\mathbf{y}}_{i}$ axes in the shear-direction plane and the ${\mathbf{z}}_{i}$ axis along the normal tactile direction. (These axes are the same for all points for a planar sensor pad.) Finally, ${\xi }_{i}$ are the simulation parameters of the penalty-based tactile model, which will be introduced later in $§{3.2}$ . Our representation for tactile points is flexible, allowing us to specify any number of points in arbitrary geometry layouts on a robot, and each tactile sensor can have its individual configuration parameters.
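To make this representation concrete, the sketch below shows one possible data structure for a tactile point; the class name, field names, planar-pad helper, and default parameter values are illustrative, not the simulator's actual API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TactilePoint:
    """One tactile point <B_i, E_i, xi_i> attached to a rigid body."""
    body_id: int            # B_i: index of the rigid body the point is attached to
    local_pose: np.ndarray  # E_i in SE(3): 4x4 transform in the body frame
                            # (x, y axes span the shear plane, z is the contact normal)
    k_n: float = 1e4        # contact stiffness
    k_d: float = 1e2        # contact damping coefficient
    k_t: float = 1e3        # friction stiffness
    mu: float = 0.5         # coefficient of friction

def make_planar_pad(body_id, rows, cols, spacing):
    """Lay out a rows x cols grid of tactile points on a flat sensor pad."""
    points = []
    for r in range(rows):
        for c in range(cols):
            pose = np.eye(4)
            pose[0, 3] = (c - (cols - 1) / 2) * spacing
            pose[1, 3] = (r - (rows - 1) / 2) * spacing
            points.append(TactilePoint(body_id=body_id, local_pose=pose))
    return points
```

A 13 x 10 planar pad like the one used in §4.2 could then be built with `make_planar_pad(body_id=0, rows=13, cols=10, spacing=0.002)`, where the spacing value is a placeholder.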
38
+
39
+ § 3.2 PENALTY-BASED TACTILE MODEL
40
+
41
+ We use a penalty-based tactile model to characterize the force on each tactile point. For each point $\left\langle {{\mathbf{B}}_{i},{\mathbf{E}}_{i},{\xi }_{i}}\right\rangle$ , we use the following contact model [34] to obtain the contact force at the tactile point’s location represented in the local coordinate frame ${\mathbf{B}}_{i}$ . (For brevity, we drop the subscript $i$ .)
42
+
43
+ $$
44
+ {\mathbf{f}}_{n} = \left( {-{k}_{n} + {k}_{d}\dot{d}}\right) d\mathbf{n},\;{\mathbf{f}}_{t} = - \frac{{\mathbf{v}}_{t}}{\begin{Vmatrix}{\mathbf{v}}_{t}\end{Vmatrix}}\min \left( {{k}_{t}\begin{Vmatrix}{\mathbf{v}}_{t}\end{Vmatrix},\mu \begin{Vmatrix}{\mathbf{f}}_{n}\end{Vmatrix}}\right) , \tag{1}
45
+ $$
46
+
47
+ where ${\mathbf{f}}_{n}$ is the contact force at the tactile point along the contact normal direction $\mathbf{n}$ , and ${\mathbf{f}}_{t}$ is the contact friction force in the plane tangential to the contact normal direction. The scalar $d$ (nonpositive) is the penetration depth between the point and the collision object, and $\dot{d}$ is its time derivative. The vector ${\mathbf{v}}_{t}$ is the relative velocity at the contact point along the contact tangential direction. Scalars ${k}_{n},{k}_{d},{k}_{t},\mu$ are contact stiffness, contact damping coefficient, friction stiffness, and coefficient of friction respectively, and they together form the simulation parameters $\xi$ of the tactile point: i.e., for the ${i}^{th}$ tactile point, ${\xi }_{i} = \left\{ {{k}_{n}^{i},{k}_{d}^{i},{k}_{t}^{i},{\mu }^{i}}\right\}$ . After the frictional contact force is computed for each point as $\mathbf{f} = {\mathbf{f}}_{n} + {\mathbf{f}}_{t}$ , we transform this force into the local coordinate frame of the tactile point to acquire the desired shear and normal tactile force magnitudes:
48
+
49
+ $$
50
+ {T}_{sx} = {\mathbf{f}}^{\top }\mathbf{x},\;{T}_{sy} = {\mathbf{f}}^{\top }\mathbf{y},\;{T}_{n} = {\mathbf{f}}^{\top }\mathbf{z}, \tag{2}
51
+ $$
52
+
53
+ where $\mathbf{x},\mathbf{y},\mathbf{z}$ are the axes of frame $\mathbf{E}$ .
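The following NumPy sketch implements Eqs. (1)-(2) for a single tactile point; the function signature is an assumption, since in the simulator the penetration depth, its time derivative, and the contact normal come from the collision-detection module.

```python
import numpy as np

def tactile_force(d, d_dot, v_t, n, E_axes, k_n, k_d, k_t, mu):
    """Penalty-based tactile force at one point (Eqs. 1-2).

    d      : penetration depth (non-positive scalar)
    d_dot  : time derivative of d
    v_t    : relative tangential velocity at the contact point, shape (3,)
    n      : contact normal (unit vector), shape (3,)
    E_axes : 3x3 matrix whose columns are the tactile frame axes x, y, z
    """
    # Normal force: f_n = (-k_n + k_d * d_dot) * d * n
    f_n = (-k_n + k_d * d_dot) * d * n
    # Tangential friction force, capped by the Coulomb limit mu * ||f_n||
    v_norm = np.linalg.norm(v_t)
    if v_norm > 1e-9:
        f_t = -(v_t / v_norm) * min(k_t * v_norm, mu * np.linalg.norm(f_n))
    else:
        f_t = np.zeros(3)
    f = f_n + f_t
    # Project onto the tactile frame to get shear and normal magnitudes (Eq. 2)
    T_sx, T_sy, T_n = E_axes[:, 0] @ f, E_axes[:, 1] @ f, E_axes[:, 2] @ f
    return T_sx, T_sy, T_n
```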
54
+
55
+ Our penalty-based tactile model can be integrated into any simulator as long as the required values, such as the world-frame location of the tactile points, the contact normal, the collision penetration depth and its time derivatives, can be acquired from the simulator. We implement our tactile model in C++ and integrate it into differentiable RedMax (DiffRedMax) [34, 35] since DiffRedMax is open-source and readily provides all the required information for our computation, and more importantly, its differentiability allows us to make our tactile simulation differentiable with a moderate amount of modifications to its backward gradients computation.
56
+
57
+ § 3.3 DIFFERENTIABLE TACTILE SIMULATION
58
+
59
+ Since we use an implicit time integration scheme for forward dynamics, the core step of gradients computation is to differentiate through the nonlinear equations of motion. We start by formulating a finite-horizon tactile-based policy optimization problem:
60
+
61
+ $$
62
+ \mathop{\operatorname{minimize}}\limits_{\mathbf{\theta }}\mathcal{L} = \mathop{\sum }\limits_{{t = 1}}^{H}{\mathcal{L}}_{t}\left( {{\mathbf{u}}_{t},{\mathbf{q}}_{t},{\mathbf{v}}_{t}\left( {\mathbf{q}}_{t}\right) }\right) \tag{3a}
63
+ $$
64
+
65
+ $$
66
+ \text{ s.t. }g\left( {{\mathbf{q}}_{t - 1},{\dot{\mathbf{q}}}_{t - 1},{\mathbf{u}}_{t},{\mathbf{q}}_{t}}\right) = 0\;\text{ (Equations of Motion) } \tag{3b}
67
+ $$
68
+
69
70
+
71
+ $$
72
+ {\mathbf{u}}_{t} = {\pi }_{\theta }\left( {{\widetilde{\mathbf{q}}}_{t - 1},{\widetilde{\mathbf{v}}}_{t - 1}\left( {\mathbf{q}}_{t - 1}\right) ,{T}_{t - 1}\left( {{\mathbf{q}}_{t - 1},{\dot{\mathbf{q}}}_{t - 1}}\right) }\right) .\;\text{ (Policy Execution) } \tag{3c}
73
+ $$
74
+
75
+ Here, $H$ is the task horizon, ${\mathcal{L}}_{t}$ is a step-wise task-dependent reward function, $\mathbf{u}$ is the action (e.g., joint torques), $\mathbf{q}$ is the simulation state (i.e., joint angles), and $\mathbf{v}$ denotes the derived auxiliary simulation variables (e.g., fingertip positions), which are themselves a function of $\mathbf{q}$ . Eq. 3b describes the nonlinear equations of motion (§A.1). Eq. 3c represents the inference of the control policy ${\pi }_{\theta }$ to obtain the desired action given the partial observation of the simulation state $\widetilde{\mathbf{q}}$ , the partial observation of the simulation-computed variables $\widetilde{\mathbf{v}}$ , and the tactile force values $T$ from Eq. 2. We compute the gradients $d\mathcal{L}/{d\theta } = \left( {d\mathcal{L}/d{\mathbf{u}}_{t}}\right) \left( {d{\mathbf{u}}_{t}/{d\theta }}\right)$ for policy optimization. We embed our simulator as a differentiable layer into the PyTorch computation graph and use reverse-mode differentiation to backpropagate through the dynamics time integration. The first gradient, $d\mathcal{L}/d{\mathbf{u}}_{t}$ , which includes the tactile derivatives, is derived analytically, as shown below. The second gradient, $d{\mathbf{u}}_{t}/{d\theta }$ , is computed by PyTorch's auto-differentiation.
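As a rough illustration of how the simulator can be embedded as a differentiable layer, the sketch below wraps one simulation step in a `torch.autograd.Function`. The `sim.step` and `sim.backward` calls are placeholders for the simulator's forward dynamics and analytical backward pass, not actual DiffRedMax function names.

```python
import torch

class SimStep(torch.autograd.Function):
    """One differentiable simulation step: action u_t -> (state q_t, tactile T_t)."""

    @staticmethod
    def forward(ctx, u, sim):
        # Hypothetical simulator call: integrates dynamics and returns the new
        # state and the tactile force field as NumPy arrays.
        q, T = sim.step(u.detach().cpu().numpy())
        ctx.sim = sim
        return torch.as_tensor(q), torch.as_tensor(T)

    @staticmethod
    def backward(ctx, grad_q, grad_T):
        # Hypothetical analytical backward pass: given dL/dq_t and dL/dT_t,
        # the simulator returns dL/du_t via the adjoint method.
        grad_u = ctx.sim.backward(grad_q.cpu().numpy(), grad_T.cpu().numpy())
        return torch.as_tensor(grad_u), None
```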
76
+
77
78
+
79
+ Figure 2: Sim-to-Real Pipeline for Insertion Task (§4.5). Gray Box: During training, we convert the tactile force output from the simulator into the normalized flow map representation (shaded in green). Yellow Box: When executing the policy on a real system, we convert the sensor output into the same normalized flow map. This intermediate representation is then treated as the observation input to a neural network policy to output the pose adjustment for the next attempt. Here we only visualize the tactile output from one tactile sensor pad.
80
+
81
+ At each time step $t$ , we derive $\partial \mathcal{L}/\partial {\mathbf{u}}_{t}$ given the analytically computed gradients with respect to the system states, auxiliary variables, and tactile forces ($\partial \mathcal{L}/\partial {\mathbf{q}}_{t}$ , $\partial \mathcal{L}/\partial {\mathbf{v}}_{t}$ , $\partial \mathcal{L}/\partial {T}_{t}$ ; see §A.2-A.3):
82
+
83
+ $$
84
+ \frac{d\mathcal{L}}{d{\mathbf{u}}_{t}} = \underset{a}{\underbrace{\frac{\partial {\mathcal{L}}_{t}}{\partial {\mathbf{u}}_{t}}}} + \underset{b}{\underbrace{\left( \frac{\partial \mathcal{L}}{\partial {\mathbf{q}}_{t}} + \frac{\partial \mathcal{L}}{\partial {\mathbf{v}}_{t}}\frac{\partial {\mathbf{v}}_{t}}{\partial {\mathbf{q}}_{t}} + \frac{\partial \mathcal{L}}{\partial {T}_{t}}\left( \frac{\partial {T}_{t}}{\partial {\mathbf{q}}_{t}} + \frac{\partial {T}_{t}}{\partial {\dot{\mathbf{q}}}_{t}}\frac{\partial {\dot{\mathbf{q}}}_{t}}{\partial {\mathbf{q}}_{t}}\right) \right) }}\underset{-{A}^{-1}D}{\underbrace{\frac{\partial {\mathbf{q}}_{t}}{\partial {\mathbf{u}}_{t}}}}. \tag{4}
85
+ $$
86
+
87
+ The right-most derivative can be computed by applying the implicit function theorem on Eq. 3b, which gives us $\partial {\mathbf{q}}_{t}/\partial {\mathbf{u}}_{t} = - {\left( \partial g/\partial {\mathbf{q}}_{t}\right) }^{-1}\left( {\partial g/\partial {\mathbf{u}}_{t}}\right)$ . Writing this as $\partial {\mathbf{q}}_{t}/\partial {\mathbf{u}}_{t} = - {A}^{-1}D$ and combining with Eq. 4, we first solve the linear system ${A}^{\top }\mathbf{c} = - {\mathbf{b}}^{\top }$ for $\mathbf{c}$ , and then we compute the final gradient as $d\mathcal{L}/d{\mathbf{u}}_{t} = \mathbf{a} + \mathbf{c} \cdot D$ using the adjoint approach.
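The adjoint computation described above can be sketched in a few lines of NumPy; here `A`, `D`, `a`, and `b` stand for the quantities labeled in Eq. (4) and would be assembled from the simulator's internal derivatives.

```python
import numpy as np

def action_gradient(A, D, a, b):
    """Adjoint gradient dL/du_t = a + c . D, where A^T c = -b^T (Eq. 4).

    A : dg/dq_t                              (n_q x n_q)
    D : dg/du_t                              (n_q x n_u)
    a : dL_t/du_t                            (n_u,)
    b : total gradient of L w.r.t. q_t,
        including the tactile terms          (n_q,)
    """
    c = np.linalg.solve(A.T, -b)   # solve A^T c = -b^T without forming A^{-1}
    return a + c @ D               # dL/du_t
```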
88
+
89
+ § 3.4 NORMALIZED TACTILE FLOW MAP FOR SIM-TO-REAL
90
+
91
+ We use the GelSlim 3.0 sensor [4], which utilizes small markers to track motions in the shear direction, to demonstrate the sim-to-real capability (§4.5). There is an unavoidable sim-to-real gap between our simulator emulating tactile forces and the physical GelSlim sensor that relies on imaging. In this section, we demonstrate how we overcome this gap by constructing a common intermediate tactile representation for policy input observation. We assume that the stiffness of the sensor along different shear directions is isotropic and that there exists a linear relationship between the displacement and the contact shear forces $\left( {T}_{sx}\right.$ and ${T}_{sy}$ from Eq. 2) at each tactile point. We connect these two different sensor output formats via a unitless normalized tactile flow map representation.
92
+
93
+ Specifically, we use the raw tactile sensor images from the past $k$ steps from the $n$ tactile sensor pads on a real robot as our policy observation ${T}_{\text{ image }}^{\{ 1 : k,1 : n\} }$ . As shown in Fig. 2, we first detect and identify the marker positions in each image, and obtain the marker displacement field ${T}_{\text{ displacement }}^{\{ 1 : k,1 : n\} } \in {\mathbb{R}}^{r \times c \times 2}$ by subtracting the marker positions in the rest configuration from their positions in the deformed configuration, where $r$ and $c$ are the rows and columns of the tactile marker array on each sensor pad, with each marker giving us the $x$ and $y$ displacement information. Then we normalize the displacement field so that the maximal length of the marker displacement across all tactile points and images is of unit length; i.e.,
94
+
95
+ $$
96
+ {T}_{\text{ normalized }}^{\{ 1 : k,1 : n\} } = \frac{{T}_{\text{ displacement }}^{\{ 1 : k,1 : n\} }}{\max \left( {\mathop{\max }\limits_{{k,n,r,c}}\left( \begin{Vmatrix}{{T}_{\text{ displacement }}^{\{ k,n\} }\left( {r,c}\right) }\end{Vmatrix}\right) ,\epsilon }\right) }, \tag{5}
97
+ $$
98
+
99
+ where $\epsilon$ ensures that the output is zero when there is no displacement on the markers (i.e., no contact). We concatenate these flow maps into a single tensor ${T}_{\text{ normalized }} \in {\mathbb{R}}^{k \times n \times r \times c \times 2}$ , which is our normalized tactile flow map representation. For the tactile shear forces $\left\{ {{T}_{sx}^{\{ 1 : k,1 : n\} },{T}_{sy}^{\{ 1 : k,1 : n\} }}\right\}$ acquired from the simulation, we conduct the same normalization process as Eq. 5.
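A small NumPy sketch of the normalization in Eq. (5), which is applied identically to real marker displacements and to simulated shear forces; the array layout follows the $(k, n, r, c, 2)$ convention used above.

```python
import numpy as np

def normalized_flow_map(displacement, eps=1e-6):
    """Normalize marker displacements (or simulated shear forces) as in Eq. (5).

    displacement: array of shape (k, n, r, c, 2) -- k time steps, n sensor pads,
                  r x c markers, with x/y displacement (or T_sx/T_sy) per marker.
    """
    # Per-marker displacement length, then the maximum over all steps/pads/markers
    lengths = np.linalg.norm(displacement, axis=-1)
    scale = max(lengths.max(), eps)   # eps keeps the no-contact case at zero
    return displacement / scale
```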
100
+
101
102
+
103
+ Figure 3: (I) Ball rolling experiment: The tactile sensors are installed on the lower surface of the pad. The depth map of the tactile normal forces is shown in (b). The tactile force field is shown in (c) with the arrow denoting the shear forces and the color denoting the magnitude of the normal force. (II) Stable Grasp Task: The bar composed of 11 blocks with random densities (the deeper the color, the heavier the block). (a) An unsuccessful grasp results in rotational patterns in the tactile force field and (b) a successful grasp requires the gripper to adjust the grasp location to the center of mass of the bar. (III) D'Claw Cap Opening Task: The tactile sensors (white dots) are installed at the three hemisphere fingertips of the hand. We map each tactile point at one fingertip onto a 2D image plane and visualize the tactile forces field of three fingertips on the right.
104
+
105
+ Intuitively speaking, the normalized tactile flow map provides the directional information about the relative motion of the markers induced by the contact forces, and it also keeps the relative tactile load magnitude relationships among different sensors and different time steps so as to preserve the meaningful spatial and temporal information about the contact. For our sim-to-real experiments, we only use the shear directional information from the sensor, but the same technique can also be applied to the normal directional information via normalizing the depth map of the contact surface reconstructed from the GelSlim image [4] across different frames and different sensor pads.
106
+
107
+ § 4 EXPERIMENTS
108
+
109
+ We conduct extensive tactile-based experiments to demonstrate the capability of our approach. ${}^{1}$ We investigate the following questions: (§4.1, §4.2) Can our simulator reliably simulate the high-resolution tactile force field at a high speed for RL algorithms? (§4.3) Does the differentiability of our simulator provide advantages in policy learning? (§4.4) Is our tactile sensor representation flexible enough for sensors with arbitrary geometrical layouts? (§4.5) How does our simulated tactile force field compare to the tactile feedback from real sensors, and does our normalized tactile flow map representation help to transfer the policies learned in the simulator to a real robot?
110
+
111
+ § 4.1 SPEED AND RELIABILITY: HIGH-RESOLUTION TACTILE BALL ROLLING EXPERIMENT
112
+
113
+ We design a ball rolling experiment to show the efficacy of the tactile force field generated by our simulator and to test the simulation speed. The simulation setup is shown in Fig. 3(I). A high-resolution tactile pad $\left( {{200} \times {200}\text{ markers }}\right)$ touches the marble ball and moves it around. The simulation step size $h = 5\mathrm{\;{ms}}$ , and we compute the tactile force field every 5 steps (i.e., ${40}\mathrm{\;{Hz}}$ ). Fig. 3(I) also shows the normal tactile force (represented by a depth map) and the tactile shear forces acquired from our simulator. For this example, our simulation runs at 1050 frames per second (FPS) on a single core of Intel Core i7-9700K CPU. The simulation speed can be further accelerated by simply parallelizing it across multiple CPU cores, as we do in the RL experiments.
114
+
115
+ § 4.2 RL TRAINING: TACTILE-BASED STABLE GRASP TASK
116
+
117
+ Our tactile simulator provides the shear force information on the contact surfaces, which is critical for many manipulation tasks. Inspired by the setup in [9], we show the usage of shear force information for control and the effectiveness of our tactile simulator in a parallel-jaw bar grasping task. As shown in Figure 3(II), the task requires a WSG-50 parallel-jaw gripper to stably grasp a bar with unknown mass distribution in fewer than 10 attempts. The gripper has two tactile sensors with a tactile marker resolution of ${13} \times {10}$ . The bar is composed of 11 blocks, where the density of each block is randomized. The total mass of the bar ranges from 45 to 110 g. We consider a grasp to be a failure if the bar tilts more than 0.02 rad after the gripper grasps it.
118
+
119
+ ${}^{1}$ See also the supplementary video.
120
+
121
122
+
123
+ Figure 4: Tactile-based box pushing task. (a) The goal of the gripper policy is to use its tactile feedback to push a box to a randomized target position/orientation. A time-varying external force is randomly applied to the box during the task. (b) The training curve for each policy variation is averaged over five independent runs with different random seeds.
124
+
125
+ | | Pos. Error | Ori. Error |
+ | --- | --- | --- |
+ | GD-Privileged | 0.037 ± 0.002 m | 0.043 ± 0.003° |
+ | GD-Reduced | 0.126 ± 0.009 m | 0.255 ± 0.021° |
+ | PPO-Tactile | 0.123 ± 0.034 m | 0.241 ± 0.123° |
+ | GD-Tactile | 0.058 ± 0.003 m | 0.074 ± 0.020° |
+
143
+ Table 1: Metrics comparison on the box pushing task. We compute the final position/orientation errors of the best policy in each run and average the metrics over five runs for each policy variation. GD-Privileged gives a reference for the best achievable metrics; without the privileged state information of the box, our GD-Tactile achieves much lower position and orientation errors than the other two variations.
144
+
145
+ We use RL to train a control policy that determines the grasp location. The initial grasp location is the geometric center of the bar. Based on the tactile sensor readings (the only observation input to the policy), the policy outputs a delta change in the grasping location. The policy is a shallow CNN (Conv-ReLU-MaxPool-Conv-ReLU-FC-FC) that takes as input the two tactile sensor readings. We train the policies with PPO [36] using 32 parallel environments with ${20}\mathrm{K}$ environment steps in total. We train the policies with 3 different random seeds and test them 320 times. The success rate is ${93.2} \pm {1.6}\%$ . The average number of attempts taken to stably grasp the bars is 2.1, meaning the policy can compute the correct grasping location after a single failed attempt in most cases.
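For reference, a minimal PyTorch sketch of a shallow CNN policy with the Conv-ReLU-MaxPool-Conv-ReLU-FC-FC structure described above; the channel widths and output dimension are illustrative choices rather than the exact hyperparameters used in our experiments.

```python
import torch
import torch.nn as nn

class GraspPolicy(nn.Module):
    """Maps the two 13x10 tactile readings to a delta change in grasp location."""

    def __init__(self, in_channels=4, action_dim=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 5, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, tactile):
        # tactile: (batch, channels, 13, 10) -- e.g., 2 sensor pads x 2 shear components
        return self.head(self.features(tactile))
```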
146
+
147
+ § 4.3 DIFFERENTIABILITY: TACTILE-BASED BOX PUSHING TASK
148
+
149
+ In this experiment, we design a box pushing task similar to [5] to demonstrate how we can leverage the provided analytical gradients to help learn tactile-based control policies better and faster.
150
+
151
+ Task Specification As shown in Fig. 4(a), the task here is to use the same WSG-50 parallel-jaw gripper as in §4.2 (with only one finger kept) to push the box to a randomly sampled goal location and orientation. The initial position of the box is randomly perturbed. A random external force is continually applied to the box and changes every ${0.25}\mathrm{\;s}$ . More details of the task are in §B.3.
152
+
153
+ Comparing Policy Learning Algorithms We train the control policies through four different combinations of learning algorithms and observation spaces. GD-Privileged: This variation uses the gradient-based optimizer Adam with the analytical policy gradients computed by our differentiable simulation. The policy observation contains all the privileged state information of the gripper, the box, and the goal. This policy provides an upper-bound performance reference. GD-Reduced: Similar to GD-Privileged, except that the observation space only contains the state information that can be acquired on a real system, such as the gripper state and the goal. GD-Tactile: In addition to the state information used in GD-Reduced, we also include the tactile sensor readings in the policy input. The policy is trained using the analytical policy gradients, as sketched below. PPO-Tactile: Similar to GD-Tactile, but the policy is trained by PPO.
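The sketch below illustrates how the GD variations can optimize the policy with Adam by backpropagating through the differentiable simulator; it reuses the hypothetical `SimStep` wrapper from the sketch in §3.3, and `sim.reset`, `sim.observe`, and `reward_fn` are likewise placeholders rather than actual simulator APIs.

```python
import torch

def train_gd_tactile(policy, sim, reward_fn, horizon, epochs=100, lr=1e-3):
    """Gradient-based policy optimization through the differentiable simulator."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        sim.reset()
        obs = torch.as_tensor(sim.observe(), dtype=torch.float32)
        total_reward = torch.zeros(())
        for _ in range(horizon):
            u = policy(obs)                            # action from current observation
            q, tactile = SimStep.apply(u, sim)         # differentiable simulation step
            total_reward = total_reward + reward_fn(q, tactile)
            obs = torch.cat([q, tactile.flatten()]).float()  # next obs stays in the graph
        loss = -total_reward                           # maximize the episode reward
        opt.zero_grad()
        loss.backward()                                # backprop through dynamics + tactile model
        opt.step()
```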
154
+
155
+ All the policies are trained to maximize the same reward function (§B.3). We run each variation five times with different random seeds and plot the training curves averaged over the five runs in Fig. 4(b). We also randomly sample 300 goal poses, measure the final position and orientation errors between the box and the goal pose for the best policy from each run, and report the metrics averaged across the five seeds in Table 1. The results show that when neither the state information of the box nor the tactile information is available (i.e., GD-Reduced), the policy cannot reliably push the box to the target location, since the gripper has no way to tell when the box escapes its control due to the random initial box position perturbation and the random external forces. With tactile feedback (i.e., PPO-Tactile and GD-Tactile), the gripper can keep itself in contact with the box and push it to the goal effectively. However, the high-dimensional tactile observation space results in higher computational cost for PPO, which relies on stochastic samples to estimate the policy gradients. In contrast, with the help of our differentiable tactile simulation, GD-Tactile makes use of the analytical policy gradients, leading to faster policy learning and better policy performance.
156
+
157
+ § 4.4 FLEXIBILITY: TACTILE SENSOR SIMULATION ON CURVED SURFACES
158
+
159
+ To demonstrate that our method supports tactile sensors on curved surfaces, we train a D'Claw [37] tri-finger hand to open a cap on a bottle. We put the tactile sensors on the three rounded fingertips as shown in Fig. 3(III). The sensor layout on each fingertip is a hemisphere, on which we place 302 evenly-spaced tactile markers. We build a coordinate mapping to project the marker positions on the 3D surface into a ${20} \times {20}$ 2D array (with some empty values around the boundary). Fig. 3(III) shows that our simulation can produce reliable and realistic tactile sensor readings on a rounded fingertip.
160
+
161
+ The task is to open a cap using the D'Claw hand. The position and the radius of the cap are randomized and unknown. There is also unknown random damping between the cap and the bottle. The task is considered a success if the cap is rotated by ${45}^{ \circ }$ . The only observations available to the policy are the angles of each joint, the fingertip positions, and the tactile sensor readings. This is similar to how we open caps using only proprioceptive data and tactile feedback on the fingers, without knowing the exact size and location of the cap. The policy outputs the delta change on the joint angles. We again use PPO to train the policy (a shallow CNN) using 32 parallel environments. To show that the tactile sensors are useful in this task, we also train a baseline policy (a simple MLP) that only takes as input the joint angles and fingertip positions. With tactile sensor readings, policies learn significantly faster and achieve an 87.3% success rate, while policies achieve only a 59.7% success rate when tactile sensor information is unavailable. More details about the task setup and results are provided in §B.4.
162
+
163
+ § 4.5 ZERO-SHOT SIM-TO-REAL: TACTILE RL INSERTION TASK
164
+
165
+ In this experiment, we show the quality of the simulated tactile feedback compared to the tactile sensing obtained from the real system, and demonstrate how policies learned in simulation can be transferred zero-shot to the real robot via our normalized tactile flow map representation.
166
+
167
+ Task Specification We experiment on the tactile-RL insertion task similar to [10]. In this task, a gripper (the same as in §4.2) is controlled to insert a cuboid object into a rectangular hole with a random initial pose misalignment. The insertion process is modeled as an episodic policy that iterates between open-loop insertion attempts followed by insertion pose adjustments (shown in Fig. 2). The robot has up to 15 pose correction attempts and only has access to the tactile feedback from the sensors installed on both gripper fingers. For the real robot system, we use a 6-DoF ABB IRB 120 robot arm with a WSG-50 parallel-jaw gripper. On each gripper finger, we mount a GelSlim 3.0 tactile sensor that captures the tactile interaction between the fingers and the grasped object as a high-resolution tactile image. More details are in §B.5.
168
+
169
+ This task is more challenging than the stable grasp task (§4.2) because the policy not only needs to recognize the rotational pattern of the tactile field when the object contacts the front/back edges of the hole, but also needs to leverage the relative magnitudes of the two sensors' outputs to tell whether the object hits the left or the right edge of the hole (Fig. 5). It becomes even more challenging when the object hits the hole at one of the four corners, because the robot must recognize nuances in the tactile pattern to decide the insertion pose adjustment. Therefore, this task requires a high-quality simulated tactile force field in order to transfer the learned policy to the real system successfully.
170
+
171
+ Policy Learning via RL We train the control policies with PPO [36] for three types of misalignments. Rotation: The object is initialized at the hole's center with a random rotation misalignment around the vertical axis. The action of the policy is the angle adjustment for the next insertion attempt. Translation: The object has a randomly initialized translation misalignment to the hole and no rotation misalignment. The action space in this case is two-dimensional for the translational correction in the $x - y$ plane. Rotation & Translation: The object has both rotation and translation initial misalignments. The action space of the policy is three-dimensional.
172
+
173
174
+
175
+ Figure 5: Comparison of the normalized tactile flow maps. The flow maps in the top blue boxes are from simulation (with noise added), while the flow maps in the bottom red boxes are produced from the real GelSlim sensor. In each box, the two flow maps (left and right) are for two tactile pads on the two gripper fingers.
176
+
177
+ During training, we convert the simulated tactile force field into our normalized tactile flow map representation (§3.4), and treat the resulting tactile flow map as a ${13} \times {10}$ flow "image" with 4 channels (2 sensors and 2 shear components of tactile forces). We model the policy by a convolutional RNN to leverage more information from previous attempts. For better sim-to-real performance, we also apply the domain randomization technique [38] on contact parameters, tactile sensor parameters, grasp forces, grasp height, and tactile readings to increase the robustness of the learned policies. More details of policy learning are provided in §B.5.
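The snippet below sketches the kind of per-episode domain randomization and observation noise used during training; the parameter names and ranges are purely illustrative and do not correspond to the exact values used in our experiments.

```python
import numpy as np

def sample_episode_params(rng):
    """Resample simulation parameters before each training episode (illustrative ranges)."""
    return {
        "k_n": rng.uniform(0.8e4, 1.2e4),           # contact stiffness
        "k_t": rng.uniform(0.8e3, 1.2e3),           # friction stiffness
        "mu": rng.uniform(0.3, 0.8),                # coefficient of friction
        "grasp_force": rng.uniform(5.0, 15.0),      # grasp force (N)
        "grasp_height": rng.uniform(-0.005, 0.005), # grasp height offset (m)
    }

def noisy_tactile(flow_map, rng, sigma=0.02):
    """Add Gaussian noise to simulated tactile readings before normalization."""
    return flow_map + rng.normal(scale=sigma, size=flow_map.shape)
```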
178
+
179
+ Experiment Results We first qualitatively compare the normalized tactile flow maps generated by simulation and by real GelSlim sensors. We plot the normalized tactile flow maps at four representative contact configurations (i.e., the object contacting different edges of the hole) [10] in Fig. 5, which shows that our simulation is able to produce highly realistic patterns in those contact configurations. We then deploy the policies learned in simulation on real hardware and quantitatively test their zero-shot performance by conducting 100 insertion experiments under different initial pose misalignments. As reported in Table 2, our zero-shot policy transfer achieves 100% success rates on the Rotation and Translation tasks. We also calculate the average number of pose corrections for the successful experiments. The average number of pose corrections is 1.53 for the Rotation task and 2.33 for the Translation task, which means that the policy is able to successfully infer the pose misalignment after just one or two failed attempts in most experiments. Given that the policies are trained purely with simulated tactile data, the high success rates indicate that our simulation produces normalized tactile flow maps with sufficiently realistic tactile patterns and magnitudes to help the gripper infer the exact adjustment. For the challenging Rotation & Translation task, our zero-shot transferred policy achieves an 83% success rate with 4.81 pose corrections on average. For comparison, Dong et al. [10] achieve an 89.6% success rate with 5.42 pose adjustments on average for the cuboid object, using a policy trained directly on the real hardware from a pre-trained policy and with a carefully designed task curriculum. Our policy is trained from scratch only in simulation without observing any real-world data.
180
+
181
+ | Task | Success | Attempts |
+ | --- | --- | --- |
+ | R | 100% | 1.53 |
+ | T | 100% | 2.33 |
+ | R & T | 83% | 4.81 |
+
196
+ Table 2: Zero-shot sim-to-real performance of the tactile RL insertion policies.
197
+
198
+ § 5 LIMITATIONS AND FUTURE WORK
199
+
200
+ We presented an efficient differentiable simulator that can handle dense tactile force fields with both normal and shear components. When the tactile pad is very soft (e.g., TacTip), its dynamics cannot be well approximated by our penalty-based approach. An interesting direction to explore is how to efficiently simulate such soft tactile sensors. We demonstrated with the box pushing task (§4.3) the potential advantage of differentiable simulators. However, how to effectively leverage analytical gradients for more complex tactile-based tasks is still an open question and it may require more advanced policy learning algorithms [39]. Furthermore, in the sim-to-real experiment (§4.5), the zero-shot success rate of Rotation & Translation is not perfect. This is probably due to some intricacies of the real hardware that are difficult to model in our simulation. We believe that further fine-tuning the learned policies with a few shots on the real hardware will likely lead to improved performance.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/6gEyD5zg0dt/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,229 @@
1
+ # USHER: Unbiased Sampling for Hindsight Experience Replay
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Dealing with sparse rewards is a long-standing challenge in reinforcement learning (RL). Hindsight Experience Replay (HER) addresses this problem by reusing failed trajectories for one goal as successful trajectories for another. This allows for both a minimum density of reward and for generalization across multiple goals. However, this strategy is known to result in a biased value function, as the update rule underestimates the likelihood of bad outcomes in a stochastic environment. We propose an asymptotically unbiased importance-sampling-based algorithm to address this problem without sacrificing performance on deterministic environments. We show its effectiveness on a range of robotic systems, including challenging high dimensional stochastic environments.
12
+
13
+ Keywords: Reinforcement Learning, Multi-goal reinforcement learning
14
+
15
+ ## 1 Introduction
16
+
17
+ In recent years, model-free reinforcement learning (RL) has become a popular approach in robotics. In particular, these methods stand out in their ability to learn near-optimal policies in high-dimensional spaces [1, 2, 3]. One popular extension of RL, multi-goal RL, allows trained robots to generalize to new tasks by conditioning on a goal parameter that determines the reward function. However, RL algorithms often struggle with tasks that involve sparse rewards, as these environments can require a very large amount of exploration to discover good solutions. Hindsight Experience Replay (HER) offers a solution to the sparse reward problem for multi-goal reinforcement learning [4]. HER treats failed attempts to reach one goal as successful attempts to reach another goal. This significantly reduces the difficulty of the exploration problem, because it guarantees a minimum density of reward and ensures that every trajectory receives useful feedback on how to reach some goal, even when the reward signal is sparse. However, these benefits come with a trade-off. While HER is unbiased in deterministic environments, it is known to be asymptotically biased in stochastic environments [5, 6]. This is because HER suffers from a survivorship bias. Since failed trajectories to one goal are treated as successful trajectories to another, it follows that HER only ever sees successful trajectories. If a random event can prevent the robot from reaching a desired goal $g$ , then HER will only sample $g$ as a goal when the event did not occur, leading it to significantly overestimate the likelihood of success and underestimate the likelihood of dangerous events. Practically, this manifests as a tendency for HER to want to "run red lights" and take risks.
18
+
19
+ ![01963f26-513b-71b2-a9be-554ef0b36646_0_1075_1457_404_230_0.jpg](images/01963f26-513b-71b2-a9be-554ef0b36646_0_1075_1457_404_230_0.jpg)
20
+
21
+ Figure 1: Q-values learned with HER (left), and Q-learning (right). A robot must navigate from the white circle to the black circle while avoiding obstacles (black squares) and risky areas (yellow triangle, ${75}\%$ chance of stopping the robot). The value function ranges from 1 (bright green) to 0 (bright red).
22
+
23
+ We present a concrete toy example of this problem in Figure 1, using tabular Q-learning. As we can see, HER assigns the direct path to the goal, and the square en route to the dangerous square, a much higher value than the correct Q-value because it underestimates the risk. HER learns to take the shorter, more dangerous path and achieves a lower success rate with lower reward than Q-learning.
24
+
25
+ As suggested in both [5] and [6], we derive an approach that allows us to use HER for sampling goals without suffering from these bias problems. We do this by separating the goal used for the reward function $\left( {g}_{r}\right)$ from the goal that is passed to the policy $\left( {g}_{\pi }\right)$ . The value function is conditioned on both goals, but only the reward goal is sampled using HER. This allows us to efficiently learn a successor representation over future achieved goals that we can use for importance sampling. We show that reweighting HER's mean squared Bellman error using this successor representation yields an unbiased estimate of the error. We call this method Unbiased Sampling for Hindsight Experience Replay (USHER). We demonstrate this approach on an array of stochastic environments, and find that it counteracts the bias shown by HER without compromising HER's sample efficiency or stability.
26
+
27
+ ## 2 Definitions
28
+
29
+ We define a multi-goal Markov Decision Process (MDP) as a seven-tuple: state space $S \subseteq {\mathbb{R}}^{n}$ , action space $A \subseteq {\mathbb{R}}^{m}$ , discount factor $\gamma \in \left\lbrack {0,1}\right\rbrack$ , transition probability distribution $P\left( {{s}^{\prime } \mid s, a}\right)$ (with density function $f\left( {{s}^{\prime } \mid s, a}\right)$ ) for $\left( {s, a,{s}^{\prime }}\right) \in S \times A \times S$ , goal space $G \subseteq {\mathbb{R}}^{l}$ , goal function $\phi : S \rightarrow G$ , and reward function $R : S \times G \rightarrow \mathbb{R}$ . A goal $g = \phi \left( s\right) \in G$ is a vector of goal-relevant features of state $s \in S$ . Goal function $\phi$ is defined a priori, depending on the task. A typical example of $\phi \left( s\right)$ is a low-dimensional vector that preserves only the entries of state-vector $s$ that are relevant to the goal. For instance, a mobile robot is tasked with moving to a particular location and arriving there at zero velocity. The state space of the robot would include velocities and orientations of each wheel, along with several other attributes that are needed to control the robot. The goal function would take the full high-dimensional state of the robot and return only its location and velocity. Therefore, each goal point corresponds to a subspace of the state space in this example. A special case is when $G = S$ and $g = \phi \left( s\right) ,\forall s \in S$ . Note that the immediate reward function $R\left( {s, g}\right)$ depends on a selected goal $g \in G$ . Every selection of $g \in G$ produces a valid single-goal MDP. We denote by $\pi$ a deterministic goal-conditioned policy, with $\pi \left( {s, g}\right) \in A$ for $s \in S, g \in G$ , and define ${Q}^{ * }\left( {s, a, g}\right)$ to be the unique optimal $Q$ -value of action $a \in A$ in state $s \in S$ , given selected goal $g \in G$ .
30
+
31
+ In the proposed algorithm and analysis, a policy $\pi$ can be evaluated according to a goal that is not necessarily the same goal used by the policy for selecting actions. Therefore, we use ${g}_{\pi }$ to refer to goals that are passed to policies, and ${g}_{r}$ to denote goals that are used to evaluate policies. Using these notations, the Bellman equation is re-written as
32
+
33
+ $$
34
+ {Q}^{\pi }\left( {s, a,{g}_{r},{g}_{\pi }}\right) = {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {R\left( {{s}^{\prime },{g}_{r}}\right) + \gamma {Q}^{\pi }\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }}\right) \mid s, a}\right\rbrack .
35
+ $$
36
+
37
+ Intuitively, this means "The expected cumulative discounted sum of rewards $R\left( {{s}^{\prime },{g}_{r}}\right)$ , when using policy $\pi \left( {{s}^{\prime },{g}_{\pi }}\right)$ ". The reason for this separation is that it allows us to more easily separate the problem of predicting future rewards from the problem of directing the policy. This makes it much easier to find an analytic expression for HER's bias. In particular, it lets us learn an expression for future goal occupancy that is conditioned only on ${g}_{\pi }$ and not ${g}_{r}$ , which will allow us to correct for the bias induced by hindsight sampling. Observe that when ${g}_{r} = {g}_{\pi }$ , this definition reduces to the Bellman equation for standard multi-goal RL. For standard $Q$ -learning, $\pi \left( {{s}^{\prime },{g}_{\pi }}\right)$ would be $\arg \mathop{\max }\limits_{{a}^{\prime }}Q\left( {{s}^{\prime },{a}^{\prime },{g}_{\pi },{g}_{\pi }}\right)$ , where both the policy and reward goals are set to ${g}_{\pi }$ .
38
+
39
+ HER. HER is a modification of the experience replay method employed by many deep RL algorithms [1, 4, 7, 8, 9]. Policy goal ${g}_{\pi }$ is sampled before each trajectory begins, and is not changed while generating the trajectory. After generating a trajectory, HER stores the entire trajectory in the replay buffer. When sampling transitions $\left( {s,{g}_{\pi }, a,{s}^{\prime }}\right)$ from the buffer, HER retains the original goal ${g}_{\pi }$ used in the policy that generated the trajectory, i.e., ${g}_{r} \leftarrow {g}_{\pi }$ , with probability $\frac{1}{k + 1}$ , where $k$ is a natural number (usually 4 or 8). The rest of the time, it replaces the original goal with $\phi \left( {s}_{t}\right)$ , i.e., ${g}_{r} \leftarrow \phi \left( {s}_{t}\right)$ , where ${s}_{t}$ is a randomly sampled state from the future trajectory that starts at $s$ . Goals that are selected from the future trajectory are referred to as "hindsight goals". HER then updates the Q-value and policy networks with $\left( {s,{g}_{\pi }, a,{s}^{\prime }, R\left( {{s}^{\prime },{g}_{r}}\right) }\right)$ .
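A minimal sketch of this goal relabeling when sampling a transition from a stored trajectory; `phi` is the goal function $\phi$ and `k` the ratio described above.

```python
import random

def her_reward_goal(states, g_pi, t, phi, k=4):
    """Sample the reward goal g_r for the transition at time t (HER relabeling).

    states : list of states s_0, ..., s_H from one stored trajectory
    g_pi   : the goal the policy was conditioned on when the trajectory was generated
    """
    if random.random() < 1.0 / (k + 1):
        return g_pi                                    # keep the original goal
    future_t = random.randint(t, len(states) - 1)      # pick a state from the future trajectory
    return phi(states[future_t])                       # and use its achieved goal as g_r
```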
40
+
41
+ ## 3 Related Work
42
+
43
+ Over the last few years, several methods have attempted to address the hindsight bias induced by HER. ARCHER attempts to decrease HER's hindsight bias by multiplying the loss on hindsight goals and non-hindsight goals by different weights, effectively upweighting the importance of hindsight goals [10]. MHER combines a multi-step Bellman equation with a bias/variance tradeoff equation to address the bias induced by the multi-step algorithm [11]. It is worth noting that MHER only attempts to address HER's off-policy bias, not its hindsight bias. A rigorous mathematical approach to HER's hindsight bias is taken in [5], by showing that HER is unbiased in deterministic environments, and that one of HER's key benefits is ensuring a minimum density of feedback from the reward function, even in high-dimensional spaces where the reward density would normally be extremely low. This reward-density problem is addressed by deriving a family of algorithms (called the $\delta$ -family, e.g. $\delta$ -DQN, $\delta$ -PPO), which guarantees a minimum reward density while still being unbiased. These methods do not use HER and have higher variance. The authors of [5] also state that the problem of formulating an unbiased form of HER is still open, and call for additional research into the problem.
44
+
45
+ Bias-Corrected HER (BHER) attempts to account for hindsight bias by analytically calculating importance-sampling hindsight goals [12]. Unfortunately, we believe that this derivation is incorrect. The proof in BHER relies on the assumption that the probability of a transition is independent of the goal $\left( {f\left( {{s}^{\prime } \mid s, a, g}\right) = f\left( {{s}^{\prime } \mid s, a}\right) }\right)$ . This assumption does not hold for HER, because it samples the goal from the future trajectory of $s$ , which depends on ${s}^{\prime }$ . Both our work and [5] give concrete counterexamples to this assumption. The following derivation provides an unbiased solution that does not rely on this flawed assumption.
46
+
47
+ ## 4 Derivation
48
+
49
+ Bias in HER. We derive the formula of the bias introduced by HER in estimating the Q-value function in the following. Let $s, a$ , and ${s}^{\prime }$ be random variables representing a state, action, and subsequent state in a given trajectory generated by policy $\pi$ with goal ${g}_{\pi }$ . Let $T$ be the number of time-steps remaining in the sub-trajectory that starts at $s$ . Let ${Q}_{HER}^{\pi }\left( {s, a,{g}_{r},{g}_{\pi }}\right)$ be the solution to the Bellman equation obtained using HER’s sampling process of reward goal ${g}_{r}$ (Sec. 2). This sampling process takes into account both ${g}_{\pi }$ and $T$ . Furthermore, ${g}_{r}$ is selected from the sub-trajectory that starts at $s$ with probability $\frac{k}{k + 1}$ . Therefore, the probability $f\left( {{s}^{\prime } \mid s, a,{g}_{r},{g}_{\pi }, T}\right)$ of the next state ${s}^{\prime }$ after knowing ${g}_{\pi },{g}_{r}$ and $T$ is generally not the same as $f\left( {{s}^{\prime } \mid s, a}\right)$ , which is what HER uses empirically to estimate ${Q}_{HER}^{\pi }\left( {s, a,{g}_{r},{g}_{\pi }}\right)$ . The following proposition quantifies this bias ratio.
50
+
51
+ Proposition 1. Suppose ${g}_{\pi }$ is fixed at the start of the trajectory, and ${g}_{r}$ is sampled using HER. Then for any ${s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T, f\left( {{s}^{\prime } \mid s, a,{g}_{r},{g}_{\pi }, T}\right) = \frac{f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T - 1}\right) }{f\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right) }f\left( {{s}^{\prime } \mid s, a}\right)$ .
52
+
53
+ Proof: Appendix (A.3). This identity presents an interesting corollary.
54
+
55
+ Corollary. Suppose ${Q}_{HER}^{\pi }\left( {s, a,{g}_{\pi },{g}_{\pi }}\right)$ satisfies the Bellman equation and the distribution of future achieved goals is absolutely continuous with respect to the goal space for all $s, a,{g}_{\pi }$ , and $\pi \left( {s,{g}_{\pi }}\right) =$ $\arg \mathop{\max }\limits_{{a}^{\prime }}{Q}_{HER}^{\pi }\left( {s,{a}^{\prime },{g}_{\pi },{g}_{\pi }}\right)$ . Then ${Q}_{HER}^{\pi }\left( {s, a,{g}_{\pi },{g}_{\pi }}\right) = {Q}^{ * }\left( {s, a,{g}_{\pi }}\right)$ , where ${Q}^{ * }$ is the optimal goal-conditioned $Q$ -function.
56
+
57
+ Proof: Appendix (A.4). While this establishes that the target value for ${Q}_{HER}^{\pi }$ is unbiased when ${g}_{r} = {g}_{\pi }$ , the function approximator for ${Q}_{HER}^{\pi }$ may still be biased, because values other than ${g}_{r} = {g}_{\pi }$ may influence it through the training of the network. Thus, it is possible that the learned ${Q}_{HER}^{\pi }$ value may remain biased until unacceptably large amounts of data are gathered. Additionally, since the density of data is discontinuous, ${Q}_{HER}^{\pi }$ may be discontinuous and difficult to approximate with a neural network. The rest of this section is devoted to developing an importance sampling method that is guaranteed to be asymptotically unbiased over the entire domain of $Q$ .
58
+
59
+ Unbiased HER. To estimate ${Q}^{\pi }\left( {s, a,{g}_{r},{g}_{\pi }}\right)$ , the solution to the unbiased Bellman equation, we use in this work the following expression,
60
+
61
+ $$
+ {Q}^{\pi }\left( {s, a,{g}_{r},{g}_{\pi }}\right) = {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {M\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right) \left( {R\left( {{s}^{\prime },{g}_{r}}\right) + \gamma {Q}^{\pi }\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }}\right) }\right) \mid s, a,{g}_{r},{g}_{\pi }, T}\right\rbrack ,
+ $$
+
+ where $M\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right)$ is a weight that cancels the bias ratio given in Proposition 1. Conditioning the expected value over ${s}^{\prime }$ on ${g}_{r},{g}_{\pi }$ , and $T$ frees us from the constraint that ${s}^{\prime }$ needs to be independent of ${g}_{r},{g}_{\pi }$ , and $T$ . This would allow us to select ${g}_{r}$ from the future trajectory of $s$ , as HER does. Note that conditioning on $T$ , the number of steps left in the trajectory, is necessary because the distribution of goals selected by HER is not time-independent.
62
+
63
+ Proposition 1 is useful for understanding what situations may cause HER to be biased, but unfortunately we cannot directly use it for importance sampling. Weighting samples by setting $M\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right)$ as $\frac{f\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right) }{f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T - 1}\right) }$ would require $f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T - 1}\right)$ to always be greater than 0 , which is not necessarily true. To solve this, we sample a mixture of hindsight goals and goals drawn uniformly from the goal space $G$ . Of the goals where ${g}_{r} \neq {g}_{\pi }$ , a fraction $\alpha$ of our goals will be drawn uniformly from the goal space, and the remaining $1 - \alpha$ will be drawn from the trajectory that follows $s$ . This results in the following identity,
64
+
65
+ Proposition 2. Let $W\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right) = \frac{f\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right) }{{\alpha f}\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right) + \left( {1 - \alpha }\right) f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T - 1}\right) }$ . Let $\alpha$ be a real value in the range $(0,1\rbrack$ . Then for any ${s}^{\prime }, s, a,{g}_{r},{g}_{\pi }$ ,
66
+
67
+ $$
68
+ f\left( {{s}^{\prime } \mid s, a}\right) = W\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right) \left( {{\alpha f}\left( {{s}^{\prime } \mid s, a}\right) + \left( {1 - \alpha }\right) f\left( {{s}^{\prime } \mid s, a,{g}_{\pi },{g}_{r}, T}\right) }\right)
69
+ $$
70
+
71
+ Furthermore, for any function $F$ of state ${s}^{\prime }$ ,
72
+
73
+ $$
74
+ {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {F\left( {s}^{\prime }\right) \mid s, a}\right\rbrack = \alpha {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right) F\left( {s}^{\prime }\right) \mid s, a}\right\rbrack
75
+ $$
76
+
77
+ $$
78
+ + \left( {1 - \alpha }\right) {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right) F\left( {s}^{\prime }\right) \mid s, a,{g}_{\pi },{g}_{r}, T}\right\rbrack . \tag{1}
79
+ $$
80
+
81
+ Proof: Appendix (A.5). We can now derive an unbiased variant of HER by applying Proposition 2 to Bellman equation.
82
+
83
+ Corollary. Suppose $\pi$ is a deterministic policy, ${g}_{r}$ is sampled from the previously mentioned mix of hindsight and uniform random goals, and ${g}_{\pi }$ is fixed at the start of the trajectory. Then for any ${s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T$ ,
84
+
85
+ $$
86
+ Q\left( {s, a,{g}_{r},{g}_{\pi }}\right) = \alpha {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right) \left( {R\left( {{s}^{\prime },{g}_{r}}\right) + {\gamma Q}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }}\right) }\right) \mid s, a}\right\rbrack
87
+ $$
88
+
89
+ $$
90
+ + \left( {1 - \alpha }\right) {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {{s}^{\prime }, s, a,{g}_{r},{g}_{\pi }, T}\right) \left( {R\left( {{s}^{\prime },{g}_{r}}\right) + {\gamma Q}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }}\right) }\right) \mid s, a,{g}_{\pi },{g}_{r}, T}\right\rbrack \tag{2}
91
+ $$
92
+
93
+ This corollary provides us with a simple method of estimating $Q\left( {s, a,{g}_{r},{g}_{\pi }}\right)$ using HER. A similar unbiased expression can be derived for estimating the gradient of the Bellman error with respect to the weights of a $Q$ -function network, instead of estimating $Q\left( {s, a,{g}_{r},{g}_{\pi }}\right)$ directly from samples.
94
+
95
+ Learning the future goal distribution. In order to use the proposed unbiased estimator of the Q-function with policy and reward goals, we need to compute weight $W$ defined in Proposition 2. This can be achieved by learning future goal distributions $f\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right)$ and $f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T - 1}\right)$ , which both correspond to the conditional probability that a given goal ${g}_{r}$ will be selected as a hindsight goal by HER. A technique for learning such long-term distributions, introduced in [5], consists in training a network ${f}_{\theta }$ , with parameters $\theta$ , to approximate the density of future goals $f\left( {{g}_{r} \mid s, a}\right)$ . The following estimator for the gradient is used in [5], sampling $\left( {s, a,{s}^{\prime }}\right)$ from transitions, and fixing ${g}_{r}$ at the start of each trajectory,
96
+
97
+ $$
98
+ {\nabla }_{\theta }\left( {{\mathbb{E}}_{s, a}\left\lbrack {-{f}_{\theta }\left( {s, a,\phi \left( s\right) }\right) }\right\rbrack + {\mathbb{E}}_{s, a,{s}^{\prime },{g}_{r}}\left\lbrack {{f}_{\theta }\left( {s, a,{g}_{r}}\right) \left( {{f}_{\theta }\left( {s, a,{g}_{r}}\right) - \gamma \mathop{\max }\limits_{{a}^{\prime }}{f}_{\text{target }}\left( {{s}^{\prime },{a}^{\prime },{g}_{r}}\right) }\right) }\right\rbrack }\right) ,
99
+ $$
100
+
101
+ wherein ${f}_{\text{target }}$ is a copy of ${f}_{\theta }$ that is updated separately. However, this method has significantly higher variance than HER [5]. We examine here the source of this variance and explain how separating the policy and reward goals allows us to avoid this variance problem. One issue with this method that can contribute to variance is that the gradient is separated into two parts: one in which the goal comes from the state $\left( {\phi \left( s\right) }\right)$ , and one in which the goal is sampled at the start of the trajectory $\left( {g}_{r}\right)$ . This is a problem because the gradient at the state-derived goals is strictly negative, while the gradient at the sampled goals is usually positive. In our experiments, this led to a pattern where the value at the state-derived goals would diverge unboundedly until a goal was sampled sufficiently close to make the value function crash back down to zero, and then the process would repeat. In other words, it is not guaranteed that ${f}_{\theta }$ converges to a fixed point for every finite set of trajectories.
102
+
103
+ One way to avoid this problem would be to have a fixed, non-zero chance that $\phi \left( s\right) = {g}_{r}$ , so that ${f}_{\theta }$ always converges to a fixed point given any set of training trajectories. We use HER to achieve this outcome. This is possible, unlike in [5], because we can use the importance sampling method derived above to sample a mixture of HER goals and goals independent of the state. Since HER draws from the future states of $s$ , observe that $f\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right)$ is in fact a successor representation [13], using an average-reward formulation (because the probability of selecting any of the next T states is uniform). Observe that we can define this probability as
104
+
105
+ $$
106
+ f\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right) = {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {\frac{1}{T}\delta \left( {{g}_{r} - \phi \left( {s}^{\prime }\right) }\right) + \left( {1 - \frac{1}{T}}\right) f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T - 1}\right) \mid s, a}\right\rbrack ,
107
+ $$
108
+
109
+ wherein $\delta$ is the Dirac delta function. This results in the loss gradient:
110
+
111
+ $$
112
+ {\nabla }_{\theta }\left( {{\mathbb{E}}_{s, a,{g}_{r},{g}_{\pi }, T}\left\lbrack {-\frac{2}{T}{f}_{\theta }\left( {s, a,\phi \left( s\right) ,{g}_{\pi }, T}\right) + {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {L\left( {s, a,{s}^{\prime },{g}_{r},{g}_{\pi }, T}\right) \mid s, a}\right\rbrack }\right\rbrack }\right) , \tag{3}
113
+ $$
114
+
115
+ $L\left( {s, a,{s}^{\prime },{g}_{r},{g}_{\pi }, T}\right) \triangleq {f}_{\theta }\left( {s, a,{g}_{r},{g}_{\pi }, T}\right) \left( {{f}_{\theta }\left( {s, a,{g}_{r},{g}_{\pi }, T}\right) - \gamma {f}_{\text{target }}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }, T - 1}\right) }\right) .$ While ${f}_{\theta }$ may not be a true probability density (because it may not integrate to 1), this does not matter for our purposes, as this factor will divide out when we calculate $W$ . Finally, we inject the formula in Equation 3 into Equation 1, while replacing $F$ with ${f}_{\theta }$ , to derive the following unbiased loss gradient,
116
+
117
+ $$
118
+ {\nabla }_{\theta }\mathcal{L} = {\nabla }_{\theta }\mathbb{E}\Big\lbrack  - \frac{2}{T}{f}_{\theta }\left( {s, a,\phi \left( s\right) ,{g}_{\pi }, T}\right)  + \alpha {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {s, a,{s}^{\prime },{g}_{r},{g}_{\pi }, T}\right) L\left( {s, a,{s}^{\prime },{g}_{r},{g}_{\pi }, T}\right)  \mid  s, a}\right\rbrack
119
+ $$
120
+
121
+ $$
122
+ + \left( {1 - \alpha }\right) {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {s, a,{s}^{\prime },{g}_{r},{g}_{\pi }, T}\right) L\left( {s, a,{s}^{\prime },{g}_{r},{g}_{\pi }, T}\right)  \mid  s, a,{g}_{r},{g}_{\pi }, T}\right\rbrack \Big\rbrack . \tag{4}
123
+ $$
124
+
125
+ Note that the values of $\alpha$ we use for learning ${Q}_{\theta }$ (Equation 2) and goal distribution densities ${f}_{\theta }$ (Equation 4) can be different. For discrete environments, we can learn the future distribution of the goal state using simple tabular methods, such as tabular successor representations.
126
+
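For the discrete case, a tabular successor-representation update suffices. The sketch below is our illustration (not the paper's code): it applies the recursion above along one recorded trajectory, drops the ${g}_{\pi }$ argument (which is fixed within a trajectory), and assumes `phi` maps a state to an integer goal index.

```python
import numpy as np
from collections import defaultdict

def update_tabular_sr(F, trajectory, phi, num_goals, lr=0.1):
    """Tabular sketch of f(g_r | s, a, T); F maps (s, a, T) to a vector over goals."""
    if not isinstance(F, defaultdict):
        F = defaultdict(lambda: np.zeros(num_goals), F)
    n = len(trajectory)  # trajectory is a list of (state, action) pairs
    for t in range(n - 1):
        s, a = trajectory[t]
        s_next, a_next = trajectory[t + 1]
        T = n - 1 - t  # steps remaining after (s, a)
        target = np.zeros(num_goals)
        target[phi(s_next)] += 1.0 / T  # the next achieved goal is chosen with probability 1/T
        if T > 1:
            target += (1.0 - 1.0 / T) * F[(s_next, a_next, T - 1)]
        F[(s, a, T)] += lr * (target - F[(s, a, T)])
    return F
```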
127
+ ## 5 Algorithm and Implementation
128
+
129
+ USHER may be implemented atop DDPG [7], SAC [8], TD3 [9], or any other continuous RL algorithm, as it only changes the loss function for training the goal-conditioned Q-value network. In our experiments, we use SAC as a base. USHER calculates the loss as follows: It samples a batch of transitions $\left( {s, a,{s}^{\prime },{g}_{\pi }, T}\right)$ from the replay buffer, along with two sets of goals: ${g}_{r}$ , which is drawn from the future distribution of $s$ , and ${g}_{r}^{\prime }$ , which is drawn uniformly from the goal space $G$ . For each set of goals, we calculate two values of $W$ . For weighting the $Q$ -values, we use ${\alpha }_{Q} = {0.01}$ , and for weighting the $f$ -values, we use ${\alpha }_{f} = {0.5}$ . We omit the full training loop here, as it is identical to standard HER except for the loss computation.
130
+
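As a small illustration of this sampling step (variable names and the stored-episode layout are our assumptions), each replay tuple at time $t$ receives one hindsight reward goal from its own future and one goal drawn independently of the trajectory:

```python
import random

def sample_reward_goals(episode_states, t, phi, sample_uniform_goal):
    """Sketch: g_r from the future of s_t, g_r' uniform over G (t is not the final step)."""
    g_r = phi(random.choice(episode_states[t + 1:]))  # hindsight reward goal
    g_r_prime = sample_uniform_goal()                 # independent of the trajectory
    return g_r, g_r_prime
```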
131
+ We make a few minor adjustments to the derived formula in order to reduce the variance induced by importance sampling and to ensure numerical stability. First, we clip the importance sampling fraction to the range $\left\lbrack {\frac{1}{1 + c},1 + c}\right\rbrack$ , where $c$ is a hyperparameter. This allows us to trade off hindsight bias against the variance induced by importance sampling. We find that performance is best for $c \approx {0.3}$ , and that the bias induced by clipping is negligible for $c > {1.0}$ in most environments. We apply this clipping to the importance sampling weights for $Q$ only $\left( {W}_{{\alpha }_{Q}}\right.$ and $\left. {W}_{{\alpha }_{Q}}^{\prime }\right)$ , not to those for $f$ . We approximate $W$ using ${f}_{\theta }$ for all experiments. Second, in order to reduce the total number of neural network evaluations, we made ${Q}_{\theta }$ and ${f}_{\theta }$ two heads of a single two-headed neural network. Although this choice conditions the value function on $T$ , note that by the definition of the $Q$ function, $Q\left( {s, a,{g}_{r},{g}_{\pi }}\right) = \mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{R}^{t} \mid s, a,{g}_{r},{g}_{\pi }}\right\rbrack =$ ${\mathbb{E}}_{T}\left\lbrack {\mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{R}^{t} \mid s, a,{g}_{r},{g}_{\pi }, T}\right\rbrack }\right\rbrack = {\mathbb{E}}_{T}\left\lbrack {Q\left( {s, a,{g}_{r},{g}_{\pi }, T}\right) }\right\rbrack$ . Thus, training the policy on the $T$-conditioned value function should result in the policy receiving the same gradient in expectation. We found that this trick did not noticeably impact the behavior of the $Q$ function or the policy.
132
+
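The clipping step itself is a one-liner; in the sketch below (illustrative only), it would be applied to the $Q$-value weights before they multiply the squared Bellman errors:

```python
import numpy as np

def clip_importance_weight(w, c=0.3):
    """Clip W to [1/(1+c), 1+c]; a larger c means less clipping bias but more variance."""
    return float(np.clip(w, 1.0 / (1.0 + c), 1.0 + c))
```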
133
+ ## 6 Experiments
134
+
135
+ Tasks. We evaluate the proposed algorithm in the following environments, illustrated in Figure 2. (1) Discrete. We first demonstrate our method in the discrete case in Figure 1 because it is analytically tractable and allows us to verify that USHER learns the correct value function. The environment used has a short, risky path that has a high chance of disabling the robot, and a longer risk-free path. The longer path has a higher expected reward, but we will demonstrate that HER mistakenly prefers the riskier path. (2) 4-Torus with Freeze. This environment was introduced in [5] to demonstrate HER's bias. Robots take steps on a continuous N-dimensional torus to reach a location. They also have a "freeze" action that will teleport them to a random location, but freeze them in place for the rest of the episode. HER learns to always take the freeze action. (3) Car with Random Noise and (4) Red Light. These two environments use the "Simple Car" dynamics described in [14]. In Car with Random Noise, the robot must navigate around walls while subject to Gaussian action noise [15]. In Red Light, the robot must learn to safely navigate a traffic light to reach its goal. (5) Fetch. These three environments task a robot arm with reaching a point, pushing an object, and sliding an object to a point outside of the robot's reach, respectively. We use these to demonstrate that USHER has comparable performance to HER on deterministic, high-dimensional environments. (6) Mobile Throwing Robot. We design a simulated robot arm on a mobile base, and task it with throwing a ball to a randomly selected location. There is also a ${50}\%$ chance of wind that can blow the ball off course. (7) Navigation on a physical mechanum robot. Lastly, we train a mechanum robot to navigate around obstacles to reach a goal and deploy it on a physical robot. The terrain contains a high friction zone that leads to the goal faster, but unreliably. Transfer was done by rolling out trajectories in simulation, and then deploying the same sequence of actions on the physical robot as open-loop control.
136
+
137
+ Algorithm 1 USHER
138
+
139
+ ---
140
+
141
+ Input: Replay Buffer $B$ , Two-headed Critic Network with weights $\theta$ , Actor Network with weights $w$ ,
142
+
143
+ Weighting Factor ${\alpha }_{Q}$ for $Q$ , Weighting Factor ${\alpha }_{f}$ for $f$ , Goal Space $G$ , Goal Function $\phi$ ;
144
+
145
+ Sample batch of tuples $\left( {s, a,{s}^{\prime },{g}_{\pi }, T}\right)$ from $B$ ; critic_loss $\leftarrow 0$ ; actor_loss $\leftarrow 0$ ;
146
+
147
+ for each sampled tuple $\left( {s, a,{s}^{\prime },{g}_{\pi }, T}\right) \in B$ do
148
+
149
+ Sample ${g}_{r}$ as ${g}_{\pi }$ , with probability $\frac{1}{k + 1}$ , and uniformly from the future trajectory that starts at $s$ with
150
+
151
+ probability $\frac{k}{k + 1}$ ( $k$ is a pre-defined number); Sample alternative goal ${g}_{r}^{\prime } \sim \operatorname{Uniform}\left( G\right)$ ;
152
+
153
+ target_q $= R\left( {\phi \left( {s}^{\prime }\right) ,{g}_{r}}\right) + {Q}_{\text{target }}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }, T - 1}\right)$ ; // ${Q}_{\text{target }}$ is a copy of ${Q}_{\theta }$
154
+
155
+ target_q${}^{\prime } = R\left( {\phi \left( {s}^{\prime }\right) ,{g}_{r}^{\prime }}\right) + {Q}_{\text{target }}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r}^{\prime },{g}_{\pi }, T - 1}\right)$ ;
156
+
157
+ Define $W\left( {s, a,{s}^{\prime },{g}_{\pi },{g}_{r}, T,\alpha }\right) = \frac{{f}_{\theta }\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right) }{\alpha {f}_{\theta }\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right) + \left( {1 - \alpha }\right) {f}_{\text{target }}\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T - 1}\right) }$ ;
158
+
159
+ Set ${W}_{{\alpha }_{f}} = \left( {1 - {\alpha }_{f}}\right) W\left( {s, a,{s}^{\prime },{g}_{\pi },{g}_{r}, T,{\alpha }_{f}}\right) ,{W}_{{\alpha }_{f}}^{\prime } = {\alpha }_{f}W\left( {s, a,{s}^{\prime },{g}_{\pi },{g}_{r}^{\prime }, T,{\alpha }_{f}}\right) ,{W}_{{\alpha }_{Q}} = (1 -$
160
+
161
+ $\left. {\alpha }_{Q}\right) W\left( {s, a,{s}^{\prime },{g}_{\pi },{g}_{r}, T,{\alpha }_{Q}}\right)$ , and ${W}_{{\alpha }_{Q}}^{\prime } = {\alpha }_{Q}W\left( {s, a,{s}^{\prime },{g}_{\pi },{g}_{r}^{\prime }, T,{\alpha }_{Q}}\right)$ ;
162
+
163
+ critic_loss $+ = \left( {{W}_{{\alpha }_{f}}\left( {{f}_{\theta }{\left( {g}_{r} \mid s, a,{g}_{\pi }, T\right) }^{2} - 2{f}_{\theta }\left( {{g}_{r} \mid s, a,{g}_{\pi }, T}\right) {f}_{\text{target }}\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T}\right) }\right) }\right)$
164
+
165
+ $+ \left( {{W}_{{\alpha }_{f}}^{\prime }\left( {{f}_{\theta }{\left( {g}_{r}^{\prime } \mid s, a,{g}_{\pi }, T\right) }^{2} - 2{f}_{\theta }\left( {{g}_{r}^{\prime } \mid s, a,{g}_{\pi }, T}\right) {f}_{\text{target }}\left( {{g}_{r}^{\prime } \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi }, T - 1}\right) }\right) }\right) - \left( {\frac{2}{T}{f}_{\theta }\left( {\phi \left( {s}^{\prime }\right) \mid }\right. }\right.$
166
+
167
+ $\left. \left. {s, a,{g}_{\pi }, T}\right) \right) + \left( {{W}_{{\alpha }_{Q}}{\left( {Q}_{\theta }\left( s, a,{g}_{r},{g}_{\pi }, T\right) - \text{ target }\_ q\right) }^{2}}\right) + \left( {{W}_{{\alpha }_{Q}}^{\prime }{\left( {Q}_{\theta }\left( s, a,{g}_{r}^{\prime },{g}_{\pi }, T\right) - \text{ target }\_ {q}^{\prime }\right) }^{2}}\right) ;$
168
+
169
+ actor_loss $+ = - {Q}_{\theta }\left( {s,\pi \left( {s,{g}_{\pi }}\right) ,{g}_{\pi },{g}_{\pi }, T}\right)$ ;
170
+
171
+ end for
172
+
173
+ Backprop critic_loss and update $\theta$ ; Backprop actor_loss and update $w$ ;
174
+
175
+ ---
176
+
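The per-tuple critic loss assembled in Algorithm 1 can be written compactly as below. This is a sketch in our own notation: `q`, `q_target`, `f`, and `f_target` are assumed scalar-valued callables for the two critic heads and their target copies, `pi` is the policy, and `R` is the reward function. Unlike the pseudocode above, the discount $\gamma$ is written explicitly in the Bellman targets, and the clipping of the $Q$ weights from Section 5 is omitted for brevity.

```python
def usher_critic_loss(s, a, s_next, g_pi, g_r, g_r_p, T, phi, pi,
                      q, q_target, f, f_target, R, gamma,
                      alpha_q=0.01, alpha_f=0.5):
    """Sketch of the per-tuple loss from Algorithm 1 (our notation)."""
    a_next = pi(s_next, g_pi)

    def w(goal, alpha):  # importance fraction W from Proposition 2, using the learned density
        num = f(goal, s, a, g_pi, T)
        den = alpha * num + (1 - alpha) * f_target(goal, s_next, a_next, g_pi, T - 1)
        return num / max(den, 1e-8)

    w_f, w_f_p = (1 - alpha_f) * w(g_r, alpha_f), alpha_f * w(g_r_p, alpha_f)
    w_q, w_q_p = (1 - alpha_q) * w(g_r, alpha_q), alpha_q * w(g_r_p, alpha_q)

    # density head: weighted TD terms plus the -2/T term for the next achieved goal
    loss = w_f * (f(g_r, s, a, g_pi, T) ** 2
                  - 2 * f(g_r, s, a, g_pi, T) * f_target(g_r, s_next, a_next, g_pi, T - 1))
    loss += w_f_p * (f(g_r_p, s, a, g_pi, T) ** 2
                     - 2 * f(g_r_p, s, a, g_pi, T) * f_target(g_r_p, s_next, a_next, g_pi, T - 1))
    loss -= (2.0 / T) * f(phi(s_next), s, a, g_pi, T)

    # Q head: importance-weighted squared Bellman errors for both reward goals
    tgt = R(phi(s_next), g_r) + gamma * q_target(s_next, a_next, g_r, g_pi, T - 1)
    tgt_p = R(phi(s_next), g_r_p) + gamma * q_target(s_next, a_next, g_r_p, g_pi, T - 1)
    loss += w_q * (q(s, a, g_r, g_pi, T) - tgt) ** 2
    loss += w_q_p * (q(s, a, g_r_p, g_pi, T) - tgt_p) ** 2
    return loss
```

Summing this quantity over the batch and adding the actor term $-{Q}_{\theta }\left( {s,\pi \left( {s,{g}_{\pi }}\right) ,{g}_{\pi },{g}_{\pi }, T}\right)$ reproduces the two losses that are backpropagated at the end of Algorithm 1.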
177
+ Evaluation Metrics. On all tasks, we report the success rate, the average reward, and the average bias at the starting state (measured as the average cumulative discounted reward minus the value at the starting state). For all experiments (except the Discrete environment and the physical and simulated mechanum robot environments), we report the sample mean and confidence interval, measured over five training runs for each method with randomly initialized seeds.
178
+
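For reference, the starting-state bias we report can be computed as in the short sketch below (our notation): the Monte-Carlo discounted return of a rollout minus the critic's value estimate at the initial state.

```python
def start_state_bias(rewards, v_start, gamma):
    """Discounted rollout return minus V(s_0); negative values indicate overestimation."""
    ret = sum(gamma ** t * r for t, r in enumerate(rewards))
    return ret - v_start
```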
179
+ Summary of Results. Figure 3 summarizes the results of the experiments. (1) Discrete. The value functions for USHER and Q-learning both quickly converge to the expected value, while HER overestimates the expected reward. USHER and Q-learning both learn to take the long, safe path, while HER takes the short, risky path. (2) 4-Torus with Freeze. HER learns to always take the freeze action and fails as a result, while USHER learns a successful policy. DDPG and $\delta$ -DDPG are unbiased in this environment, but DDPG struggles due to the difficulty of exploring in high dimensions, and $\delta$ -DDPG struggles with its variance. (3) Car with Random Noise. HER performs well for low noise values, but tends to overestimate values more as the noise level rises. USHER suffers significantly less from high noise levels than HER. (4) Red Light.
180
+
181
+ ![01963f26-513b-71b2-a9be-554ef0b36646_6_311_194_1150_966_0.jpg](images/01963f26-513b-71b2-a9be-554ef0b36646_6_311_194_1150_966_0.jpg)
182
+
183
+ Figure 2: Some of the robotic tasks used in the experiments. (a) Fetch reach. (b) Fetch push. (c) Fetch slide. (d, e) Simulated mechanum robot with random obstacle in two different positions. (f) Car with random noise. (g, h) Mobile Throwing Robot. (i) Physical Mechanum Robot.
184
+
185
+ HER learns to run the red light, while USHER waits for the red light to end. USHER achieves higher success rates and rewards. (5) Fetch. USHER matches HER's performance on all of the tested environments. This suggests that the importance sampling method does not significantly affect USHER's variance or sample efficiency in deterministic environments, where HER is known to be unbiased. It also significantly outperforms two other unbiased methods, DDPG and $\delta$ -DDPG, on FetchReach. (6) Mobile Throwing Robot. USHER matches HER's sample efficiency until the point where HER's bias causes its performance to suffer. USHER's performance, by contrast, continues to grow steadily to a 75% success rate, significantly better than HER's 55%. Interestingly, we find that USHER actually underestimates its reward here. This is likely because this environment is slightly non-Markovian: the wind is sampled at the beginning of each trajectory and then remains fixed, whereas USHER's proof of unbiasedness assumes that the environment is Markovian. It is interesting to note that USHER still performs well even when this property does not completely hold. (7) Navigation on a physical mechanum robot. We find that USHER outperforms HER. Both robots take the short path when it is open. When the path is blocked, HER repeatedly slams into the obstacle, whereas USHER runs into the obstacle once and then turns to go around it. This leads USHER to have a higher success rate. In simulation, HER's success rate is approximately 50%, while USHER's is near 100%. Due to the difficulty of transfer, USHER's performance drops on the physical robot, but it still outperforms HER: HER succeeded on 4/10 goals, while USHER succeeded on 6/10. Both methods succeed ${100}\%$ of the time on the unblocked path environment.
186
+
187
+ ## 7 Limitations
188
+
189
+ One limitation of this work is that we rely on the Markov assumption to derive our importance sampling weights. This means that while we can correctly estimate the value function for stochastic transitions, we cannot guarantee that the learned value is correct in environments with hidden information.
190
+
191
+ ![01963f26-513b-71b2-a9be-554ef0b36646_7_291_206_1218_1419_0.jpg](images/01963f26-513b-71b2-a9be-554ef0b36646_7_291_206_1218_1419_0.jpg)
192
+
193
+ Figure 3: Average reward, success rate, and Q-value bias of HER and USHER on the different tasks.
+
+ It is unclear whether this is actually an issue in practice, as USHER still outperforms HER on the non-Markovian environments we tested (such as the Throwing Bot). Additionally, USHER requires approximately 2.5 times as many neural net evaluations as HER does per batch update. This was not an issue in our experiments, as the cost of simulation and policy evaluations usually dominated the training time.
194
+
195
+ ## 8 Conclusion
196
+
197
+ We derive an unbiased importance sampling method for HER, and show that it is able to effectively counteract HER's hindsight bias. We find that addressing this bias leads to higher success rates and rewards in a range of stochastic environments. Furthermore, we introduce a mathematical framework to justify our method which can be used to examine the situations where HER is likely to experience significant bias. In future work, we hope to examine the finite-sample case, in order to better understand whether HER introduces a bias there, and if so, how it could be corrected.
198
+
199
+ References
200
+
201
+ [1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with Deep Reinforcement Learning. arXiv:1312.5602 [cs], Dec. 2013. URL http://arxiv.org/abs/1312.5602.arXiv: 1312.5602.
202
+
203
+ [2] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis. Mastering the game of Go without human knowledge. Nature, 550(7676):354-359, Oct. 2017. ISSN 1476-4687. doi:10.1038/nature24270. URL https://www.nature.com/ articles/nature24270.
204
+
205
+ [3] M. Laskin, A. Srinivas, and P. Abbeel. CURL: Contrastive Unsupervised Representations for Reinforcement Learning. In Proceedings of the 37th International Conference on Machine Learning, pages 5639-5650. PMLR, Nov. 2020. URL https://proceedings.mlr.press/ v119/laskin20a.html.
206
+
207
+ [4] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba. Hindsight Experience Replay. arXiv:1707.01495 [cs], Feb. 2018. URL http://arxiv.org/abs/1707.01495.arXiv: 1707.01495.
208
+
209
+ [5] L. Blier and Y. Ollivier. Unbiased Methods for Multi-Goal Reinforcement Learning. arXiv:2106.08863 [cs], June 2021. URL http://arxiv.org/abs/2106.08863.arXiv: 2106.08863.
210
+
211
+ [6] M. Plappert, M. Andrychowicz, A. Ray, B. McGrew, B. Baker, G. Powell, J. Schneider, J. Tobin, M. Chociej, P. Welinder, V. Kumar, and W. Zaremba. Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research. arXiv:1802.09464 [cs], Mar. 2018. URL http://arxiv.org/abs/1802.09464.arXiv: 1802.09464.
212
+
213
+ [7] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv:1509.02971 [cs, stat], July 2019. URL http://arxiv.org/abs/1509.02971.arXiv: 1509.02971.
214
+
215
+ [8] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv:1801.01290 [cs, stat], Aug. 2018. URL http://arxiv.org/abs/1801.01290.arXiv: 1801.01290.
216
+
217
+ [9] S. Fujimoto, H. van Hoof, and D. Meger. Addressing Function Approximation Error in Actor-Critic Methods. arXiv:1802.09477 [cs, stat], Oct. 2018. URL http://arxiv.org/abs/1802.09477.arXiv: 1802.09477.
218
+
219
+ [10] S. Lanka and T. Wu. ARCHER: Aggressive Rewards to Counter bias in Hindsight Experience Replay. arXiv:1809.02070 [cs, stat], Sept. 2018. URL http://arxiv.org/abs/1809.02070.arXiv: 1809.02070.
220
+
221
+ [11] R. Yang, J. Lyu, Y. Yang, J. Ya, F. Luo, D. Luo, L. Li, and X. Li. Bias-reduced Multi-step Hindsight Experience Replay for Efficient Multi-goal Reinforcement Learning. arXiv:2102.12962 [cs], June 2021. URL http://arxiv.org/abs/2102.12962.arXiv: 2102.12962.
222
+
223
+ [12] C. Bai, L. Wang, Y. Wang, Z. Wang, R. Zhao, C. Bai, and P. Liu. Addressing Hindsight Bias in Multigoal Reinforcement Learning. IEEE Transactions on Cybernetics, pages 1-14, 2021. ISSN 2168-2275. doi:10.1109/TCYB.2021.3107202.
224
+
225
+ [13] P. Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5:613, 1993.
226
+
227
+ [14] S. M. Lavalle. Planning Algorithms. Cambridge University Press, 2006. ISBN 0521862051.
228
+
229
+ [15] S. LaValle. Planning Algorithms. Cambridge University Press, University of Illinois, Urbana-Champaign, 2006.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/6gEyD5zg0dt/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,193 @@
 
1
+ § USHER: UNBIASED SAMPLING FOR HINDSIGHT EXPERIENCE REPLAY
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Dealing with sparse rewards is a long-standing challenge in reinforcement learning (RL). Hindsight Experience Replay (HER) addresses this problem by reusing failed trajectories for one goal as successful trajectories for another. This allows for both a minimum density of reward and for generalization across multiple goals. However, this strategy is known to result in a biased value function, as the update rule underestimates the likelihood of bad outcomes in a stochastic environment. We propose an asymptotically unbiased importance-sampling-based algorithm to address this problem without sacrificing performance on deterministic environments. We show its effectiveness on a range of robotic systems, including challenging high dimensional stochastic environments.
12
+
13
+ Keywords: Reinforcement Learning, Multi-goal reinforcement learning
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ In recent years, model-free reinforcement learning (RL) has become a popular approach in robotics. In particular, these methods stand out in their ability to learn near-optimal policies in high-dimensional spaces $\left\lbrack {1,2,3}\right\rbrack$ . One popular extension of RL, multi-goal ${RL}$ , allows trained robots to generalize to new tasks by conditioning on a goal parameter that determines the reward function. However, RL algorithms often struggle with tasks that involve sparse rewards, as these environments can require a very large amount of exploration to discover good solutions. Hindsight Experience Replay (HER) offers a solution to the sparse reward problem for multi-goal reinforcement learning [4]. HER treats failed attempts to reach one goal as successful attempts to reach another goal. This significantly reduces the difficulty of the exploration problem, because it guarantees a minimum density of reward and ensures that every trajectory receives useful feedback on how to reach some goal, even when the reward signal is sparse. However, these benefits come with a trade-off. While HER is unbiased in deterministic environments, it is known to be asymptotically biased in stochastic environments $\left\lbrack {5,6}\right\rbrack$ . This is because HER suffers from a survivorship bias. Since failed trajectories to one goal are treated as successful trajectories to another, it follows that HER only ever sees successful trajectories. If a random event can prevent the robot from reaching a desired goal $g$ , then HER will only sample $g$ as a goal when the event did not occur, leading it to significantly overestimate the likelihood of success and underestimate the likelihood of dangerous events. Practically, this manifests as a tendency for HER to want to "run red lights" and take risks.
18
+
19
+ < g r a p h i c s >
20
+
21
+ Figure 1: Q-values learned with HER (left), and Q-learning (right). A robot must navigate from the white circle to the black circle while avoiding obstacles (black squares) and risky areas (yellow triangle, ${75}\%$ chance of stopping the robot). The value function ranges from 1 (bright green) to 0 (bright red).
22
+
23
+ We present a concrete toy example of this problem in Figure 1, using tabular Q-learning. As we can see, HER values the direct path to the goal and the square en route to the dangerous square much higher than that path's correct Q-value because it underestimates the risk. HER learns to take the shorter, more dangerous path and achieves a lower success rate with lower reward than Q-learning.
24
+
25
+ As suggested in both [5] and [6], we derive an approach that allows us to use HER for sampling goals without suffering from these bias problems. We do this by separating the goal used for the reward function $\left( {g}_{r}\right)$ from the goal that is passed to the policy $\left( {g}_{\pi }\right)$ . The value function is conditioned on both goals, but only the reward goal is sampled using HER. This allows us to efficiently learn a successor representation over future achieved goals that we can use for importance sampling. We show that reweighting HER's mean squared Bellman error using this successor representation yields an unbiased estimate of the error. We call this method Unbiased Sampling for Hindsight Experience Replay (USHER). We demonstrate this approach on an array of stochastic environments, and find that it counteracts the bias shown by HER without compromising HER's sample efficiency or stability.
26
+
27
+ § 2 DEFINITIONS
28
+
29
+ We define a multi-goal Markov Decision Process (MDP) as a seven-tuple: state space $S \subseteq {\mathbb{R}}^{n}$ , action space $A \subseteq {\mathbb{R}}^{m}$ , discount factor $\gamma \in \left\lbrack {0,1}\right\rbrack$ , transition probability distribution $P\left( {{s}^{\prime } \mid s,a}\right)$ (with density function $f\left( {{s}^{\prime } \mid s,a}\right)$ ) for $\left( {s,a,{s}^{\prime }}\right) \in S \times A \times S$ , goal space $G \subseteq {\mathbb{R}}^{l}$ , goal function $\phi : S \rightarrow G$ , and reward function $R : S \times G \rightarrow \mathbb{R}$ . A goal $g = \phi \left( s\right) \in G$ is a vector of goal-relevant features of state $s \in S$ . Goal function $\phi$ is defined a priori, depending on the task. A typical example of $\phi \left( s\right)$ is a low-dimensional vector that preserves only the entries of state-vector $s$ that are relevant to the goal. For instance, a mobile robot is tasked with moving to a particular location and arriving there at zero velocity. The state space of the robot would include velocities and orientations of each wheel, along with several other attributes that are needed to control the robot. The goal function would take the full high-dimensional state of the robot and return only its location and velocity. Therefore, each goal point corresponds to a subspace of the state space in this example. A special case is when $G = S$ and $g = \phi \left( s\right) ,\forall s \in S$ . Note that the immediate reward function $R\left( {s,g}\right)$ depends on a selected goal $g \in G$ . Every selection of $g \in G$ produces a valid single-goal MDP. We denote by $\pi$ a deterministic goal-conditioned policy, with $\pi \left( {s,g}\right) \in A$ for $s \in S,g \in G$ , and define ${Q}^{ * }\left( {s,a,g}\right)$ to be the unique optimal $Q$ -value of action $a \in A$ in state $s \in S$ , given selected goal $g \in G$ .
30
+
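As an illustration of such a goal function (ours, not taken from the paper), $\phi$ for the mobile-robot example would simply keep the goal-relevant slice of the state, e.g. the planar position and velocity, and discard wheel orientations and other internal variables:

```python
import numpy as np

def phi(state):
    """Sketch: assumes state = [x, y, vx, vy, wheel and other internal variables, ...]."""
    return np.asarray(state[:4])  # goal = (position, velocity) only
```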
31
+ In the proposed algorithm and analysis, a policy $\pi$ can be evaluated according to a goal that is not necessarily the same goal used by the policy for selecting actions. Therefore, we use ${g}_{\pi }$ to refer to goals that are passed to policies, and ${g}_{r}$ to denote goals that are used to evaluate policies. Using these notations, the Bellman equation is re-written as
32
+
33
+ $$
34
+ {Q}^{\pi }\left( {s,a,{g}_{r},{g}_{\pi }}\right) = {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {R\left( {{s}^{\prime },{g}_{r}}\right) + \gamma {Q}^{\pi }\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }}\right) \mid s,a}\right\rbrack .
35
+ $$
36
+
37
+ Intuitively, this means "The expected cumulative discounted sum of rewards $R\left( {{s}^{\prime },{g}_{r}}\right)$ , when using policy $\pi \left( {{s}^{\prime },{g}_{\pi }}\right)$ ". The reason for this separation is that it allows us to more easily separate the problem of predicting future rewards from the problem of directing the policy. This makes it much easier to find an analytic expression for HER's bias. In particular, it lets us learn an expression for future goal occupancy that is conditioned only on ${g}_{\pi }$ and not ${g}_{r}$ , which will allow us to correct for the bias induced by hindsight sampling. Observe that when ${g}_{r} = {g}_{\pi }$ , this definition reduces to the Bellman equation for standard multi-goal RL. For standard $Q$ -learning, $\pi \left( {{s}^{\prime },{g}_{\pi }}\right)$ would be $\arg \mathop{\max }\limits_{{a}^{\prime }}Q\left( {{s}^{\prime },{a}^{\prime },{g}_{\pi },{g}_{\pi }}\right)$ , where both the policy and reward goals are set to ${g}_{\pi }$ .
38
+
39
+ HER. HER is a modification of the experience replay method employed by many deep RL algorithms $\left\lbrack {1,4,7,8,9}\right\rbrack$ . Policy goal ${g}_{\pi }$ is sampled before each trajectory begins, and is not changed while generating the trajectory. After generating a trajectory, HER stores the entire trajectory in the replay buffer. When sampling transitions $\left( {s,{g}_{\pi },a,{s}^{\prime }}\right)$ from the buffer, HER retains the original goal ${g}_{\pi }$ used in the policy that generated the trajectory, i.e., ${g}_{r} \leftarrow {g}_{\pi }$ , with probability $\frac{1}{k + 1}$ , where $k$ is a natural number (usually 4 or 8). The rest of the time, it replaces the original goal with $\phi \left( {s}_{t}\right)$ , i.e., ${g}_{r} \leftarrow \phi \left( {s}_{t}\right)$ , where ${s}_{t}$ is a randomly sampled state from the future trajectory that starts at $s$ . Goals that are selected from the future trajectory are referred to as "hindsight goals". HER then updates the Q-value and policy networks with $\left( {s,{g}_{\pi },a,{s}^{\prime },R\left( {{s}^{\prime },{g}_{r}}\right) }\right)$ .
40
+
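The relabelling step can be sketched as follows (our illustration; the threshold `eps` and the 0/1 sparse reward are assumed conventions rather than the paper's exact reward function):

```python
import numpy as np

def her_relabel(episode_states, t, g_pi, phi, k=4, eps=0.05, rng=np.random):
    """Sketch of HER's goal replacement for the transition at time t (t is not the final step)."""
    if rng.random() < 1.0 / (k + 1):
        g_r = g_pi  # keep the original policy goal
    else:
        idx = rng.randint(t, len(episode_states))  # a state from the future trajectory of s_t
        g_r = phi(episode_states[idx])
    reward = float(np.linalg.norm(phi(episode_states[t + 1]) - g_r) < eps)  # recomputed for g_r
    return g_r, reward
```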
41
+ § 3 RELATED WORK
42
+
43
+ Over the last few years, several methods have attempted to address the hindsight bias induced by HER. ARCHER attempts to decrease HER's hindsight bias by multiplying the loss on hindsight goals and non-hindsight goals by different weights, effectively upweighting the importance of hindsight goals [10]. MHER combines a multi-step Bellman equation with a bias/variance tradeoff equation to address the bias induced by the multi-step algorithm [11]. It is worth noting that MHER only attempts to address HER's off-policy bias, not its hindsight bias. A rigorous mathematical approach to HER's hindsight bias is taken in [5], by showing that HER is unbiased in deterministic environments, and that one of HER's key benefits is ensuring a minimum density of feedback from the reward function, even in high-dimensional spaces where the reward density would normally be extremely low. This reward-density problem is addressed by deriving a family of algorithms (called the $\delta$ -family, e.g. $\delta$ -DQN, $\delta$ -PPO), which guarantees a minimum reward density while still being unbiased. These methods do not use HER and have higher variance. The authors of [5] also state that the problem of formulating an unbiased form of HER is still open, and call for additional research into the problem.
44
+
45
+ Bias-Corrected HER (BHER) attempts to account for hindsight bias by analytically calculating importance-sampling hindsight goals [12]. Unfortunately, we believe that this derivation is incorrect. The proof in BHER relies on the assumption that the probability of a transition is independent of the goal $\left( {f\left( {{s}^{\prime } \mid s,a,g}\right) = f\left( {{s}^{\prime } \mid s,a}\right) }\right)$ . This assumption does not hold for HER, because it samples the goal from the future trajectory of $s$ , which depends on ${s}^{\prime }$ . Both our work and [5] give concrete counterexamples to this assumption. The following derivation provides an unbiased solution that does not rely on this flawed assumption.
46
+
47
+ § 4 DERIVATION
48
+
49
+ Bias in HER. We derive the formula of the bias introduced by HER in estimating the Q-value function in the following. Let $s,a$ , and ${s}^{\prime }$ be random variables representing a state, action, and subsequent state in a given trajectory generated by policy $\pi$ with goal ${g}_{\pi }$ . Let $T$ be the number of time-steps remaining in the sub-trajectory that starts at $s$ . Let ${Q}_{HER}^{\pi }\left( {s,a,{g}_{r},{g}_{\pi }}\right)$ be the solution to the Bellman equation obtained using HER’s sampling process of reward goal ${g}_{r}$ (Sec. 2). This sampling process takes into account both ${g}_{\pi }$ and $T$ . Furthermore, ${g}_{r}$ is selected from the sub-trajectory that starts at $s$ with probability $\frac{k}{k + 1}$ . Therefore, the probability $f\left( {{s}^{\prime } \mid s,a,{g}_{r},{g}_{\pi },T}\right)$ of the next state ${s}^{\prime }$ after knowing ${g}_{\pi },{g}_{r}$ and $T$ is generally not the same as $f\left( {{s}^{\prime } \mid s,a}\right)$ , which is what HER uses empirically to estimate ${Q}_{HER}^{\pi }\left( {s,a,{g}_{r},{g}_{\pi }}\right)$ . The following proposition quantifies this bias ratio.
50
+
51
+ Proposition 1. Suppose ${g}_{\pi }$ is fixed at the start of the trajectory, and ${g}_{r}$ is sampled using HER. Then for any ${s}^{\prime },s,a,{g}_{r},{g}_{\pi },T,f\left( {{s}^{\prime } \mid s,a,{g}_{r},{g}_{\pi },T}\right) = \frac{f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T - 1}\right) }{f\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right) }f\left( {{s}^{\prime } \mid s,a}\right)$ .
52
+
53
+ Proof: Appendix (A.3). This identity presents an interesting corollary.
54
+
55
+ Corollary. Suppose ${Q}_{HER}^{\pi }\left( {s,a,{g}_{\pi },{g}_{\pi }}\right)$ satisfies the Bellman equation and the distribution of future achieved goals is absolutely continuous with respect to the goal space for all $s,a,{g}_{\pi }$ , and $\pi \left( {s,{g}_{\pi }}\right) =$ $\arg \mathop{\max }\limits_{{a}^{\prime }}{Q}_{HER}^{\pi }\left( {s,{a}^{\prime },{g}_{\pi },{g}_{\pi }}\right)$ . Then ${Q}_{HER}^{\pi }\left( {s,a,{g}_{\pi },{g}_{\pi }}\right) = {Q}^{ * }\left( {s,a,{g}_{\pi }}\right)$ , where ${Q}^{ * }$ is the optimal goal-conditioned $Q$ -function.
56
+
57
+ Proof: Appendix (A.4). While this establishes that the target value for ${Q}_{HER}^{\pi }$ is unbiased when ${g}_{r} = {g}_{\pi }$ , the function approximator for ${Q}_{HER}^{\pi }$ may still be biased, because values other than ${g}_{r} = {g}_{\pi }$ may influence it through the training of the network. Thus, it is possible that the learned ${Q}_{HER}^{\pi }$ value may remain biased until unacceptably large amounts of data are gathered. Additionally, since the density of data is discontinuous, ${Q}_{HER}^{\pi }$ may be discontinuous and difficult to approximate with a neural network. The rest of this section is devoted to developing an importance sampling method that is guaranteed to be asymptotically unbiased over the entire domain of $Q$ .
58
+
59
+ Unbiased HER. To estimate ${Q}^{\pi }\left( {s,a,{g}_{r},{g}_{\pi }}\right)$ , the solution to the unbiased Bellman equation, we use in this work the following expression,
60
+
61
+ ${Q}^{\pi }\left( {s,a,{g}_{r},{g}_{\pi }}\right) = {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {M\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right) \left( {R\left( {{s}^{\prime },{g}_{r}}\right) + \gamma {Q}^{\pi }\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }}\right) }\right) \mid s,a,{g}_{r},{g}_{\pi },T}\right\rbrack ,$ where $M\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right)$ is a weight that cancels the bias ratio given in Proposition 1. Conditioning the expected value over ${s}^{\prime }$ on ${g}_{r},{g}_{\pi }$ , and $T$ frees us from the constraint that ${s}^{\prime }$ needs to be independent of ${g}_{r},{g}_{\pi }$ , and $T$ . This would allow us to select ${g}_{r}$ from the future trajectory of $s$ , as HER does. Note that conditioning on $T$ , the number of steps left in the trajectory, is necessary because the distribution of goals selected by HER is not time-independent.
62
+
63
+ Proposition 1 is useful for understanding what situations may cause HER to be biased, but unfortunately we cannot directly use it for importance sampling. Weighting samples by setting $M\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right)$ as $\frac{f\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right) }{f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T - 1}\right) }$ would require $f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T - 1}\right)$ to always be greater than 0, which is not necessarily true. To solve this, we sample a mixture of hindsight goals and goals drawn uniformly from the goal space $G$ . Of the goals where ${g}_{r} \neq {g}_{\pi }$ , a fraction $\alpha$ of our goals will be drawn uniformly from the goal space, and the remaining $1 - \alpha$ will be drawn from the trajectory that follows $s$ . This results in the following identity,
64
+
65
+ Proposition 2. Let $W\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right) = \frac{f\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right) }{{\alpha f}\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right) + \left( {1 - \alpha }\right) f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T - 1}\right) }$ . Let $\alpha$ be a real value in the range $(0,1\rbrack$ . Then for any ${s}^{\prime },s,a,{g}_{r},{g}_{\pi }$ ,
66
+
67
+ $$
68
+ f\left( {{s}^{\prime } \mid s,a}\right) = W\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right) \left( {{\alpha f}\left( {{s}^{\prime } \mid s,a}\right) + \left( {1 - \alpha }\right) f\left( {{s}^{\prime } \mid s,a,{g}_{\pi },{g}_{r},T}\right) }\right)
69
+ $$
70
+
71
+ Furthermore, for any function $F$ of state ${s}^{\prime }$ ,
72
+
73
+ $$
74
+ {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {F\left( {s}^{\prime }\right) \mid s,a}\right\rbrack = \alpha {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right) F\left( {s}^{\prime }\right) \mid s,a}\right\rbrack
75
+ $$
76
+
77
+ $$
78
+ + \left( {1 - \alpha }\right) {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right) F\left( {s}^{\prime }\right) \mid s,a,{g}_{\pi },{g}_{r},T}\right\rbrack . \tag{1}
79
+ $$
80
+
81
+ Proof: Appendix (A.5). We can now derive an unbiased variant of HER by applying Proposition 2 to Bellman equation.
82
+
83
+ Corollary. Suppose $\pi$ is a deterministic policy, ${g}_{r}$ is sampled from the previously mentioned mix of hindsight and uniform random goals, and ${g}_{\pi }$ is fixed at the start of the trajectory. Then for any ${s}^{\prime },s,a,{g}_{r},{g}_{\pi },T$ ,
84
+
85
+ $$
86
+ Q\left( {s,a,{g}_{r},{g}_{\pi }}\right) = \alpha {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right) \left( {R\left( {{s}^{\prime },{g}_{r}}\right) + {\gamma Q}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }}\right) }\right) \mid s,a}\right\rbrack
87
+ $$
88
+
89
+ $$
90
+ + \left( {1 - \alpha }\right) {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {{s}^{\prime },s,a,{g}_{r},{g}_{\pi },T}\right) \left( {R\left( {{s}^{\prime },{g}_{r}}\right) + {\gamma Q}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi }}\right) }\right) \mid s,a,{g}_{\pi },{g}_{r},T}\right\rbrack \tag{2}
91
+ $$
92
+
93
+ This corollary provides us with a simple method of estimating $Q\left( {s,a,{g}_{r},{g}_{\pi }}\right)$ using HER. A similar unbiased expression can be derived for estimating the gradient of the Bellman error with respect to the weights of a $Q$ -function network, instead of estimating $Q\left( {s,a,{g}_{r},{g}_{\pi }}\right)$ directly from samples.
94
+
95
+ Learning the future goal distribution. In order to use the proposed unbiased estimator of the Q-function with policy and reward goals, we need to compute weight $W$ defined in Proposition 2. This can be achieved by learning future goal distributions $f\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right)$ and $f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T - 1}\right)$ , which both correspond to the conditional probability that a given goal ${g}_{r}$ will be selected as a hindsight goal by HER. A technique for learning such long-term distributions, introduced in [5], consists in training a network ${f}_{\theta }$ , with parameters $\theta$ , to approximate the density of future goals $f\left( {{g}_{r} \mid s,a}\right)$ . The following estimator for the gradient is used in [5], sampling $\left( {s,a,{s}^{\prime }}\right)$ from transitions, and fixing ${g}_{r}$ at the start of each trajectory,
96
+
97
+ $$
98
+ {\nabla }_{\theta }\left( {{\mathbb{E}}_{s,a}\left\lbrack {-{f}_{\theta }\left( {s,a,\phi \left( s\right) }\right) }\right\rbrack + {\mathbb{E}}_{s,a,{s}^{\prime },{g}_{r}}\left\lbrack {{f}_{\theta }\left( {s,a,{g}_{r}}\right) \left( {{f}_{\theta }\left( {s,a,{g}_{r}}\right) - \gamma \mathop{\max }\limits_{{a}^{\prime }}{f}_{\text{ target }}\left( {{s}^{\prime },{a}^{\prime },{g}_{r}}\right) }\right) }\right\rbrack }\right) ,
99
+ $$
100
+
101
+ wherein ${f}_{\text{ target }}$ is a copy of ${f}_{\theta }$ that is updated separately. However, this method has significantly higher variance than HER [5]. We examine here the source of this variance and explain how separating the policy and reward goals allows us to avoid it. One issue that contributes to variance is that the gradient is split into two parts: one in which the goal comes from the state $\left( {\phi \left( s\right) }\right)$ , and one in which the goal is sampled at the start of the trajectory $\left( {g}_{r}\right)$ . This is a problem because the gradient at the state-derived goals is strictly negative, while the gradient at the sampled goals is usually positive. In our experiments, this led to a pattern where the value at the state-derived goals would diverge unboundedly until a goal was sampled sufficiently close to make the value function crash back down to zero, after which the process would repeat. In other words, ${f}_{\theta }$ is not guaranteed to converge to a fixed point for every finite set of trajectories.
102
+
103
+ One way to avoid this problem would be to have a fixed, non-zero chance that $\phi \left( s\right) = {g}_{r}$ , so that ${f}_{\theta }$ always converges to a fixed point given any set of training trajectories. We use HER to achieve this outcome. This is possible, unlike in [5], because we can use the importance sampling method derived above to sample a mixture of HER goals and goals independent of the state. Since HER draws from the future states of $s$ , observe that $f\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right)$ is in fact a successor representation [13], using an average-reward formulation (because the probability of selecting any of the next T states is uniform). Observe that we can define this probability as
104
+
105
+ $$
106
+ f\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right) = {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {\frac{1}{T}\delta \left( {{g}_{r} - \phi \left( {s}^{\prime }\right) }\right) + \left( {1 - \frac{1}{T}}\right) f\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T - 1}\right) \mid s,a}\right\rbrack ,
107
+ $$
108
+
109
+ wherein $\delta$ is the Dirac delta function. This results in the loss gradient:
110
+
111
+ $$
112
+ {\nabla }_{\theta }\left( {{\mathbb{E}}_{s,a,{g}_{r},{g}_{\pi },T}\left\lbrack {-\frac{2}{T}{f}_{\theta }\left( {s,a,\phi \left( s\right) ,{g}_{\pi },T}\right) + {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {L\left( {s,a,{s}^{\prime },{g}_{r},{g}_{\pi },T}\right) \mid s,a}\right\rbrack }\right\rbrack }\right) , \tag{3}
113
+ $$
114
+
115
+ $L\left( {s,a,{s}^{\prime },{g}_{r},{g}_{\pi },T}\right) \triangleq {f}_{\theta }\left( {s,a,{g}_{r},{g}_{\pi },T}\right) \left( {{f}_{\theta }\left( {s,a,{g}_{r},{g}_{\pi },T}\right) - \gamma {f}_{\text{ target }}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi },T - 1}\right) }\right) .$ While ${f}_{\theta }$ may not be a true probability density (because it may not integrate to 1), this does not matter for our purposes, as this factor will divide out when we calculate $W$ . Finally, we inject the formula in Equation 3 into Equation 1, while replacing $F$ with ${f}_{\theta }$ , to derive the following unbiased loss gradient,
116
+
117
+ $$
118
+ {\nabla }_{\theta }\mathcal{L} = {\nabla }_{\theta }\mathbb{E}\Big\lbrack  - \frac{2}{T}{f}_{\theta }\left( {s,a,\phi \left( s\right) ,{g}_{\pi },T}\right)  + \alpha {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {s,a,{s}^{\prime },{g}_{r},{g}_{\pi },T}\right) L\left( {s,a,{s}^{\prime },{g}_{r},{g}_{\pi },T}\right)  \mid  s,a}\right\rbrack
119
+ $$
120
+
121
+ $$
122
+ + \left( {1 - \alpha }\right) {\mathbb{E}}_{{s}^{\prime }}\left\lbrack {W\left( {s,a,{s}^{\prime },{g}_{r},{g}_{\pi },T}\right) L\left( {s,a,{s}^{\prime },{g}_{r},{g}_{\pi },T}\right)  \mid  s,a,{g}_{r},{g}_{\pi },T}\right\rbrack \Big\rbrack . \tag{4}
123
+ $$
124
+
125
+ Note that the values of $\alpha$ we use for learning ${Q}_{\theta }$ (Equation 2) and goal distribution densities ${f}_{\theta }$ (Equation 4) can be different. For discrete environments, we can learn the future distribution of the goal state using simple tabular methods, such as tabular successor representations.
126
+
127
+ § 5 ALGORITHM AND IMPLEMENTATION
128
+
129
+ USHER may be implemented atop DDPG [7], SAC [8], TD3 [9], or any other continuous RL algorithm, as it only changes the loss function for training the goal-conditioned Q-value network. In our experiments, we use SAC as a base. USHER calculates the loss as follows: It samples a batch of transitions $\left( {s,a,{s}^{\prime },{g}_{\pi },T}\right)$ from the replay buffer, along with two sets of goals: ${g}_{r}$ , which is drawn from the future distribution of $s$ , and ${g}_{r}^{\prime }$ , which is drawn uniformly from the goal space $G$ . For each set of goals, we calculate two values of $W$ . For weighting the $Q$ -values, we use ${\alpha }_{Q} = {0.01}$ , and for weighting the $f$ -values, we use ${\alpha }_{f} = {0.5}$ . We omit the full training loop here, as it is identical to standard HER except for the loss computation.
130
+
131
+ We make a few minor adjustments to the derived formula in order to minimize the variance induced by importance sampling and ensure numerical stability. In order to minimize the variance induced by importance sampling, we clip the importance sampling fraction to the range $\left\lbrack {\frac{1}{1 + c},1 + c}\right\rbrack$ , where $c$ is a hyperparameter. This allows us to make a bias/variance trade-off between hindsight bias and the variance induced by importance sampling. We find that performance is best for $c \approx {0.3}$ , and that the bias induced by clipping is negligible for $c > {1.0}$ for most environments. We apply this to the importance sampling weights for $Q$ only $\left( {W}_{{\alpha }_{Q}}\right.$ and $\left. {W}_{{\alpha }_{Q}}^{\prime }\right)$ , not $f$ . We approximate $W$ using ${f}_{\theta }$ for all experiments. In order to reduce the total number of neural network evaluations, we made ${Q}_{\theta }$ and ${f}_{\theta }$ two heads of a two-headed neural network. Although this choice conditions the value function on $T$ , note that by the definition of the $Q$ function, $Q\left( {s,a,{g}_{r},{g}_{\pi }}\right) = \mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{R}^{t} \mid s,a,{g}_{r},{g}_{\pi }}\right\rbrack =$ ${\mathbb{E}}_{T}\left\lbrack {\mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{R}^{t} \mid s,a,{g}_{r},{g}_{\pi },T}\right\rbrack }\right\rbrack = {\mathbb{E}}_{T}\left\lbrack {Q\left( {s,a,{g}_{r},{g}_{\pi },T}\right) }\right\rbrack$ . Thus, training the policy on the $T$ - conditioned value function should result in the policy receiving the same gradient in expectation. We found that this trick did not noticeably impact the behavior of the $Q$ function or the policy.
132
+
133
+ § 6 EXPERIMENTS
134
+
135
+ Tasks. We evaluate the proposed algorithm in the following environments, illustrated in Figure 2. (1) Discrete. We first demonstrate our method in the discrete case in Figure 1 because it is analytically tractable and allows us to verify that USHER learns the correct value function. The environment used has a short, risky path that has a high chance of disabling the robot, and a longer risk-free path. The longer path has a higher expected reward, but we will demonstrate that HER mistakenly prefers the riskier path. (2) 4-Torus with Freeze. This environment was introduced in [5] to demonstrate HER's bias. Robots take steps on a continuous N-dimensional torus to reach a location. They also have a "freeze" action that will teleport them to a random location, but freeze them in place for the rest of the episode. HER learns to always take the freeze action. (3) Car with Random Noise and (4) Red Light. These two environments use the "Simple Car" dynamics described in [14]. In Car with Random Noise, the robot must navigate around walls while subject to Gaussian action noise [15]. In Red Light, the robot must learn to safely navigate a traffic light to reach its goal. (5) Fetch. These three environments task a robot arm with reaching a point, pushing an object, and sliding an object to a point outside of the robot's reach, respectively. We use these to demonstrate that USHER has comparable performance to HER on deterministic, high-dimensional environments. (6) Mobile Throwing Robot. We design a simulated robot arm on a mobile base, and task it with throwing a ball to a randomly selected location. There is also a ${50}\%$ chance of wind that can blow the ball off course. (7) Navigation on a physical mechanum robot. Lastly, we train a mechanum robot to navigate around obstacles to reach a goal and deploy it on a physical robot. The terrain contains a high friction zone that leads to the goal faster, but unreliably. Transfer was done by rolling out trajectories in simulation, and then deploying the same sequence of actions on the physical robot as open-loop control.
136
+
137
+ Algorithm 1 USHER
138
+
139
+ Input: Replay Buffer $B$ , Two-headed Critic Network with weights $\theta$ , Actor Network with weights $w$ ,
140
+
141
+ Weighting Factor ${\alpha }_{Q}$ for $Q$ , Weighting Factor ${\alpha }_{f}$ for $f$ , Goal Space $G$ , Goal Function $\phi$ ;
142
+
143
+ Sample batch of tuples $\left( {s,a,{s}^{\prime },{g}_{\pi },T}\right)$ from $B$ ; critic_loss $\leftarrow 0$ ; actor_loss $\leftarrow 0$ ;
144
+
145
+ for each sampled tuple $\left( {s,a,{s}^{\prime },{g}_{\pi },T}\right) \in B$ do
146
+
147
+ Sample ${g}_{r}$ as ${g}_{\pi }$ , with probability $\frac{1}{k + 1}$ , and uniformly from the future trajectory that starts at $s$ with
148
+
149
+ probability $\frac{k}{k + 1}$ ( $k$ is a pre-defined number); Sample alternative goal ${g}_{r}^{\prime } \sim \operatorname{Uniform}\left( G\right)$ ;
150
+
151
+ target $\_ q = R\left( {\phi \left( {s}^{\prime }\right) ,{g}_{r}}\right) + {Q}_{\text{ target }}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r},{g}_{\pi },T - 1}\right) ;//{Q}_{\text{ target }}$ is a copy of ${Q}_{\theta }$
152
+
153
+ target_ ${q}^{\prime } = R\left( {\phi \left( {s}^{\prime }\right) ,{g}_{r}^{\prime }}\right) + {Q}_{\text{ target }}\left( {{s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{r}^{\prime },{g}_{\pi },T - 1}\right)$ ;
154
+
155
+ Define $W\left( {s,a,{s}^{\prime },{g}_{\pi },{g}_{r},T,\alpha }\right) = \frac{{f}_{\theta }\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right) }{\alpha {f}_{\theta }\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right) + \left( {1 - \alpha }\right) {f}_{\text{ target }}\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T - 1}\right) }$ ;
156
+
157
+ Set ${W}_{{\alpha }_{f}} = \left( {1 - {\alpha }_{f}}\right) W\left( {s,a,{s}^{\prime },{g}_{\pi },{g}_{r},T,{\alpha }_{f}}\right) ,{W}_{{\alpha }_{f}}^{\prime } = {\alpha }_{f}W\left( {s,a,{s}^{\prime },{g}_{\pi },{g}_{r}^{\prime },T,{\alpha }_{f}}\right) ,{W}_{{\alpha }_{Q}} = (1 -$
158
+
159
+ $\left. {\alpha }_{Q}\right) W\left( {s,a,{s}^{\prime },{g}_{\pi },{g}_{r},T,{\alpha }_{Q}}\right)$ , and ${W}_{{\alpha }_{Q}}^{\prime } = {\alpha }_{Q}W\left( {s,a,{s}^{\prime },{g}_{\pi },{g}_{r}^{\prime },T,{\alpha }_{Q}}\right)$ ;
160
+
161
+ critic_loss $+ = \left( {{W}_{{\alpha }_{f}}\left( {{f}_{\theta }{\left( {g}_{r} \mid s,a,{g}_{\pi },T\right) }^{2} - 2{f}_{\theta }\left( {{g}_{r} \mid s,a,{g}_{\pi },T}\right) {f}_{\text{ target }}\left( {{g}_{r} \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T}\right) }\right) }\right)$
162
+
163
+ $+ \left( {{W}_{{\alpha }_{f}}^{\prime }\left( {{f}_{\theta }{\left( {g}_{r}^{\prime } \mid s,a,{g}_{\pi },T\right) }^{2} - 2{f}_{\theta }\left( {{g}_{r}^{\prime } \mid s,a,{g}_{\pi },T}\right) {f}_{\text{ target }}\left( {{g}_{r}^{\prime } \mid {s}^{\prime },\pi \left( {{s}^{\prime },{g}_{\pi }}\right) ,{g}_{\pi },T - 1}\right) }\right) }\right) - \left( {\frac{2}{T}{f}_{\theta }\left( {\phi \left( {s}^{\prime }\right) \mid }\right. }\right.$
164
+
165
+ $\left. \left. {s,a,{g}_{\pi },T}\right) \right) + \left( {{W}_{{\alpha }_{Q}}{\left( {Q}_{\theta }\left( s,a,{g}_{r},{g}_{\pi },T\right) - \text{ target }\_ q\right) }^{2}}\right) + \left( {{W}_{{\alpha }_{Q}}^{\prime }{\left( {Q}_{\theta }\left( s,a,{g}_{r}^{\prime },{g}_{\pi },T\right) - \text{ target }\_ {q}^{\prime }\right) }^{2}}\right) ;$
166
+
167
+ actor_loss $+ = - {Q}_{\theta }\left( {s,\pi \left( {s,{g}_{\pi }}\right) ,{g}_{\pi },{g}_{\pi },T}\right)$ ;
168
+
169
+ end for
170
+
171
+ Backprop critic_loss and update $\theta$ ; Backprop actor_loss and update $w$ ;
172
+
173
+ Evaluation Metrics. On all tasks, we report the success rate, the average reward, and the average bias at the starting state (measured as the average cumulative discounted reward minus the value at the starting state). For all experiments (except Discrete and physical and simulated Mechanum environments), we report the sample mean and confidence interval, measured using five training runs for each method with randomly initialized seeds.
174
+
175
+ Summary of Results. Figure 3 summarizes the results of the experiments. (1) Discrete. The value functions for USHER and Q-learning both quickly converge to the expected value, while HER overestimates the expected reward. USHER and Q-learning both learn to take the long, safe path, while HER takes the short, risky path. (2) 4-Torus with Freeze. HER learns to always take the freeze action and fails as a result, while USHER learns a successful policy. DDPG and $\delta$ -DDPG are unbiased in this environment, but DDPG struggles due to the difficulty of exploring in high dimensions, and $\delta$ -DDPG struggles with its variance. (3) Car with Random Noise. HER performs well for low noise values, but tends to overestimate values more as the noise level rises. USHER suffers significantly less from high noise levels than HER. (4) Red Light.
176
+
177
+ < g r a p h i c s >
178
+
179
+ Figure 2: Some of the robotic tasks used in the experiments. (a) Fetch reach. (b) Fetch push. (c) Fetch slide. (d, e) Simulated mechanum robot with random obstacle in two different positions. (f) Car with random noise. (g, h) Mobile Throwing Robot. (i) Physical Mechanum Robot.
180
+
181
+ HER learns to run the red light, while USHER waits for the red light to end. USHER achieves higher success rates and rewards. (5) Fetch. USHER matches HER's performance on all of the tested environments. This suggests that the importance sampling method does not significantly affect USHER's variance or sample efficiency in deterministic environments, where HER is known to be unbiased. It also significantly outperforms two other unbiased methods, DDPG and $\delta$ -DDPG, on FetchReach. (6) Mobile Throwing Robot. USHER matches HER's sample efficiency until the point where HER's bias causes its performance to suffer. USHER's performance, by contrast, continues to grow steadily to a 75% success rate, significantly better than HER's 55%. Interestingly, we find that USHER actually underestimates its reward here. This is likely because this environment is slightly non-Markovian: the wind is sampled at the beginning of each trajectory and then remains fixed, whereas USHER's proof of unbiasedness assumes that the environment is Markovian. It is interesting to note that USHER still performs well even when this property does not completely hold. (7) Navigation on a physical mechanum robot. We find that USHER outperforms HER. Both robots take the short path when it is open. When the path is blocked, HER repeatedly slams into the obstacle, whereas USHER runs into the obstacle once and then turns to go around it. This leads USHER to have a higher success rate. In simulation, HER's success rate is approximately 50%, while USHER's is near 100%. Due to the difficulty of transfer, USHER's performance drops on the physical robot, but it still outperforms HER: HER succeeded on 4/10 goals, while USHER succeeded on 6/10. Both methods succeed ${100}\%$ of the time on the unblocked path environment.
182
+
183
+ § 7 LIMITATIONS
184
+
185
+ One limitation of this work is that we rely on the Markov assumption to derive our importance sampling weights. This means that while we can correctly estimate the value function for stochastic transitions, we cannot guarantee that the learned value is correct in environments with hidden
186
+
187
+ [Figure 3 graphic]
188
+
189
+ Figure 3: Average reward, success rate, and Q-value bias of HER and USHER on the different tasks.

+ information. It is unclear whether this is actually an issue in practice, as USHER still outperforms HER on the non-Markovian environments we tested (such as the Throwing Bot). Additionally, USHER requires approximately 2.5 times as many neural net evaluations as HER does per batch update. This was not an issue in our experiments, as the cost of simulation and policy evaluations usually dominated the training time.
190
+
191
+ § 8 CONCLUSION
192
+
193
+ We derive an unbiased importance sampling method for HER, and show that it is able to effectively counteract HER's hindsight bias. We find that addressing this bias leads to higher success rates and rewards in a range of stochastic environments. Furthermore, we introduce a mathematical framework to justify our method which can be used to examine the situations where HER is likely to experience significant bias. In future work, we hope to examine the finite-sample case, in order to better understand whether HER introduces a bias there, and if so, how it could be corrected.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/7CrXRhmzVVR/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,333 @@
1
+ # Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: In this paper, we propose a novel deep reinforcement learning approach for improving the sample efficiency of a model-free actor-critic method by using a learned model to encourage exploration. The basic idea consists in generating imaginary transitions with noisy actions, which can be used to update the critic. To counteract the model bias, we introduce a high initialization for the critic and two filters for the imaginary transitions. Finally, we evaluate our approach with the TD3 algorithm on different robotic tasks and demonstrate that it achieves a better performance with higher sample efficiency than several other model-based and model-free methods.
12
+
13
+ Keywords: Reinforcement learning, Data augmentation, Imaginary exploration, Optimistic initialization
14
+
15
+ ## 1 Introduction
16
+
17
+ Deep reinforcement learning (DRL) has shown its potential in solving difficult robotic tasks, especially when facing complex dynamics and contact-rich environments. For instance, dexterous manipulation tasks like rotating a cube or ball to a desired position and orientation can be solved with state-of-the-art model-free methods [1] or model-based methods [2, 3]. However, the inherent issues related to the stability and sample efficiency of DRL algorithms still make them challenging to apply on real robots. To tackle those issues, various approaches have been investigated. In this paper, we specifically investigate how a learned model can help accelerate a model-free method. Previous research in this direction (see Section 2) used a learned model for data augmentation [4], to improve critic estimation [5], for gradient computation [6], or for guiding exploration in the true environment [7].
18
+
19
+ In contrast to previous model-based methods, we propose to "explore" inside the learned model, which is arguably safer than in the real environment. In the learned model, noisy actions are executed in states visited in the true environment to obtain imaginary transitions, which can be used to compute imaginary target Q-values for updating the Q-function of the current policy. With a perfect transition model, this approach would help accelerate learning the Q-function. However, since both the learned model and the estimated Q-function may be incorrect, directly using those imaginary transitions may lead to potential issues. To counteract them, we introduce several techniques: high Q-value initialization and filtering of imaginary transitions to favor optimistic targets that are uncertain.
20
+
21
+ Intuitively, with a higher initialization, we approximate the "optimism in face of uncertainty" principle when evaluating random actions. In the spirit of this principle, we only keep the imaginary transitions that lead to higher evaluations than the real ones. Moreover, to avoid unnecessary updates, those imaginary transitions are selected with higher probability if there is a larger uncertainty in the corresponding Q-values. Note that since the noisy actions are performed in the learned model, our method is not a true exploration strategy. However, if such imaginary targets are indeed used to update the Q-function, these can lead to higher
22
+
23
+ evaluations of the corresponding actions, which would later favor selecting them in the true environment. In this paper, we implement those techniques in the TD3 algorithm (see Section 3) [8]. However, they may be beneficial in other DRL algorithms as well.
24
+
25
+ Contributions: We propose a novel approach to exploit a learned model in DRL (see Section 4). We validate the proposed method and show that it outperforms relevant state-of-the-art algorithms in MuJoCo [9] robot environments and especially in some complex manipulation tasks (see Section 5). Moreover, we analyze and discuss the different proposed techniques.
26
+
27
+ ## 2 Related Work
28
+
29
+ Researchers have explored various methods for combining a learned model with model-free reinforcement learning algorithms. They can mainly be divided into four categories: (1) model-based data augmentation, (2) model-based value estimation, (3) analytic gradient calculation, and (4) model-guided exploration.
30
+
31
+ Dyna [4] is a typical architecture for learning a dynamics model with the true experience and using the dynamics model to generate imaginary data for training a value function and a policy. Gu et al. [10] introduce a local linear model for representing the dynamics and show the improvement of using the imaginary rollouts with this model. ME-TRPO [11] is a method leveraging an ensemble dynamics model in a model-free reinforcement learning algorithm TRPO [12]. Another work, MA-BDDPG [13], aims to alleviate the effects of model bias by considering uncertainty when using imaginary transitions stored in a replay buffer. The uncertainty of imaginary transitions is measured as the variance of an ensemble of critics. In contrast to this method, we use the disagreement between the two target critics in TD3 [8] as an uncertainty measure and generate imaginary transitions online instead of maintaining another imaginary replay buffer.
32
+
33
+ Focusing on improving the estimation of the Q-function, Feinberg et al. [5] propose a method called MVE which uses the observed states and a learned dynamics model to simulate for a fixed horizon with the current policy. With this imaginary segment, a better target Q-value (value expansion) is applied in the training of the critic. Instead of using the dynamics model to roll forward for several steps, our method simulates only one step (to reduce the impact of the model error), but not directly with the current policy. The performance of MVE is sensitive to a difficult-to-set hyperparameter, the simulation horizon, which is limited by the quality of the learned model. To overcome this problem, Buckman et al. [14] use a weighted sum of value expansions from different horizons and different models. They learn ensemble models to approximate the transition function, reward function, and Q-function. Weights are then assigned to the value estimations of different prediction horizons according to the variance from those models. To ensure a monotonic improvement under model bias, Janner et al. [15] provide a theoretical analysis of how to choose the simulation horizon.
34
+
35
+ Instead of using the learned model as an environment simulator, researchers have also exploited its differentiability. Deisenroth and Rasmussen [16] propose a method called PILCO to learn a probabilistic dynamics model and use it in policy search by calculating the analytical gradient of the objective with respect to the policy parameters. Clavera et al. [6] further extend the idea and train a bootstrap ensemble probabilistic dynamics model.
36
+
37
+ As for guiding the exploration, a natural idea is to try to explore more in regions where the learned model is uncertain. For instance, Pathak et al. [17] formulate the disagreement across an ensemble model as an intrinsic reward. Another example is the work of Shyam et al. [7] in which they measure the novelty of state-action pairs with a learned model and use this novelty as the objective of an exploration Markov Decision Process (MDP) to find an exploratory policy. In contrast to these methods, we influence the exploration in the true environment by trying noisy actions in the learned model and setting a high initialization for the critics.
38
+
39
+ ## 3 Background
40
+
41
+ A Markov Decision Process (MDP) is composed of a set of states $\mathcal{S}$ , a set of actions $\mathcal{A}$ , a transition function $T : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{P}\left( \mathcal{S}\right)$ (with $\mathcal{P}\left( \mathcal{S}\right)$ denoting the set of probability distributions over $\mathcal{S}$ ), a reward function $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ , and a distribution over initial states $\mu \in \mathcal{P}\left( \mathcal{S}\right)$ . Given a deterministic policy $\pi : \mathcal{S} \rightarrow \mathcal{A}$ , the value function ${V}^{\pi }\left( s\right) = {\mathbb{E}}_{\pi }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t} \mid {s}_{0} = s}\right\rbrack$ is defined as the expected discounted cumulative reward an agent will receive starting from a feasible state $s$ and following the policy. The discount factor is $\gamma \in \left\lbrack {0,1}\right\rbrack$ . Solving this MDP means finding an optimal policy ${\pi }^{ * }$ that maximizes the expected value function: ${\pi }^{ * } = \arg \mathop{\max }\limits_{\pi }{\mathbb{E}}_{\mu }\left\lbrack {{V}^{\pi }\left( s\right) \mid s \sim \mu }\right\rbrack$ . To find this optimal policy, we often need an action-value function ${Q}^{\pi }\left( {s, a}\right) = {\mathbb{E}}_{\pi }\left\lbrack {r\left( {s, a}\right) + \gamma {V}^{\pi }\left( {s}^{\prime }\right) }\right\rbrack$ , which corresponds to the expected discounted sum of rewards obtained by executing action $a$ in state $s$ and acting according to policy $\pi$ thereafter. Since we focus on robotic tasks, we only consider deterministic MDPs in this work, although our approach could certainly be applied in stochastic settings as well.
42
+
43
+ Deep Deterministic Policy Gradient (DDPG) DDPG [18] is a DRL algorithm with an actor-critic structure for solving an MDP with continuous state and action spaces. The policy $\pi$ (actor) and its Q-function ${Q}^{\pi }\left( {s, a}\right)$ (critic) are approximated by neural networks parameterized by $\theta$ and $\phi$ respectively. In DDPG, the actor interacts with the environment generating transitions that are stored in a replay buffer. At each training step, a mini-batch $\left\{ {\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right) \mid i = 1,\cdots , N}\right\}$ is sampled from the replay buffer and used for updating the actor’s parameters $\theta$ according to the deterministic policy gradient:
44
+
45
+ $$
46
+ {\nabla }_{\theta }\mathcal{L}\left( \pi \right) = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\nabla }_{a}Q\left( {{s}_{i}, a \mid \phi }\right) { \mid }_{a = \pi \left( {{s}_{i} \mid \theta }\right) }{\nabla }_{\theta }\pi \left( {{s}_{i} \mid \theta }\right) , \tag{1}
47
+ $$
48
+
49
+ while the critic’s parameters $\phi$ are updated to minimize the following loss function:
50
+
51
+ $$
52
+ \mathcal{L}\left( Q\right) = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\left( Q\left( {s}_{i},{a}_{i} \mid \phi \right) - {y}_{i}\right) }^{2} \tag{2}
53
+ $$
54
+
55
+ where ${y}_{i}$ is the target Q-value for the sampled transition $\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right)$ :
56
+
57
+ $$
58
+ {y}_{i} = {r}_{i} + {\gamma Q}\left( {{s}_{i + 1},\pi \left( {{s}_{i + 1} \mid {\theta }^{\prime }}\right) \mid {\phi }^{\prime }}\right) . \tag{3}
59
+ $$
60
+
61
+ To improve the learning stability, the target Q-value is calculated with a target Q function $Q\left( {\cdot , \cdot \mid {\phi }^{\prime }}\right)$ and a target actor $\pi \left( {\cdot \mid {\theta }^{\prime }}\right)$ . The target networks $\left( {{\phi }^{\prime },{\theta }^{\prime }}\right)$ are initialized to the same parameters as the original networks $\left( {\phi ,\theta }\right)$ , but are then updated slowly towards the original ones with ${\phi }^{\prime } \leftarrow \left( {1 - \tau }\right) {\phi }^{\prime } + {\tau \phi }$ and ${\theta }^{\prime } \leftarrow \left( {1 - \tau }\right) {\theta }^{\prime } + {\tau \theta }$ , where $\tau \in \left( {0,1}\right)$ is a hyperparameter.
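+ As an illustrative sketch (assuming a PyTorch implementation; not taken from the paper's code), the soft target-network update above can be written as:

```python
import torch

@torch.no_grad()
def soft_update(target_net, online_net, tau=0.005):
    """Polyak-average the online parameters into the target network:
    target <- (1 - tau) * target + tau * online."""
    for p_t, p in zip(target_net.parameters(), online_net.parameters()):
        p_t.mul_(1.0 - tau).add_(tau * p)
```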
62
+
63
+ Twin Delayed DDPG (TD3) TD3 [8] improves DDPG with three tricks: clipped double-Q learning, target policy smoothing, and delayed policy update. The first trick aims to prevent overestimation by learning two critics $Q\left( {\cdot , \cdot \mid {\phi }_{1}}\right) , Q\left( {\cdot , \cdot \mid {\phi }_{2}}\right)$ (with their corresponding target critics $Q\left( {\cdot , \cdot \mid {\phi }_{1}^{\prime }}\right) , Q\left( {\cdot , \cdot \mid {\phi }_{2}^{\prime }}\right)$ ). When calculating the target Q-value, the target critic with smaller value is used for training. The second trick aims to smooth the objective function by injecting a truncated Gaussian noise $\epsilon$ to the output of the target policy. Note that the perturbed actions are clipped to ensure they remain feasible. For legibility, we do not write this step in our equations. With these two tricks, the target Q-value becomes:
64
+
65
+ $$
66
+ {y}_{i} = {r}_{i} + \gamma \min \left\{ {Q\left( {{s}_{i + 1},\pi \left( {{s}_{i + 1} \mid {\theta }^{\prime }}\right) + \epsilon \mid {\phi }_{1}^{\prime }}\right) , Q\left( {{s}_{i + 1},\pi \left( {{s}_{i + 1} \mid {\theta }^{\prime }}\right) + \epsilon \mid {\phi }_{2}^{\prime }}\right) }\right\} . \tag{4}
67
+ $$
68
+
69
+ Lastly, the delayed policy update enforces that the actor be updated at a lower frequency than the critic. With those tricks, it has been empirically observed that the performance and stability of learning the Q-function are substantially improved.
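+ To make Equation (4) concrete, here is a hedged PyTorch sketch of the TD3 target computation; the network objects are placeholders and the hyperparameter defaults mirror Table 1:

```python
import torch

def td3_target(r, s_next, actor_target, critic1_target, critic2_target,
               gamma=0.98, noise_std=0.2, noise_clip=0.5, a_max=1.0):
    """Clipped double-Q target with target policy smoothing (Equation 4)."""
    with torch.no_grad():
        a = actor_target(s_next)
        noise = (torch.randn_like(a) * noise_std).clamp(-noise_clip, noise_clip)
        a_next = (a + noise).clamp(-a_max, a_max)
        q_next = torch.min(critic1_target(s_next, a_next),
                           critic2_target(s_next, a_next))
        return r + gamma * q_next
```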
70
+
71
+ ## 4 Methodology
72
+
73
+ Our proposed algorithm extends TD3 to use a learned model to enhance the update of its critics. Note that while the dynamics model is not known, we assume that the reward
74
+
75
+ Algorithm 1 MAMF

+ 1: initialize critics $Q\left( {s, a \mid {\phi }_{1}}\right) , Q\left( {s, a \mid {\phi }_{2}}\right)$

+ 2: initialize actor $\pi \left( {s \mid \theta }\right)$ , model $f$ , and empty replay buffer $\mathcal{R}$

+ 3: set the parameters of the targets ${\phi }_{1}^{\prime } \leftarrow {\phi }_{1},{\phi }_{2}^{\prime } \leftarrow {\phi }_{2}$ and ${\theta }^{\prime } \leftarrow \theta$

+ 4: initialize $\eta$ and $\rho$

+ 5: start with initial state ${s}_{0} \sim \mu$

+ 6: for $t = 0,\ldots , h$ do

+ 7: generate a transition with the random policy and keep track of the max reward ${r}^{ * }$

+ 8: save transition $\left( {{s}_{t},{a}_{t},{r}_{t},{s}_{t + 1}}\right)$ in replay buffer $\mathcal{R}$

+ 9: if the episode ends then reset the environment ${s}_{t + 1} \sim \mu$ end if

+ 10: if $t > {h}_{0}$ then sample $\left\{ {\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right) \mid i = 1,\cdots , N}\right\}$ from $\mathcal{R}$ and pretrain $f$ end if

+ 11: end for

+ 12: use the max reward ${r}^{ * }$ to set the bias in the critics according to Equation (6)

+ 13: start with initial state ${s}_{0} \sim \mu$

+ 14: for $t = 0,\ldots , H$ do

+ 15: interact with the environment with the current policy $\pi$ and small noise

+ 16: save transition $\left( {{s}_{t},{a}_{t},{r}_{t},{s}_{t + 1}}\right)$ to replay buffer $\mathcal{R}$

+ 17: if the episode ends then reset the environment ${s}_{t + 1} \sim \mu$ end if

+ 18: sample $\left\{ {\left( {{s}_{j},{a}_{j},{r}_{j},{s}_{j + 1}}\right) \mid j = 1,\cdots , N}\right\}$ from $\mathcal{R}$ and train $f$

+ 19: sample $\left\{ {\left( {{s}_{k},{a}_{k},{r}_{k},{s}_{k + 1}}\right) \mid k = 1,\cdots , N}\right\}$ from $\mathcal{R}$ and create imaginary transitions

+ 20: $\eta \leftarrow \eta \times \rho$

+ 21: filter the imaginary transitions with the two filters

+ 22: update the critics according to Equation (10)

+ 23: if $t \bmod$ policy_delay $= 0$ then

+ 24: update the actor

+ 25: update the target networks

+ 26: end if

+ 27: end for
136
+
137
+ function is known. This assumption is natural in robotics since the reward function is defined by the system designer to guide the learning of the robot.
138
+
139
+ To obtain a dynamics model, we first pretrain a model and use the data generated during this period to set a high initialization for the critics. The dynamics model is then further updated online and used for creating imaginary transitions with noisy actions. The goal in our approach is to train the critics with imaginary transitions, which can either improve the Q-estimation or guide the exploration such that potentially promising actions could be tried in the true environment. To that aim, we apply two filters to the imaginary transitions, since indiscriminately using all imaginary transitions is generally detrimental due to model bias and Q-value estimation error. The first filter keeps the imaginary transitions with higher target Q-values. The second filter keeps, with higher probability, the imaginary transitions whose Q-values are more uncertain. With all these components, we propose the model-assisted model-free (MAMF) algorithm (see Algorithm 1). To ensure the reproducibility of our results, the source code of MAMF will be released after publication.
140
+
141
+ Next we explain how we learn the dynamics model, how we implement the high initialization, how we perform exploration in the learned model, and how we filter the imaginary transitions.
142
+
143
+ Learning the Dynamics Model. Our algorithm starts by pretraining a dynamics model $\mathcal{M} : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ with samples generated by a random policy (e.g., one that selects actions uniformly at random). Before starting to update $\mathcal{M}$ , ${h}_{0}$ transitions are first collected. Then, $\mathcal{M}$ is repeatedly updated via stochastic mini-batch gradient descent until $h$ transitions have been generated by the random policy, which ends pretraining. After that, the model is still updated with the same procedure, but the transitions are now generated by the current policy. Online DRL training is performed until $H$ transitions have been generated. The model is (pre)trained by minimizing a mean-squared error between predicted next states and true next states computed over a mini-batch $\left\{ {\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right) \mid i = 1,\cdots , N}\right\}$ :
144
+
145
+ $$
146
+ {L}_{\mathcal{M}} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\left( f\left( {s}_{i},{a}_{i}\right) - {s}_{i + 1}\right) }^{2}. \tag{5}
147
+ $$
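+ A minimal sketch of such a dynamics model and the loss of Equation (5), assuming a PyTorch implementation; the layer count and width follow Table 1, but the activation function and class names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Deterministic model f(s, a) -> predicted next state."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def model_loss(model, s, a, s_next):
    """Mean-squared error between predicted and true next states (Equation 5)."""
    return ((model(s, a) - s_next) ** 2).mean()
```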
148
+
149
+ High Initialization of Critics. The critics are initialized to optimistic values by changing the bias term in the last layer of the critic networks. This bias is set as follows:
150
+
151
+ $$
152
+ b = \left( {{r}^{ * } \times l}\right) \times c \tag{6}
153
+ $$
154
+
155
+ where ${r}^{ * }$ is the max reward observed during pretraining, $l$ is the max episode length, and $c \in \left\lbrack {0,1}\right\rbrack$ is a hyperparameter controlling how optimistic the high initialization is.
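+ As a sketch of how this high initialization could be implemented in PyTorch (assuming the critic ends with a single-output linear layer; the helper name is illustrative):

```python
import torch
import torch.nn as nn

def set_optimistic_bias(critic, r_max, max_episode_len, c):
    """Fill the last-layer bias of the critic with b = (r_max * l) * c (Equation 6)."""
    last_linear = [m for m in critic.modules() if isinstance(m, nn.Linear)][-1]
    with torch.no_grad():
        last_linear.bias.fill_(r_max * max_episode_len * c)
```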
156
+
157
+ Exploration with Learned Model. In the actor-critic architecture of TD3, the critic is trained using transitions $\left\{ {\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right) \mid i = 1,\cdots , N}\right\}$ sampled from the replay buffer. For the same set of states $\left\{ {{s}_{i} \mid i = 1,\cdots , N}\right\}$ , a noise from a truncated normal distribution $\nu \sim \operatorname{clip}\left( {\mathcal{N}\left( {0,1}\right) , - \eta ,\eta }\right)$ with a large clipping bound $\eta$ is applied on the policy action $\pi \left( {{s}_{i} \mid }\right.$ $\theta$ ) to generate imaginary transitions. The bound $\eta$ is initialized as the max action ${a}_{\max }$ the agent can take in each dimension. After each iteration, the bound for truncating the Gaussian noise exponentially decays with a fixed decaying rate $\rho$ . Formally, the imaginary transitions $\left( {{s}_{i},{\widehat{a}}_{i},{\widehat{r}}_{i},{\widehat{s}}_{i + 1}}\right)$ generated by the learned model $f$ and known reward function $r$ are written:
158
+
159
+ $$
160
+ {\widehat{a}}_{i} = \operatorname{clip}\left( {\pi \left( {{s}_{i} \mid \theta }\right) + \nu , - {a}_{\max },{a}_{\max }}\right) ,\;{\widehat{s}}_{i + 1} = f\left( {{s}_{i},{\widehat{a}}_{i}}\right) ,\;{\widehat{r}}_{i} = r\left( {{s}_{i},{\widehat{a}}_{i}}\right) . \tag{7}
161
+ $$
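+ A hedged sketch of this imaginary-transition generation (Equation 7); `actor`, `model`, and `reward_fn` are placeholders for the current policy, the learned model $f$ , and the known reward function, and the decay $\eta \leftarrow \eta \times \rho$ is applied by the caller after each iteration:

```python
import torch

def imaginary_transitions(states, actor, model, reward_fn, eta, a_max=1.0):
    """One-step imaginary rollouts with wide truncated-Gaussian action noise."""
    with torch.no_grad():
        a = actor(states)
        noise = torch.randn_like(a).clamp(-eta, eta)   # nu ~ clip(N(0, 1), -eta, eta)
        a_hat = (a + noise).clamp(-a_max, a_max)
        s_hat_next = model(states, a_hat)
        r_hat = reward_fn(states, a_hat)
    return a_hat, r_hat, s_hat_next
```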
162
+
163
+ Filtering. To only use the imaginary transitions whose actions are potentially better than the original ones, these transitions are filtered by their target Q-values. Only the imaginary transitions whose target Q-values are higher than the original ones are considered:
164
+
165
+ $$
166
+ {\widehat{y}}_{i}\left( {{\widehat{r}}_{i},{\widehat{s}}_{i + 1} \mid {\theta }^{\prime },{\phi }_{1}^{\prime },{\phi }_{2}^{\prime }}\right) \geq {y}_{i}\left( {{r}_{i},{s}_{i + 1} \mid {\theta }^{\prime },{\phi }_{1}^{\prime },{\phi }_{2}^{\prime }}\right) \tag{8}
167
+ $$
168
+
169
+ where ${\widehat{y}}_{i}$ and ${y}_{i}$ are calculated by Equation (4)
170
+
171
+ The remaining imaginary transitions are filtered with respect to the uncertainty in the Q-value estimates. This uncertainty can be measured as the disagreement of the two target critics in TD3, which can be expressed as the absolute value of the difference of the target Q-values given by the two target critic networks:
172
+
173
+ $$
174
+ \Delta = \left| {Q\left( {{\widehat{s}}_{i + 1},\pi \left( {{\widehat{s}}_{i + 1} + \epsilon \mid {\theta }^{\prime }}\right) \mid {\phi }_{1}^{\prime }}\right) - Q\left( {{\widehat{s}}_{i + 1},\pi \left( {{\widehat{s}}_{i + 1} + \epsilon \mid {\theta }^{\prime }}\right) \mid {\phi }_{2}^{\prime }}\right) }\right| . \tag{9}
175
+ $$
176
+
177
+ This difference is then normalized by the max difference observed so far in the sampled transitions to obtain a ratio $P = \frac{\Delta }{\max \Delta } \in \left\lbrack {0,1}\right\rbrack$ . With probability $P$ , an imaginary transition is selected for training the critics, otherwise it is dropped. Therefore, imaginary transitions with larger uncertainty have a higher chance to be kept. The number of remaining imaginary transitions after the two filters is denoted by $n$ .
178
+
179
+ Finally, the loss for the critics is composed of two parts, the original loss and an additional loss obtained from the filtered imaginary transitions:
180
+
181
+ $$
182
+ {L}_{\phi } = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\left( Q\left( {s}_{i},{a}_{i} \mid \phi \right) - {y}_{i}\right) }^{2} + \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( Q\left( {s}_{i},{\widehat{a}}_{i} \mid \phi \right) - {\widehat{y}}_{i}\right) }^{2} \tag{10}
183
+ $$
184
+
185
+ where the target value ${y}_{i}$ is calculated with Equation (4) and the target values ${\widehat{y}}_{i}$ for imaginary transitions follow almost the same calculation but with the action, next state, and reward substituted by their imaginary counterparts.
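+ The following rough sketch shows how the two filters and the combined loss of Equation (10) could be wired together in PyTorch. All names are placeholders: `td3_target` is assumed to compute Equation (4) targets, `max_delta` is assumed to be a running scalar tensor maintained by the caller, and the target-policy smoothing noise of Equation (9) is omitted for brevity:

```python
import torch
import torch.nn.functional as F

def mamf_critic_loss(real, imag, critics, critics_target, actor_target,
                     td3_target, max_delta):
    """Real TD3 loss plus a loss on the imaginary transitions that pass both filters."""
    s, a, r, s_next = real            # sampled real transitions
    a_hat, r_hat, s_hat_next = imag   # imaginary transitions built from the same states s

    with torch.no_grad():
        y = td3_target(r, s_next)                 # targets on real data (Eq. 4)
        y_hat = td3_target(r_hat, s_hat_next)     # targets on imaginary data
        keep = y_hat >= y                         # filter 1: optimistic targets (Eq. 8)

        a_next = actor_target(s_hat_next)         # filter 2: uncertainty ratio (Eq. 9)
        delta = (critics_target[0](s_hat_next, a_next) -
                 critics_target[1](s_hat_next, a_next)).abs()
        max_delta = torch.maximum(max_delta, delta.max())   # running max of Delta
        keep &= torch.rand_like(delta) < delta / max_delta.clamp_min(1e-8)

    loss = 0.0
    for q in critics:
        loss = loss + F.mse_loss(q(s, a), y)
        if keep.any():
            loss = loss + F.mse_loss(q(s, a_hat)[keep], y_hat[keep])
    return loss, max_delta
```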
186
+
187
+ ## 5 Experimental Results
188
+
189
+ Our experiments are designed to answer the following questions:
190
+
191
+ - Do our propositions improve the performance with respect to relevant baselines?
192
+
193
+ ![01963fe1-63c8-75c7-bd0a-109fa4a0a898_5_316_201_1166_190_0.jpg](images/01963fe1-63c8-75c7-bd0a-109fa4a0a898_5_316_201_1166_190_0.jpg)
194
+
195
+ Figure 1: Visualization of three environments in Dexterous Gym
196
+
197
+ - Does each component of our method (i.e., decaying noise, high initialization, filtering) contribute to its performance?
198
+
199
+ - In which aspects does our method help?
200
+
201
+ Before answering those questions, we present next the experimental environments and discuss the experimental set-up we used.
202
+
203
+ ### 5.1 Environments and Experimental Set-Up
204
+
205
+ We evaluated our approach on several robot environments in MuJoCo [9] and three complex manipulation tasks proposed in Dexterous Gym [19]. The visualizations of these manipulation tasks are shown in Figure 1. In EggCatchUnderarm and EggCatchOverarm, the task is to throw an object with one dexterous manipulation hand and catch it at a desired position and orientation with the other. The task of PenSpin is rotating a pen without dropping it. The details of these environments are described in Appendix A.
206
+
207
+ We chose these three representative environments from Dexterous Gym [19]. The other environments like BlockCatchUnderarm are simply variants with different objects.
208
+
209
+ Across all the environments, all the hyperparameters such as learning rates, batch size, and network architectures for the actor and critic are kept the same, except for the hyperparameters related to setting the bias. The hyperparameters and a brief discussion about setting them are given in Appendix B. All the experiments are performed over five different random seeds, and the performance is measured by the sum of rewards averaged over 10 episodes.
210
+
211
+ ### 5.2 Results
212
+
213
+ #### 5.2.1 Do our propositions improve the performance w.r.t relevant baselines?
214
+
215
+ To answer this question, we evaluate our method on the environments introduced above. We compare with the following baselines: (1) TD3, (2) MVE-TD3: MVE with TD3, and (3) MA-TD3: MA-BDDPG with TD3.
216
+
217
+ Both MVE [5] and MA-BDDPG [13] are methods that use a dynamics model to generate imaginary transitions for training the critics. Since their source codes are not publicly available, we implemented those methods in TD3 for consistency. For MVE, we set the prediction horizon to 3 which is applicable in all the environments. For MA-BDDPG, we actually implemented a version similar to it, which corresponds to TD3 with only imaginary transitions filtered by uncertainty. It is equivalent to our method without high initialization, decaying noise and the filter favoring higher target $\mathrm{Q}$ values. Similar to the original method proposed by Charlesworth and Montana [19], demonstrations are also used in all the experiments in Dexterous Gym.
218
+
219
+ The training results are shown in Figure 2. Our method outperforms the baselines across the different environments. Especially in complex environments like EggCatchUnderarm, our method achieves a much higher final return and has low variability across different runs. We also notice that combining MVE with TD3 cannot guarantee an improvement in some environments. This might be due to the target Q smoothing in TD3 (clipped noise added to the action when calculating the target Q-values). Similar results for MVE can also be observed in the work by Buckman et al. [14]: their results also show that MVE does not guarantee an improvement when applied to DDPG [18].
220
+
221
+ ![01963fe1-63c8-75c7-bd0a-109fa4a0a898_6_312_226_1159_1072_0.jpg](images/01963fe1-63c8-75c7-bd0a-109fa4a0a898_6_312_226_1159_1072_0.jpg)
222
+
223
+ Figure 2: Results of training in MuJoCo environments and Dexterous Gym
224
+
225
+ #### 5.2.2 Does each component of our method contribute to its performance?
226
+
227
+ To assess the significance of each component in our method, we performed an ablation study. Comparing the results shown in Figure 3(a)-(b), we find that the high initialization is necessary to counteract the effect of the model bias and to encourage exploration at the beginning. Using a decaying noise and selecting imaginary transitions with higher target Q-values realize the ideas of both seeking optimistic state-action pairs during training and exploring in the model. The uncertainty filter guides the learning of the Q-function to emphasize the uncertain regions and thus helps the estimation of Q-values. All the components in synergy improve the training of the critic.
228
+
229
+ #### 5.2.3 In which aspects does our method help?
230
+
231
+ We conjecture that our method helps in three ways: better exploration, better estimation of Q-values, and exploitation of independence among action components. For the first point, we believe that the higher final performance of our method compared to the baselines provides some evidence of the improved exploration.
232
+
233
+ For the second point, we compare our method with a variant of TD3 where critics are updated more frequently. This variant could achieve better Q-estimations and thus better performances in some environments. Due to the page limit, the results and details for the experiments are shown in Appendix C. We can see that simply updating the critic more often is still not sufficient to achieve the performance gains of our method.
234
+
235
+ ![01963fe1-63c8-75c7-bd0a-109fa4a0a898_7_313_232_1147_342_0.jpg](images/01963fe1-63c8-75c7-bd0a-109fa4a0a898_7_313_232_1147_342_0.jpg)
236
+
237
+ Figure 3: Figures (a) and (b) show the ablation study. Figure (c) shows the results of training in the environments defined in Section 5.2.3
238
+
239
+ For the last point, we believe that our method also exploits potential action independence (i.e., in some states, parts of the actions do not have an impact on the next environment state). This kind of independence is common in robotic tasks: e.g., in a walking robot, the actions of any limb that does not touch the ground may have less effect on the next states, or, more concretely, in EggCatchUnderarm, once a hand has thrown the ball, the actions of this hand are no longer important. To test this conjecture, we augment HalfCheetah by doubling its action space $\mathcal{A}$ to an enlarged action space $\mathcal{A} \times \mathcal{A}$ . The transitions in this augmented environment depend only on the first or second part of the enlarged actions, and this dependence switches from one part to the other every $H$ time steps. Following this pattern, part of the actions has no direct effect at any given time; a sketch of such an environment wrapper is given below. The results of training in this artificial environment with different switching frequencies are shown in Figure 3(c). Although the augmented environment has a larger action space, our method can still improve on the performance.
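+ A minimal sketch, under the assumption of a standard Gym interface, of how the action space of an environment could be doubled so that only one half is applied at a time; the class name and switching bookkeeping are illustrative, not the authors' implementation:

```python
import gym
import numpy as np

class DuplicatedActionWrapper(gym.ActionWrapper):
    """Double the action space; only one half affects the environment,
    switching halves every `switch_every` steps."""
    def __init__(self, env, switch_every):
        super().__init__(env)
        low, high = env.action_space.low, env.action_space.high
        self.action_space = gym.spaces.Box(
            np.concatenate([low, low]), np.concatenate([high, high]), dtype=np.float32)
        self.switch_every = switch_every
        self.t = 0

    def action(self, a):
        d = a.shape[-1] // 2
        use_first_half = (self.t // self.switch_every) % 2 == 0
        self.t += 1
        return a[:d] if use_first_half else a[d:]
```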
240
+
241
+ ## 6 Limitations
242
+
243
+ The limitations of our work mainly include these three aspects:
244
+
245
+ (1) Using too large or too small a bias in the critic will hurt the performance of our method. Although our heuristic for setting the bias is quite robust (except for HalfCheetah), it is not clear how to determine it analytically. A more theoretical analysis may be required for applying it in a more general situation.
246
+
247
+ (2) While much work has investigated exploration strategies in model-free algorithms, in our method, a simple truncated Gaussian noise is applied to generate imaginary transitions. A more sophisticated way of choosing exploratory actions might further improve our approach.
248
+
249
+ (3) Although our proposed method significantly improves the sample efficiency and the performance, especially on the harder tasks of Dexterous Gym, we believe that more effort may still be needed to fully run the algorithm on a real robot. Potential avenues to make the approach even more practical could be to use it in combination with sim2real methods [20, 21], exploit any extra a priori known information such as symmetries [22], or exploit the learned model in some other ways [23], for instance.
250
+
251
+ ## 7 Conclusion
252
+
253
+ We propose a method called MAMF, which leverages a dynamics model to help train the critic in a model-free algorithm. One key novelty is to generate imaginary data with exploratory actions. Our experiments demonstrate the sample efficiency and performance improvements of this method even in some complex manipulation tasks. We believe that our method is a step forward towards making DRL practical on real robots and shows a promising way of solving those complex tasks. Although our proposition is implemented with TD3, it is actually independent of this DRL algorithm. Thus, one interesting future work would be to adapt and combine our proposition with other model-free algorithms.
254
+
255
+ References
256
+
257
+ [1] OpenAI, I. Akkaya, M. Andrychowicz, M. Chociej, M. Litwin, B. McGrew, A. Petron, A. Paino, M. Plappert, G. Powell, R. Ribas, J. Schneider, N. Tezak, J. Tworek, P. Welinder, L. Weng, Q. Yuan, W. Zaremba, and L. Zhang. Solving rubik's cube with a robot hand. CoRR, abs/1910.07113, 2019. URL http://arxiv.org/abs/1910.07113.
258
+
259
+ [2] K. Lowrey, A. Rajeswaran, S. M. Kakade, E. Todorov, and I. Mordatch. Plan online, learn offline: Efficient learning and exploration via model-based control. CoRR, abs/1811.01848, 2018. URL http://arxiv.org/abs/1811.01848.
260
+
261
+ [3] A. Nagabandi, K. Konolige, S. Levine, and V. Kumar. Deep dynamics models for learning dexterous manipulation. CoRR, abs/1909.11652, 2019. URL http://arxiv.org/abs/1909.11652.
262
+
263
+ [4] R. S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. SIGART Bull., 2(4):160-163, jul 1991. ISSN 0163-5719. doi:10.1145/122344.122377. URL https://doi.org/10.1145/122344.122377.
264
+
265
+ [5] V. Feinberg, A. Wan, I. Stoica, M. I. Jordan, J. E. Gonzalez, and S. Levine. Model-based value estimation for efficient model-free reinforcement learning. CoRR, abs/1803.00101, 2018. URL http://arxiv.org/abs/1803.00101.
266
+
267
+ [6] I. Clavera, V. Fu, and P. Abbeel. Model-augmented actor-critic: Backpropagating through paths. CoRR, abs/2005.08068, 2020. URL https://arxiv.org/abs/2005.08068.
268
+
269
+ [7] P. Shyam, W. Jaskowski, and F. Gomez. Model-based active exploration. CoRR, abs/1810.12162, 2018. URL http://arxiv.org/abs/1810.12162.
270
+
271
+ [8] S. Fujimoto, H. van Hoof, and D. Meger. Addressing function approximation error in actor-critic methods. CoRR, abs/1802.09477, 2018. URL http://arxiv.org/abs/1802.09477.
272
+
273
+ [9] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033, 2012. doi:10.1109/IROS.2012.6386109.
274
+
275
+ [10] S. Gu, T. P. Lillicrap, I. Sutskever, and S. Levine. Continuous deep q-learning with model-based acceleration. CoRR, abs/1603.00748, 2016. URL http://arxiv.org/abs/1603.00748.
276
+
277
+ [11] T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-ensemble trust-region policy optimization. CoRR, abs/1802.10592, 2018. URL http://arxiv.org/abs/1802.10592.
278
+
279
+ [12] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015. URL http://arxiv.org/abs/1502.05477.
280
+
281
+ [13] G. Kalweit and J. Boedecker. Uncertainty-driven imagination for continuous deep reinforcement learning. In S. Levine, V. Vanhoucke, and K. Goldberg, editors, Proceedings of the 1st Annual Conference on Robot Learning, volume 78 of Proceedings of Machine Learning Research, pages 195-206. PMLR, 13-15 Nov 2017. URL https://proceedings.mlr.press/v78/kalweit17a.html.
282
+
283
+ [14] J. Buckman, D. Hafner, G. Tucker, E. Brevdo, and H. Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. CoRR, abs/1807.01675, 2018. URL http://arxiv.org/abs/1807.01675.
284
+
285
+ [15] M. Janner, J. Fu, M. Zhang, and S. Levine. When to trust your model: Model-based policy optimization. CoRR, abs/1906.08253, 2019. URL http://arxiv.org/abs/1906.08253.
286
+
287
288
+
289
+ [16] M. P. Deisenroth and C. E. Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11, page 465-472, Madison, WI, USA, 2011. Omnipress. ISBN 9781450306195.
290
+
291
+ [17] D. Pathak, D. Gandhi, and A. Gupta. Self-supervised exploration via disagreement. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5062-5071. PMLR, 09-15 Jun 2019. URL https://proceedings.mlr.press/v97/pathak19a.html.
292
+
293
+ [18] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In ICLR (Poster), 2016. URL http://arxiv.org/abs/1509.02971.
294
+
295
+ [19] H. Charlesworth and G. Montana. Solving challenging dexterous manipulation tasks with trajectory optimisation and reinforcement learning. CoRR, abs/2009.05104, 2020. URL https://arxiv.org/abs/2009.05104.
296
+
297
+ [20] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. CoRR, abs/1703.06907, 2017. URL http://arxiv.org/abs/1703.06907.
298
+
299
+ [21] OpenAI, M. Andrychowicz, B. Baker, M. Chociej, R. Józefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, and W. Zaremba. Learning dexterous in-hand manipulation. CoRR, abs/1808.00177, 2018. URL http://arxiv.org/abs/1808.00177.
300
+
301
+ [22] Y. Lin, J. Huang, M. Zimmer, Y. Guan, J. Rojas, and P. Weng. Invariant transform experience replay: Data augmentation for deep reinforcement learning. IEEE Robotics and Automation Letters, 5(4):6615-6622, 2020. doi:10.1109/LRA.2020.3013937.
302
+
303
+ [23] P. F. Christiano, Z. Shah, I. Mordatch, J. Schneider, T. Blackwell, J. Tobin, P. Abbeel, and W. Zaremba. Transfer from simulation to real world through learning deep inverse dynamics model. CoRR, abs/1610.03518, 2016. URL http://arxiv.org/abs/1610.03518.
304
+
305
+ ## A Environments
306
+
307
+ EggCatchUnderarm: The agent controls two dexterous manipulation hands in this environment. The goal is to throw the object with one hand from an initial in-hand position and catch the object at a desired pose with the other hand. The observation space has 140 dimensions containing information about the position, orientation, and velocity of the hands and the object. The continuous action space has 52 dimensions, and the actions control the joints of the hands.
308
+
309
+ EggCatchOverarm: The main settings and the goal are the same as in EggCatchUnderarm. The only difference is that the two hands are in the vertical plane instead of the horizontal plane. This environment has the same state space (140 dimensions) and action space (52 dimensions) as EggCatchUnderarm.
310
+
311
+ PenSpin: This is an in-hand manipulation task with the goal of rotating a pen without dropping it. The observation space has 61 dimensions related to position, orientation and velocity of the pen and hand. The action space has 20 dimensions corresponding to the joints of the hand.
312
+
313
+ ## B Hyperparameters
314
+
315
+ We provide all the hyperparameters used in different environments in Table 1 and Table 2.
316
+
317
+ <table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>batch size $N$</td><td>256</td></tr><tr><td>discount $\gamma$</td><td>0.98</td></tr><tr><td>target network smoothing $\tau$</td><td>0.005</td></tr><tr><td>frequency of delayed policy update</td><td>2</td></tr><tr><td>std of exploration noise</td><td>0.1</td></tr><tr><td>std of target policy noise</td><td>0.2</td></tr><tr><td>clipping bound of target policy noise in TD3</td><td>0.5</td></tr><tr><td>decay rate of clipping bound in our method $\rho$</td><td>0.9999996</td></tr><tr><td>number of layers in actors and critics</td><td>3</td></tr><tr><td>learning rate in actors and critics</td><td>3e-4</td></tr><tr><td>number of layers in dynamics model</td><td>4</td></tr><tr><td>learning rate in dynamics model</td><td>1e-4</td></tr><tr><td>number of nodes in each layer</td><td>256</td></tr></table>
318
+
319
+ Table 1: Common hyperparameters in all the environments
320
+
321
+ <table><tr><td>Environment</td><td>pretraining start ${h}_{0}$</td><td>pretraining horizon $h$</td><td>$c$ for the bias</td></tr><tr><td>Reacher</td><td>1000</td><td>5000</td><td>1/6</td></tr><tr><td>Pusher</td><td>5000</td><td>10000</td><td>1/6</td></tr><tr><td>Hopper</td><td>5000</td><td>25000</td><td>1/6</td></tr><tr><td>Walker</td><td>5000</td><td>25000</td><td>1/6</td></tr><tr><td>HalfCheetah</td><td>5000</td><td>25000</td><td>1/30</td></tr><tr><td>Swimmer</td><td>5000</td><td>25000</td><td>1/60</td></tr><tr><td>Dexterous gym</td><td>10000</td><td>25000</td><td>1/6</td></tr></table>
322
+
323
+ Table 2: Different hyperparameters in different environments
324
+
325
+ Regarding the time step at which training of the model and the actor-critic starts, a more complex environment requires more samples before we start training. For the constant $c$ used for setting the bias term, we initially wanted a heuristic choice that requires no hyperparameter tuning across environments. However, we find that the bias with $c = 1/6$ is too optimistic in HalfCheetah and Swimmer, so we decrease it to make it work in these two environments.
326
+
327
+ ![01963fe1-63c8-75c7-bd0a-109fa4a0a898_11_312_225_1162_1074_0.jpg](images/01963fe1-63c8-75c7-bd0a-109fa4a0a898_11_312_225_1162_1074_0.jpg)
328
+
329
+ Figure 4: Results of increasing the frequency of updating the critic
330
+
331
+ ## C Sample efficiency
332
+
333
+ For the previous experiments, the critic is updated once at each time step. We compare our method with a variant of TD3 which updates the critic more frequently. The results of updating the critic twice at each time step are shown in Figure 4. We can see that simply updating the critic more often cannot achieve a performance similar to our method. Including our method's imaginary transitions in the training of the critics is more efficient than simply using more samples from the replay buffer.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/7CrXRhmzVVR/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,249 @@
1
+ § SOLVING COMPLEX MANIPULATION TASKS WITH MODEL-ASSISTED MODEL-FREE REINFORCEMENT LEARNING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: In this paper, we propose a novel deep reinforcement learning approach for improving the sample efficiency of a model-free actor-critic method by using a learned model to encourage exploration. The basic idea consists in generating imaginary transitions with noisy actions, which can be used to update the critic. To counteract the model bias, we introduce a high initialization for the critic and two filters for the imaginary transitions. Finally, we evaluate our approach with the TD3 algorithm on different robotic tasks and demonstrate that it achieves a better performance with higher sample efficiency than several other model-based and model-free methods.
12
+
13
+ Keywords: Reinforcement learning, Data augmentation, Imaginary exploration, Optimistic initialization
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Deep reinforcement learning (DRL) has shown its potential in solving difficult robotic tasks, especially when facing complex dynamics and contact-rich environments. For instance, dexterous manipulation tasks like rotating a cube or ball to a desired position and orientation can be solved with state-of-the-art model-free methods [1] or model-based methods [2, 3]. However, the inherent issues related to the stability and sample efficiency of DRL algorithms still make them challenging to apply on real robots. To tackle those issues, various approaches have been investigated. In this paper, we specifically investigate how a learned model can help accelerate a model-free method. Previous research in this direction (see Section 2) used a learned model for data augmentation [4], to improve critic estimation [5], for gradient computation [6], or for guiding exploration in the true environment [7].
18
+
19
+ In contrast to previous model-based methods, we propose to "explore" inside the learned model, which is arguably safer than in the real environment. In the learned model, noisy actions are executed in states visited in the true environment to obtain imaginary transitions, which can be used to compute imaginary target Q-values for updating the Q-function of the current policy. With a perfect transition model, this approach would help accelerate learning the Q-function. However, since both the learned model and the estimated Q-function may be incorrect, directly using those imaginary transitions may lead to potential issues. To counteract them, we introduce several techniques: high Q-value initialization and filtering of imaginary transitions to favor optimistic targets that are uncertain.
20
+
21
+ Intuitively, with a higher initialization, we approximate the "optimism in face of uncertainty" principle when evaluating random actions. In the spirit of this principle, we only keep the imaginary transitions that lead to higher evaluations than the real ones. Moreover, to avoid unnecessary updates, those imaginary transitions are selected with higher probability if there is a larger uncertainty in the corresponding Q-values. Note that since the noisy actions are performed in the learned model, our method is not a true exploration strategy. However, if such imaginary targets are indeed used to update the Q-function, these can lead to higher
22
+
23
+ evaluations of the corresponding actions, which would later favor selecting them in the true environment. In this paper, we implement those techniques in the TD3 algorithm (see Section 3) [8]. However, they may be beneficial in other DRL algorithms as well.
24
+
25
+ Contributions: We propose a novel approach to exploit a learned model in DRL (see Section 4). We validate the proposed method and show that it outperforms relevant state-of-the-art algorithms in MuJoCo [9] robot environments and especially in some complex manipulation tasks (see Section 5). Moreover, we analyze and discuss the different proposed techniques.
26
+
27
+ § 2 RELATED WORK
28
+
29
+ Researchers have explored various methods for combining a learned model with model-free reinforcement learning algorithms. They can mainly be divided into four categories: (1) model-based data augmentation, (2) model-based value estimation, (3) analytic gradient calculation, and (4) model-guided exploration.
30
+
31
+ Dyna [4] is a typical architecture for learning a dynamics model with the true experience and using the dynamics model to generate imaginary data for training a value function and a policy. Gu et al. [10] introduce a local linear model for representing the dynamics and show the improvement of using the imaginary rollouts with this model. ME-TRPO [11] is a method leveraging an ensemble dynamics model in a model-free reinforcement learning algorithm TRPO [12]. Another work, MA-BDDPG [13], aims to alleviate the effects of model bias by considering uncertainty when using imaginary transitions stored in a replay buffer. The uncertainty of imaginary transitions is measured as the variance of an ensemble of critics. In contrast to this method, we use the disagreement between the two target critics in TD3 [8] as an uncertainty measure and generate imaginary transitions online instead of maintaining another imaginary replay buffer.
32
+
33
+ Focusing on improving the estimation of the Q-function, Feinberg et al. [5] propose a method called MVE which uses the observed states and a learned dynamics model to simulate for a fixed horizon with the current policy. With this imaginary segment, a better target Q-value (value expansion) is applied in the training of the critic. Instead of using the dynamics model to roll forward for several steps, our method simulates only one step (to reduce the impact of the model error), but not directly with the current policy. The performance of MVE is sensitive to a difficult-to-set hyperparameter, the simulation horizon, which is limited by the quality of the learned model. To overcome this problem, Buckman et al. [14] use a weighted sum of value expansions from different horizons and different models. They learn ensemble models to approximate the transition function, reward function, and Q-function. Weights are then assigned to the value estimations of different prediction horizons according to the variance from those models. To ensure a monotonic improvement under model bias, Janner et al. [15] provide a theoretical analysis of how to choose the simulation horizon.
34
+
35
+ Instead of using the learned model as an environment simulator, researchers have also exploited its differentiability. Deisenroth and Rasmussen [16] propose a method called PILCO to learn a probabilistic dynamics model and use it in policy search by calculating the analytical gradient of the objective with respect to the policy parameters. Clavera et al. [6] further extend the idea and train a bootstrap ensemble probabilistic dynamics model.
36
+
37
+ As for guiding the exploration, a natural idea is to try to explore more in regions where the learned model is uncertain. For instance, Pathak et al. [17] formulate the disagreement across an ensemble model as an intrinsic reward. Another example is the work of Shyam et al. [7] in which they measure the novelty of state-action pairs with a learned model and use this novelty as the objective of an exploration Markov Decision Process (MDP) to find an exploratory policy. In contrast to these methods, we influence the exploration in the true environment by trying noisy actions in the learned model and setting a high initialization for the critics.
38
+
39
+ § 3 BACKGROUND
40
+
41
+ A Markov Decision Process (MDP) is composed of a set of states $\mathcal{S}$ , a set of actions $\mathcal{A}$ , a transition function $T : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{P}\left( \mathcal{S}\right)$ (with $\mathcal{P}\left( \mathcal{S}\right)$ denoting the set of probability distributions over $\mathcal{S}$ ), a reward function $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ , and a distribution over initial states $\mu \in \mathcal{P}\left( \mathcal{S}\right)$ . Given a deterministic policy $\pi : \mathcal{S} \rightarrow \mathcal{A}$ , the value function ${V}^{\pi }\left( s\right) = {\mathbb{E}}_{\pi }\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}{r}_{t} \mid {s}_{0} = s}\right\rbrack$ is defined as the expected discounted cumulative reward an agent will receive starting from a feasible state $s$ and following the policy. The discount factor is $\gamma \in \left\lbrack {0,1}\right\rbrack$ . Solving this MDP means finding an optimal policy ${\pi }^{ * }$ that maximizes the expected value function: ${\pi }^{ * } = \arg \mathop{\max }\limits_{\pi }{\mathbb{E}}_{\mu }\left\lbrack {{V}^{\pi }\left( s\right) \mid s \sim \mu }\right\rbrack$ . To find this optimal policy, we often need an action-value function ${Q}^{\pi }\left( {s,a}\right) = {\mathbb{E}}_{\pi }\left\lbrack {r\left( {s,a}\right) + \gamma {V}^{\pi }\left( {s}^{\prime }\right) }\right\rbrack$ , which corresponds to the expected discounted sum of rewards obtained by executing action $a$ in state $s$ and acting according to policy $\pi$ thereafter. Since we focus on robotic tasks, we only consider deterministic MDPs in this work, although our approach could certainly be applied in stochastic settings as well.
42
+
43
+ Deep Deterministic Policy Gradient (DDPG) DDPG [18] is a DRL algorithm with an actor-critic structure for solving an MDP with continuous state and action spaces. The policy $\pi$ (actor) and its Q-function ${Q}^{\pi }\left( {s,a}\right)$ (critic) are approximated by neural networks parameterized by $\theta$ and $\phi$ respectively. In DDPG, the actor interacts with the environment generating transitions that are stored in a replay buffer. At each training step, a mini-batch $\left\{ {\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right) \mid i = 1,\cdots ,N}\right\}$ is sampled from the replay buffer and used for updating the actor’s parameters $\theta$ according to the deterministic policy gradient:
44
+
45
+ $$
46
+ {\nabla }_{\theta }\mathcal{L}\left( \pi \right) = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\nabla }_{a}Q\left( {{s}_{i},a \mid \phi }\right) { \mid }_{a = \pi \left( {{s}_{i} \mid \theta }\right) }{\nabla }_{\theta }\pi \left( {{s}_{i} \mid \theta }\right) , \tag{1}
47
+ $$
48
+
49
+ while the critic’s parameters $\phi$ are updated to minimize the following loss function:
50
+
51
+ $$
52
+ \mathcal{L}\left( Q\right) = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\left( Q\left( {s}_{i},{a}_{i} \mid \phi \right) - {y}_{i}\right) }^{2} \tag{2}
53
+ $$
54
+
55
+ where ${y}_{i}$ is the target Q-value for the sampled transition $\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right)$ :
56
+
57
+ $$
58
+ {y}_{i} = {r}_{i} + {\gamma Q}\left( {{s}_{i + 1},\pi \left( {{s}_{i + 1} \mid {\theta }^{\prime }}\right) \mid {\phi }^{\prime }}\right) . \tag{3}
59
+ $$
60
+
61
+ To improve the learning stability, the target Q-value is calculated with a target Q function $Q\left( {\cdot , \cdot \mid {\phi }^{\prime }}\right)$ and a target actor $\pi \left( {\cdot \mid {\theta }^{\prime }}\right)$ . The target networks $\left( {{\phi }^{\prime },{\theta }^{\prime }}\right)$ are initialized to the same parameters as the original networks $\left( {\phi ,\theta }\right)$ , but are then updated slowly towards the original ones with ${\phi }^{\prime } \leftarrow \left( {1 - \tau }\right) {\phi }^{\prime } + {\tau \phi }$ and ${\theta }^{\prime } \leftarrow \left( {1 - \tau }\right) {\theta }^{\prime } + {\tau \theta }$ , where $\tau \in \left( {0,1}\right)$ is a hyperparameter.
62
+
63
+ Twin Delayed DDPG (TD3) TD3 [8] improves DDPG with three tricks: clipped double-Q learning, target policy smoothing, and delayed policy update. The first trick aims to prevent overestimation by learning two critics $Q\left( {\cdot , \cdot \mid {\phi }_{1}}\right) ,Q\left( {\cdot , \cdot \mid {\phi }_{2}}\right)$ (with their corresponding target critics $Q\left( {\cdot , \cdot \mid {\phi }_{1}^{\prime }}\right) ,Q\left( {\cdot , \cdot \mid {\phi }_{2}^{\prime }}\right)$ ). When calculating the target Q-value, the target critic with smaller value is used for training. The second trick aims to smooth the objective function by injecting a truncated Gaussian noise $\epsilon$ to the output of the target policy. Note that the perturbed actions are clipped to ensure they remain feasible. For legibility, we do not write this step in our equations. With these two tricks, the target Q-value becomes:
64
+
65
+ $$
66
+ {y}_{i} = {r}_{i} + \gamma \min \left\{ {Q\left( {{s}_{i + 1},\pi \left( {{s}_{i + 1} \mid {\theta }^{\prime }}\right) + \epsilon \mid {\phi }_{1}^{\prime }}\right) ,Q\left( {{s}_{i + 1},\pi \left( {{s}_{i + 1} \mid {\theta }^{\prime }}\right) + \epsilon \mid {\phi }_{2}^{\prime }}\right) }\right\} . \tag{4}
67
+ $$
68
+
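+ A minimal sketch of the target computation in Equation (4), with target policy smoothing and clipped double-Q; the hyperparameter names (`sigma`, `noise_clip`, `a_max`) are our own placeholders.

```python
import torch

@torch.no_grad()
def td3_target(r, s_next, actor_targ, critic1_targ, critic2_targ,
               gamma=0.99, sigma=0.2, noise_clip=0.5, a_max=1.0):
    # Target policy smoothing: add clipped Gaussian noise to the target action,
    # then clip the perturbed action back into the feasible range.
    eps = (torch.randn_like(actor_targ(s_next)) * sigma).clamp(-noise_clip, noise_clip)
    a_next = (actor_targ(s_next) + eps).clamp(-a_max, a_max)

    # Clipped double-Q: take the smaller of the two target critics (Equation (4)).
    q_next = torch.min(critic1_targ(s_next, a_next), critic2_targ(s_next, a_next))
    return r + gamma * q_next
```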
69
+ Lastly, the delayed policy update enforces that the actor be updated at a lower frequency than the critic. With those tricks, it has been empirically observed that the performance and stability of learning the Q-function are substantially improved.
70
+
71
+ § 4 METHODOLOGY
72
+
73
+ Our proposed algorithm extends TD3 to use a learned model to enhance the update of its critics.
74
+
75
+ Algorithm 1 MAMF
+
+ 1: initialize critics $Q\left( {s,a \mid {\phi }_{1}}\right) ,Q\left( {s,a \mid {\phi }_{2}}\right)$
+ 2: initialize actor $\pi \left( {s \mid \theta }\right)$ , model $f$ , and empty replay buffer $\mathcal{R}$
+ 3: set the parameters of targets ${\phi }_{1}^{\prime } \leftarrow {\phi }_{1},{\phi }_{2}^{\prime } \leftarrow {\phi }_{2}$ and ${\theta }^{\prime } \leftarrow \theta$
+ 4: initialize $\eta$ and $\rho$
+ 5: start with initial state ${s}_{0} \sim \mu$
+ 6: for $t = 0,\ldots ,h$ do
+ 7: generate a transition with a random policy and keep track of the max reward ${r}^{ * }$
+ 8: save transition $\left( {{s}_{t},{a}_{t},{r}_{t},{s}_{t + 1}}\right)$ in replay buffer $\mathcal{R}$
+ 9: if episode ends then reset the environment ${s}_{t + 1} \sim \mu$ end if
+ 10: if $t > {h}_{0}$ then sample $\left\{ {\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right) \mid i = 1,\cdots ,N}\right\}$ from $\mathcal{R}$ and pretrain $f$ end if
+ 11: end for
+ 12: use max reward ${r}^{ * }$ to set the bias in the critics according to Equation (6)
+ 13: start with initial state ${s}_{0} \sim \mu$
+ 14: for $t = 0,\ldots ,H$ do
+ 15: interact with the environment with the current policy $\pi$ and small noise
+ 16: save transition $\left( {{s}_{t},{a}_{t},{r}_{t},{s}_{t + 1}}\right)$ to replay buffer $\mathcal{R}$
+ 17: if episode ends then reset the environment ${s}_{t + 1} \sim \mu$ end if
+ 18: sample $\left\{ {\left( {{s}_{j},{a}_{j},{r}_{j},{s}_{j + 1}}\right) \mid j = 1,\cdots ,N}\right\}$ from $\mathcal{R}$ and train $f$
+ 19: sample $\left\{ {\left( {{s}_{k},{a}_{k},{r}_{k},{s}_{k + 1}}\right) \mid k = 1,\cdots ,N}\right\}$ from $\mathcal{R}$ and create imaginary transitions
+ 20: $\eta \leftarrow \eta \times \rho$
+ 21: filter the imaginary transitions with the two filters
+ 22: update critic according to Equation (10)
+ 23: if $t \bmod \mathrm{policy\_delay} = 0$ then
+ 24: update actor
+ 25: update target networks
+ 26: end if
+ 27: end for
132
+
133
+ Note that while the dynamics model is not known, we assume that the reward function is known. This assumption is natural in robotics since the reward function is defined by the system designer to guide the learning of the robot.
134
+
135
+ To obtain a dynamics model, we first pretrain a model and use the data generated during this period to set a high initialization for the critics. The dynamics model is then further updated online and used for creating imaginary transitions with noisy actions. The goal in our approach is to train the critics with imaginary transitions, which can either improve the Q-estimation or guide the exploration such that potentially promising actions can be tried in the true environment. To that aim, we apply two filters to the imaginary transitions, since using arbitrary imaginary transitions is generally detrimental due to model bias and Q-value estimation error. The first filter keeps the imaginary transitions with higher target Q-values. The second filter keeps imaginary transitions with a probability that grows with the uncertainty of their Q-values. With all these components, we propose the model-assisted model-free (MAMF) algorithm (see Algorithm 1). To ensure the reproducibility of our results, the source code of MAMF will be released after publication.
136
+
137
+ Next we explain how we learn the dynamics model, how we implement the high initialization, how we perform exploration in the learned model, and how we filter the imaginary transitions.
138
+
139
+ Learning the Dynamics Model. Our algorithm starts by pretraining a dynamics model $f : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$ with samples generated by a random policy (e.g., one that selects actions uniformly at random). Before the updates of $f$ begin, ${h}_{0}$ transitions are first collected. Then, $f$ is repeatedly updated via stochastic mini-batch gradient descent until $h$ transitions have been generated by the random policy, which ends pretraining. After that, the model is still updated with the same procedure, but the transitions are now generated by the current policy. Online DRL training is performed until $H$ transitions have been generated. The model is (pre)trained by minimizing the mean-squared error between predicted next states and true next states computed over a mini-batch $\left\{ {\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right) \mid i = 1,\cdots ,N}\right\}$ :
140
+
141
+ $$
142
+ {L}_{\mathcal{M}} = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\left( f\left( {s}_{i},{a}_{i}\right) - {s}_{i + 1}\right) }^{2}. \tag{5}
143
+ $$
144
+
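+ One (pre)training step of the dynamics model under Equation (5) could look as follows; this is a sketch assuming `f` is a network mapping a state-action pair to a predicted next state and `model_opt` is its optimizer.

```python
import torch

def model_update(f, model_opt, batch):
    s, a, _, s_next = batch  # rewards are not needed to fit the dynamics model
    # Mean-squared error between predicted and true next states (Equation (5)).
    loss = ((f(s, a) - s_next) ** 2).mean()
    model_opt.zero_grad(); loss.backward(); model_opt.step()
    return loss.item()
```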
145
+ High Initialization of Critics. The critics are initialized to optimistic values by changing the bias term in the last layer of the critic networks. This bias is set as follows:
146
+
147
+ $$
148
+ b = \left( {{r}^{ * } \times l}\right) \times c \tag{6}
149
+ $$
150
+
151
+ where ${r}^{ * }$ is the max reward observed during pretraining, $l$ is the max episode length, and $c \in \left\lbrack {0,1}\right\rbrack$ is a hyperparameter controlling how optimistic the high initialization is.
152
+
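+ As an illustration, if the critic's final layer is a standard linear layer, the high initialization of Equation (6) amounts to overwriting that layer's bias; in the sketch below, `last_layer` is a placeholder name for whatever the final layer is called in a given implementation.

```python
import torch

def set_optimistic_bias(critic, r_max, episode_len, c):
    # Equation (6): b = (r_max * episode_len) * c.
    b = (r_max * episode_len) * c
    with torch.no_grad():
        critic.last_layer.bias.fill_(b)  # last_layer: the critic's final linear layer
```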
153
+ Exploration with Learned Model. In the actor-critic architecture of TD3, the critic is trained using transitions $\left\{ {\left( {{s}_{i},{a}_{i},{r}_{i},{s}_{i + 1}}\right) \mid i = 1,\cdots ,N}\right\}$ sampled from the replay buffer. For the same set of states $\left\{ {{s}_{i} \mid i = 1,\cdots ,N}\right\}$ , noise from a truncated normal distribution $\nu \sim \operatorname{clip}\left( {\mathcal{N}\left( {0,1}\right) , - \eta ,\eta }\right)$ with a large clipping bound $\eta$ is applied to the policy action $\pi \left( {{s}_{i} \mid \theta }\right)$ to generate imaginary transitions. The bound $\eta$ is initialized as the max action ${a}_{\max }$ the agent can take in each dimension. After each iteration, the bound for truncating the Gaussian noise decays exponentially with a fixed decay rate $\rho$ . Formally, the imaginary transitions $\left( {{s}_{i},{\widehat{a}}_{i},{\widehat{r}}_{i},{\widehat{s}}_{i + 1}}\right)$ generated by the learned model $f$ and known reward function $r$ are written:
154
+
155
+ $$
156
+ {\widehat{a}}_{i} = \operatorname{clip}\left( {\pi \left( {{s}_{i} \mid \theta }\right) + \nu , - {a}_{\max },{a}_{\max }}\right) ,\;{\widehat{s}}_{i + 1} = f\left( {{s}_{i},{\widehat{a}}_{i}}\right) ,\;{\widehat{r}}_{i} = r\left( {{s}_{i},{\widehat{a}}_{i}}\right) . \tag{7}
157
+ $$
158
+
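+ A sketch of how the imaginary transitions of Equation (7) could be generated, assuming the known reward function is available as `reward_fn`; after each iteration the bound decays as $\eta \leftarrow \eta \times \rho$ .

```python
import torch

def imagine_transitions(s, actor, f, reward_fn, eta, a_max):
    with torch.no_grad():
        # Truncated Gaussian exploration noise with a (decaying) clipping bound eta.
        nu = torch.randn_like(actor(s)).clamp(-eta, eta)
        a_hat = (actor(s) + nu).clamp(-a_max, a_max)   # perturbed, feasible action
        s_next_hat = f(s, a_hat)                       # predicted next state
        r_hat = reward_fn(s, a_hat)                    # known reward function
    return s, a_hat, r_hat, s_next_hat

# After each iteration, the clipping bound decays: eta = eta * rho.
```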
159
+ Filtering. To only use the imaginary transitions whose actions are potentially better than the original ones, these transitions are filtered by their target Q-values. Only the imaginary transitions whose target Q-values are higher than the original ones are considered:
160
+
161
+ $$
162
+ {\widehat{y}}_{i}\left( {{\widehat{r}}_{i},{\widehat{s}}_{i + 1} \mid {\theta }^{\prime },{\phi }_{1}^{\prime },{\phi }_{2}^{\prime }}\right) \geq {y}_{i}\left( {{r}_{i},{s}_{i + 1} \mid {\theta }^{\prime },{\phi }_{1}^{\prime },{\phi }_{2}^{\prime }}\right) \tag{8}
163
+ $$
164
+
165
+ where ${\widehat{y}}_{i}$ and ${y}_{i}$ are calculated by Equation (4).
166
+
167
+ The remaining imaginary transitions are filtered with respect to the uncertainty in the Q-value estimates. This uncertainty can be measured as the disagreement of the two target critics in TD3, which can be expressed as the absolute value of the difference of the target Q-values given by the two target critic networks:
168
+
169
+ $$
170
+ \Delta = \left| {Q\left( {{\widehat{s}}_{i + 1},\pi \left( {{\widehat{s}}_{i + 1} + \epsilon \mid {\theta }^{\prime }}\right) \mid {\phi }_{1}^{\prime }}\right) - Q\left( {{\widehat{s}}_{i + 1},\pi \left( {{\widehat{s}}_{i + 1} + \epsilon \mid {\theta }^{\prime }}\right) \mid {\phi }_{2}^{\prime }}\right) }\right| . \tag{9}
171
+ $$
172
+
173
+ This difference is then normalized by the max difference observed so far in the sampled transitions to obtain a ratio $P = \frac{\Delta }{\max \Delta } \in \left\lbrack {0,1}\right\rbrack$ . With probability $P$ , an imaginary transition is selected for training the critics; otherwise it is dropped. Therefore, imaginary transitions with larger uncertainty have a higher chance of being kept. The number of imaginary transitions remaining after the two filters is denoted by $n$ .
174
+
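+ A sketch of the two filters applied to an imaginary mini-batch; here `y` and `y_hat` denote the target values of Equation (4) for the real and imaginary transitions, `q1_targ_next` and `q2_targ_next` are the two target critics evaluated at the imaginary next state as in Equation (9), and `delta_max` tracks the largest disagreement observed so far. All names are our own placeholders.

```python
import torch

def filter_imaginary(y, y_hat, q1_targ_next, q2_targ_next, delta_max):
    # Filter 1 (Equation (8)): keep imaginary transitions whose target Q-value
    # is at least as high as that of the original transition.
    keep = y_hat >= y

    # Filter 2 (Equation (9)): keep a transition with probability proportional
    # to the disagreement of the two target critics at the imaginary next state.
    delta = (q1_targ_next - q2_targ_next).abs()
    delta_max = torch.maximum(delta_max, delta.max())
    p_keep = delta / delta_max
    keep = keep & (torch.rand_like(p_keep) < p_keep)

    return keep, delta_max  # boolean mask over the imaginary mini-batch
```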
175
+ Finally, the loss for the critics is composed of two parts, the original loss and an additional loss obtained from the filtered imaginary transitions:
176
+
177
+ $$
178
+ {L}_{\phi } = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{\left( Q\left( {s}_{i},{a}_{i} \mid \phi \right) - {y}_{i}\right) }^{2} + \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( Q\left( {s}_{i},{\widehat{a}}_{i} \mid \phi \right) - {\widehat{y}}_{i}\right) }^{2} \tag{10}
179
+ $$
180
+
181
+ where the target value ${y}_{i}$ is calculated with Equation (4) and the target values ${\widehat{y}}_{i}$ for imaginary transitions follow almost the same calculation but with the action, next state, and reward substituted by their imaginary counterparts.
182
+
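+ Given the boolean mask returned by the filters, the combined critic loss of Equation (10) is the sum of the two mean-squared terms; a minimal sketch:

```python
import torch

def critic_loss(critic, s, a, y, a_hat, y_hat, keep):
    # Original TD3 critic loss over the N real transitions.
    loss_real = ((critic(s, a) - y) ** 2).mean()
    # Additional loss over the n filtered imaginary transitions (Equation (10)).
    if keep.any():
        loss_imag = ((critic(s[keep], a_hat[keep]) - y_hat[keep]) ** 2).mean()
    else:
        loss_imag = torch.zeros((), device=y.device)
    return loss_real + loss_imag
```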
183
+ § 5 EXPERIMENTAL RESULTS
184
+
185
+ Our experiments are designed to answer the following questions:
186
+
187
+ * Do our propositions improve the performance with respect to relevant baselines?
188
+
189
190
+
191
+ Figure 1: Visualization of three environments in Dexterous Gym
192
+
193
+ * Does each component of our method (i.e., decaying noise, high initialization, filtering) contribute to its performance?
194
+
195
+ * In which aspects does our method help?
196
+
197
+ Before answering those questions, we next present the experimental environments and discuss the experimental set-up we used.
198
+
199
+ § 5.1 ENVIRONMENTS AND EXPERIMENTAL SET-UP
200
+
201
+ We evaluated our approach on several robot environments in MuJoCo [9] and three complex manipulation tasks proposed in Dexterous Gym [19]. The visualizations of these manipulation tasks are shown in Figure 1. In EggCatchUnderarm and EggCatchOverarm, the task is throwing and catching an object at a desired position and orientation with two dexterous hands. The task in PenSpin is rotating a pen without dropping it. The details of these environments are described in Appendix A.
202
+
203
+ We chose these three representative environments from Dexterous Gym [19]. The other environments like BlockCatchUnderarm are simply variants with different objects.
204
+
205
+ Across all the environments, all the hyperparameters such as learning rates, batch size, and network architectures for the actor and critic are kept the same, except for the hyperparameters related to setting the bias. The hyperparameters and a brief discussion about setting them are given in Appendix B. All the experiments are performed over five different random seeds and the performance is measured by the sum of rewards averaged over 10 episodes.
206
+
207
+ § 5.2 RESULTS
208
+
209
+ § 5.2.1 DO OUR PROPOSITIONS IMPROVE THE PERFORMANCE W.R.T. RELEVANT BASELINES?
210
+
211
+ To answer this question, we evaluate our method on the environments introduced above. We compare with the following baselines: (1) TD3, (2) MVE-TD3: MVE with TD3, and (3) MA-TD3: MA-BDDPG with TD3.
212
+
213
+ Both MVE [5] and MA-BDDPG [13] are methods that use a dynamics model to generate imaginary transitions for training the critics. Since their source codes are not publicly available, we implemented those methods on top of TD3 for consistency. For MVE, we set the prediction horizon to 3, which is applicable in all the environments. For MA-BDDPG, we implemented a close variant, which corresponds to TD3 with only the imaginary transitions filtered by uncertainty. It is equivalent to our method without high initialization, decaying noise, and the filter favoring higher target Q-values. Similar to the original method proposed by Charlesworth and Montana [19], demonstrations are also used in all the experiments in Dexterous Gym.
214
+
215
+ The training results are shown in Figure 2. Our method outperforms the baselines across the different environments. Especially in complex environments like EggCatchUnderarm, our method achieves a much higher final return and has low variability across different runs. We also notice that combining MVE with TD3 cannot guarantee an improvement in some environments. This might be due to the target Q smoothing in TD3 (clipped noise added to the action when calculating the target Q-values). Similar results for MVE can also be observed in the work by Buckman et al. [14]. Their results also show that MVE does not guarantee an improvement when applied on top of DDPG [18].
216
+
217
218
+
219
+ Figure 2: Results of training in MuJoCo environments and Dexterous Gym
220
+
221
+ § 5.2.2 DOES EACH COMPONENT OF OUR METHOD CONTRIBUTE TO ITS PERFORMANCE?
222
+
223
+ To demonstrate the significance of each component in our method, we performed an ablation study. Comparing the results shown in Figure 3(a)-(b), we find that high initialization is necessary to counteract the effect of the model bias and to encourage exploration at the beginning of training. Using decaying noise and selecting imaginary transitions with higher target Q-values realizes both ideas of seeking optimistic state-actions during training and of exploring in the model. The uncertainty-based filter guides the learning of the Q-function to emphasize the uncertain regions and thus helps the estimation of Q-values. All the components in synergy improve the training of the critic.
224
+
225
+ § 5.2.3 IN WHICH ASPECTS DOES OUR METHOD HELP?
226
+
227
+ We conjecture that our method helps in three ways: better exploration, better estimation of Q-values, and exploitation of independence among action components. For the first point, we believe that the higher final performance of our method compared to the baselines provides some evidence of the improved exploration.
228
+
229
+ For the second point, we compare our method with a variant of TD3 where critics are updated more frequently. This variant could achieve better Q-estimations and thus better performances in some environments. Due to the page limit, the results and details for the experiments are shown in Appendix C. We can see that simply updating the critic more often is still not sufficient to achieve the performance gains of our method.
230
+
231
232
+
233
+ Figure 3: Figures (a) and (b) show the ablation study. Figure (c) shows the results of training in the environments defined in Section 5.2.3.
234
+
235
+ For the last point, we believe that our method also exploits potential action independence (i.e., in some states, parts of the actions do not have an impact on the next environment state). This kind of independence is common in robotic tasks: e.g., in a walking robot, the actions of any limb that does not touch the ground may have less effect on the next states, or more concretely, in EggCatchUnderarm, once a hand has thrown a ball, the actions of this hand are not important anymore. To test this conjecture, we augment HalfCheetah by doubling its action space $\mathcal{A}$ to an enlarged action space $\mathcal{A} \times \mathcal{A}$ . The transitions in this augmented environment depend only on the first or second part of the enlarged actions. This dependence switches from one part to the other every $H$ time steps. Following this pattern, part of the actions has no direct effect. The results of training in this artificial environment with different switching frequencies are shown in Figure 3(c). Although the augmented environment has a larger action space, our method still improves on the performance.
236
+
237
+ § 6 LIMITATIONS
238
+
239
+ The limitations of our work mainly include these three aspects:
240
+
241
+ (1) Using too large or too small a bias in the critic will hurt the performance of our method. Although our heuristic for setting the bias is quite robust (except for HalfCheetah), it is not clear how to determine it analytically. A more theoretical analysis may be required to apply it in more general situations.
242
+
243
+ (2) While much work has investigated exploration strategies in model-free algorithms, in our method, a simple truncated Gaussian noise is applied to generate imaginary transitions. A more sophisticated way of choosing exploratory actions might further improve our approach.
244
+
245
+ (3) Although our proposed method significantly improves the sample efficiency and the performance, especially on the harder tasks of Dexterous Gym, we believe that more effort may still be needed to fully run the algorithm on a real robot. Potential avenues to make the approach even more practical could be to use it in combination with sim2real methods [20, 21], to exploit any extra a priori known information such as symmetries [22], or to exploit the learned model in other ways [23], for instance.
246
+
247
+ § 7 CONCLUSION
248
+
249
+ We propose a method called MAMF, which leverages a dynamics model to help train the critic in a model-free algorithm. One key novelty is to generate imaginary data with exploratory actions. Our experiments demonstrate the sample efficiency and performance improvements of this method, even in some complex manipulation tasks. We believe that our method is a step forward towards making DRL practical on real robots and shows a promising way of solving those complex tasks. Although our proposition is implemented with TD3, it is actually independent of this DRL algorithm. Thus, an interesting direction for future work would be to adapt and combine our proposition with other model-free algorithms.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/7JVNhaMbZUu/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,285 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Particle-Based Score Estimation for State Space Model Learning in Autonomous Driving
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Multi-object state estimation is a fundamental problem for robotic applications where a robot must interact with other moving objects. Typically, other objects' relevant state features are not directly observable, and must instead be inferred from observations. Particle filtering can perform such inference given approximate transition and observation models. However, these models are often unknown a priori, yielding a difficult parameter estimation problem since observations jointly carry transition and observation noise. In this work, we consider learning maximum-likelihood parameters using particle methods. Recent methods addressing this problem typically differentiate through time in a particle filter, which requires workarounds for the non-differentiable resampling step that yield biased or high-variance gradient estimates. By contrast, we exploit Fisher's identity to obtain a particle-based approximation of the score function (the gradient of the log likelihood) that yields a low variance estimate while only requiring stepwise differentiation through the transition and observation models. We apply our method to real data collected from autonomous vehicles (AVs) and show that it learns better models than existing techniques and is more stable in training, yielding an effective smoother for tracking the trajectories of vehicles around an AV.
12
+
13
+ Keywords: Autonomous Driving, Particle Filtering, Self-supervised Learning
14
+
15
+ ## 1 Introduction
16
+
17
+ Multi-object state estimation is a fundamental problem in settings where a robot must interact with other moving objects, since their state is directly relevant for decision making. Typically, other objects' relevant state features are not directly observable. Instead, the robot must infer them from a stream of observations it receives via a perception system. For example, an autonomous vehicle (AV) selects actions based on the state of nearby road users. However, such road users are only partially observed, owing to limited field of view, occlusions, and imperfections in the AV's sensors and perception systems. Such partial observability negatively affects many downstream tasks in a robot's behavioural stack that depend on observations, e.g., action planning.
18
+
19
+ Addressing partial observability requires sequential state estimation, to which Bayesian filtering offers a generic probabilistic approach. In particular, sequential Monte Carlo methods, also known as particle filtering, have been successfully applied to state estimation in many robotics applications [1]. However, Bayesian filters require models that reasonably approximate the transition and observation models of a state-space model (SSM). In some special cases, these models can be derived analytically from first principles, e.g., when the physical dynamics are well understood, or by modeling a sensor's physical characteristics. In many real-world applications, however, these models cannot be specified analytically. For example, the transition model may encode complicated motion dynamics and environmental physics. In multi-agent settings, other agents' behaviour must also be modelled. Modelling observations is also difficult. Modern perception systems often involve multiple stages and combine information from multiple sensors, making observation models practically impossible to specify by hand. By contrast, collecting observations from a robotic system is relatively easy and cheap. We are interested, therefore, in algorithms that can leverage such observations to learn transition and observation models in a self-supervised fashion, and yield an effective particle smoother. Learned transition and observation models can also be independently useful for other applications, such as the evaluation of AVs by simulating realistic observations.
20
+
21
+ ---
22
+
23
+ *These authors contributed equally to this work.
24
+
25
+ ---
26
+
27
+ In this work, we propose Particle Filtering-Based Score Estimation using Fisher's Identity (PF-SEFI), a method for jointly learning maximum-likelihood parameters of both the transition and observation models of an SSM. Unlike many recently proposed methods [2, 3, 4, 5, 6, 7], our approach avoids differentiable approximations of the resampling step. We achieve this by revisiting a methodology originally proposed in statistics $\left\lbrack {8,9}\right\rbrack$ that relies on a particle approximation of the score, i.e., the gradient of the log likelihood of observation sequences, obtained through Fisher's identity. This only requires differentiating through the transition and observation models. Unfortunately, a direct particle approximation of this identity provides a high variance estimate of the score. While [8] propose an alternative low variance estimate, it admits a $\mathcal{O}\left( {N}^{2}\right)$ cost, where $N$ is the number of particles. Furthermore, these methods compute and store the gradient of the marginal log-likelihood with respect to model parameters for each particle. This requires computing Jacobian matrices, which are slow to compute using automatic differentiation tools such as TensorFlow and PyTorch $\left\lbrack {{10},{11}}\right\rbrack$ which rely on Jacobian-vector products. This makes these methods impractical for large models. By contrast, PF-SEFI is a simple scalable $\mathcal{O}\left( N\right)$ variant with only negligible bias. PF-SEFI marginalises over particles before computing gradients, allowing automatic differentiation tools to make use of efficient Jacobian-vector product operations, making it significantly faster and allowing us to scale to larger models. To the best of our knowledge, previous particle methods estimating the score have been limited to SSMs with few parameters, whereas we apply PF-SEFI to neural network models with thousands of parameters.
28
+
29
+ We apply PF-SEFI to jointly learn transition and observation models for tracking multiple objects around an AV, using a large set of noisy trajectories, containing almost 10 hours of road-user trajectories observed by an AV. We show that PF-SEFI learns an SSM that yields an effective object tracker as measured by average displacement and yaw errors. We compare the learned observation model to one trained through supervised learning on a dataset of manually labelled trajectories, and show that PF-SEFI yields a better model (as measured by log-likelihood on ground-truth labels) even though it requires no labels for training. Finally, we compare PF-SEFI to a number of existing particle methods for jointly learning transition and observation models and show that it learns better models and is more stable to train.
30
+
31
+ ## 2 Related Work
32
+
33
+ Particle filters are widely used for state estimation in non-linear non-Gaussian SSMs where no closed form solution is available; see e.g., [12] for a survey. The original bootstrap particle filter [13] samples particles at each time step using the transition density; these particles are then reweighted according to their conditional likelihood, which measures their "fitness" w.r.t. the available observation. Particles with low weights are then eliminated while particles with high weights are replicated to focus computational efforts on regions of high probability mass. Compared to many newer methods, such as the auxiliary particle filter [14], the bootstrap particle filter only requires sampling from the transition density, not its evaluation at arbitrary values, which is not possible for the compositional transition density used in this work.
34
+
35
+ In most practical applications, the SSM has unknown parameters that must be estimated together with the latent state posterior (see [9] for a review). Simply extending the latent space to include the unknown parameters suffers from insufficient parameter space exploration [15]. While particle filters can consistently estimate the likelihood for fixed model parameters, a core challenge is that the resulting likelihood estimate is discontinuous in the model parameters due to the resampling step, hence complicating its optimization; see e.g. [6, Figure 1] for an illustration.
36
+
37
+ Instead, the score vector can be computed using Fisher's identity [8]. However, as shown in [8], performance degrades quickly for longer sequences if a standard particle filter is used, due to the path degeneracy problem: repeated resampling of particles and their ancestors will leave few or even just one remaining ancestor path for earlier timesteps, resulting in unbiased, but very high variance estimates. Methods for overcoming this limitation exist $\left\lbrack {8,{16},{17}}\right\rbrack$ , but with requirements making them unsuitable in this work. Poyiadjis et al. [8] store gradients separately for each particle, making this approach infeasible for all but the smallest neural networks.
38
+
39
+ Scibior and Wood [17] propose an improved implementation with lower memory requirements by smartly using automatic differentiation. However, their approach still requires storing a computation graph whose size scales with $\mathcal{O}\left( {N}^{2}\right)$ as the transition density for each particle pair must be evaluated during the forward pass. Both previous methods' computational complexity also scales quadratically with the number of particles, $N$ , which is problematic for costly gradient backpropagation through large neural networks. Lastly, Olsson and Westerborn [16] require evaluation of the transition density for arbitrary values, which our compositional transition model does not allow. Instead, in this work, we show that fixed-lag smoothing $\left\lbrack {{18},{19}}\right\rbrack$ is a viable alternative to compute the score function of large neural network models in the context of extended object tracking.
40
+
41
+ There is extensive literature on combining particle filters with learning complex models such as neural networks $\left\lbrack {2,3,4,5,6,{20},{21},{22},{23},{24}}\right\rbrack$ . In contrast to our work, they make use of a learned, data-dependent proposal distribution. However, for parameter estimation, they rely on differentiating an evidence lower bound (ELBO). Due to the non-differentiable resampling step, this gradient estimation either has extremely high variance or is biased if the high variance terms are simply dropped, as in $\left\lbrack {2,3,4}\right\rbrack$ . As we show in Section 5, this degrades performance noticeably. A second line of work proposes soft resampling [5, 20, 21], which interpolates between regular and uniform sampling, thereby allowing a trade-off between the variance reduction from resampling and the bias introduced by ignoring the non-differentiable component of resampling. Lastly, Corenflos et al. [6] make the resampling step differentiable by using entropy-regularized optimal transport, also inducing bias and an $\mathcal{O}\left( {N}^{2}\right)$ cost.
42
+
43
+ Extended object tracking [25] considers how to track objects which, in contrast to "small" objects [26], generate multiple sensor measurements per timestep. Unlike in our work, transition and measurement models are assumed to be known or to depend on only a few learnable parameters. Similar to our work, the measurement model proposed in [27] assumes measurement sources lying on a rectangular shape. However, our model is more flexible, for example, allowing non-zero probability on all four sides simultaneously.
44
+
45
+ ## 3 State-Space Models and Particle Filtering
46
+
47
+ ### 3.1 State-Space Models
48
+
49
+ An SSM is a partially observed discrete-time Markov process with initial density ${x}_{0} \sim \mu \left( \cdot \right)$ , transition density ${x}_{t} \mid {x}_{t - 1} \sim {f}_{\theta }\left( {\cdot \mid {x}_{t - 1}}\right)$ , and observation density ${y}_{t} \mid {x}_{t} \sim {g}_{\theta }\left( {\cdot \mid {x}_{t}}\right)$ , where ${x}_{t}$ is the latent state at time $t$ and ${y}_{t}$ the corresponding observation. The joint density of ${x}_{0 : T},{y}_{0 : T}$ satisfies:
50
+
51
+ $$
52
+ {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) = \mu \left( {x}_{0}\right) {g}_{\theta }\left( {{y}_{0} \mid {x}_{0}}\right) \mathop{\prod }\limits_{{t = 1}}^{T}{f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) . \tag{1}
53
+ $$
54
+
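+ For intuition, sampling a trajectory $\left( {x}_{0 : T},{y}_{0 : T}\right)$ from the generative model (1) can be written as a short sketch, where `mu`, `f`, and `g` are placeholder sampling routines for the initial, transition, and observation densities.

```python
import numpy as np

def sample_ssm(mu, f, g, T, rng=None):
    """Draw (x_{0:T}, y_{0:T}) from the joint density in Equation (1)."""
    rng = np.random.default_rng() if rng is None else rng
    xs, ys = [], []
    x = mu(rng)               # x_0 ~ mu(.)
    for t in range(T + 1):
        y = g(x, rng)         # y_t ~ g_theta(. | x_t)
        xs.append(x)
        ys.append(y)
        if t < T:
            x = f(x, rng)     # x_{t+1} ~ f_theta(. | x_t)
    return np.array(xs), np.array(ys)
```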
55
+ Given this model, we are typically interested in inferring the states from the data by computing the filtering and one-step ahead prediction distributions, ${\left\{ p\left( {x}_{t} \mid {y}_{0 : t}\right) \right\} }_{t \in 0,\ldots , T}$ and ${\left\{ p\left( {x}_{t + 1} \mid {y}_{0 : t}\right) \right\} }_{t \in 0,\ldots , T - 1}$ respectively, and more generally the joint distributions ${\left\{ p\left( {x}_{0 : t} \mid {y}_{0 : t}\right) \right\} }_{t \in 0,\ldots , T}$ satisfying
56
+
57
+ $$
58
+ {p}_{\theta }\left( {{x}_{0 : t} \mid {y}_{0 : t}}\right) = \frac{{p}_{\theta }\left( {{x}_{0 : t},{y}_{0 : t}}\right) }{{p}_{\theta }\left( {y}_{0 : t}\right) },\;{p}_{\theta }\left( {y}_{0 : T}\right) = \int {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) \mathrm{d}{x}_{0 : T}. \tag{2}
59
+ $$
60
+
61
+ Additionally, to estimate parameters, we would also like to compute the marginal log likelihood:
62
+
63
+ $$
64
+ {\ell }_{T}\left( \theta \right) = \log {p}_{\theta }\left( {y}_{0 : T}\right) = \log {p}_{\theta }\left( {y}_{0}\right) + \mathop{\sum }\limits_{{t = 1}}^{T}\log {p}_{\theta }\left( {{y}_{t} \mid {y}_{0 : t - 1}}\right) , \tag{3}
65
+ $$
66
+
67
+ where ${p}_{\theta }\left( {y}_{0}\right) = \int {g}_{\theta }\left( {{y}_{0} \mid {x}_{0}}\right) \mu \left( {x}_{0}\right) \mathrm{d}{x}_{0}$ and ${p}_{\theta }\left( {{y}_{t} \mid {y}_{0 : t - 1}}\right) = \int {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) {p}_{\theta }\left( {{x}_{t} \mid {y}_{0 : t - 1}}\right) \mathrm{d}{x}_{t}$ for $t \geq 1$ . For non-linear non-Gaussian SSMs, these posterior distributions and the corresponding marginal likelihood cannot be computed in closed form.
68
+
69
+ ### 3.2 Particle Filtering
70
+
71
+ Particle methods provide non-parametric and consistent approximations of these quantities. They rely on the combination of importance sampling and resampling steps applied to a set of $N$ weighted particles $\left( {{x}_{t}^{i},{w}_{t}^{i}}\right)$ , where ${x}_{t}^{i}$ denotes the value of the ${i}^{\text{th }}$ particle at time $t$ and ${w}_{t}^{i}$ is the corresponding weight satisfying $\mathop{\sum }\limits_{{i = 1}}^{N}{w}_{t}^{i} = 1$ . We focus on the bootstrap particle filter, shown in Algorithm 1, which samples particles according to the transition density.
72
+
73
74
+
75
+ Algorithm 1 Bootstrap Particle Filter
76
+
77
+ ---
78
+
79
+ Sample ${X}_{0}^{i}\overset{\text{ i.i.d. }}{ \sim }\mu \left( \cdot \right)$ for $i \in \left\lbrack N\right\rbrack$ and set ${\widehat{\ell }}_{0}\left( \theta \right) \leftarrow \log \left( {\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{g}_{\theta }\left( {{y}_{0} \mid {x}_{0}^{i}}\right) }\right)$ .
80
+
81
+ For $t = 1,\ldots , T$
82
+
83
+ 1. Compute weights ${w}_{t - 1}^{i} \propto {g}_{\theta }\left( {{y}_{t - 1} \mid {x}_{t - 1}^{i}}\right)$ with $\mathop{\sum }\limits_{{i = 1}}^{N}{w}_{t - 1}^{i} = 1$ .
84
+
85
+ 2. Sample ${a}_{t - 1}^{i} \sim \operatorname{Cat}\left( {{w}_{t - 1}^{1},\ldots ,{w}_{t - 1}^{N}}\right)$ then ${x}_{t}^{i} \sim {f}_{\theta }\left( {\cdot \mid {x}_{t - 1}^{{a}_{t - 1}^{i}}}\right)$ for $i \in \left\lbrack N\right\rbrack$ .
86
+
87
+ 3. Set ${x}_{0 : t}^{i} \leftarrow \left( {{x}_{0 : t - 1}^{{a}_{t - 1}^{i}},{x}_{t}^{i}}\right)$ for $i \in \left\lbrack N\right\rbrack$ and ${\widehat{\ell }}_{t}\left( \theta \right) \leftarrow {\widehat{\ell }}_{t - 1}\left( \theta \right) + \log \left( {\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{g}_{\theta }\left( {{y}_{t} \mid {x}_{t}^{i}}\right) }\right)$ .
88
+
89
+ ---
90
+
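+ A minimal NumPy sketch of Algorithm 1; `sample_mu`, `sample_f`, and `log_g` are placeholder routines for sampling the initial and transition densities and evaluating the observation log-density for every particle. It keeps only the current particles (not the full ancestral paths) and is an illustration rather than the implementation used in our experiments.

```python
import numpy as np
from scipy.special import logsumexp

def bootstrap_pf(ys, sample_mu, sample_f, log_g, N, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = sample_mu(N, rng)                          # x_0^i ~ mu(.), shape (N, d)
    log_gy = log_g(ys[0], x)                       # log g_theta(y_0 | x_0^i)
    log_lik = logsumexp(log_gy) - np.log(N)        # \hat{l}_0(theta)
    for t in range(1, len(ys)):
        # 1. Normalized weights proportional to g_theta(y_{t-1} | x_{t-1}^i).
        w = np.exp(log_gy - logsumexp(log_gy))
        # 2. Resample ancestors, then propagate through the transition density.
        anc = rng.choice(N, size=N, p=w)
        x = sample_f(x[anc], rng)
        # 3. Accumulate the log-likelihood estimate \hat{l}_t(theta).
        log_gy = log_g(ys[t], x)
        log_lik += logsumexp(log_gy) - np.log(N)
    return x, log_lik
```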
91
+ Let $k \sim \operatorname{Cat}\left( {{\alpha }_{1},\ldots ,{\alpha }_{N}}\right)$ denote the categorical distribution of parameters $\left( {{\alpha }_{1},\ldots ,{\alpha }_{N}}\right)$ so that $\mathbb{P}(k =$ $i) = {\alpha }_{i}$ . At any time $t$ , this algorithm produces particle approximations
92
+
93
+ $$
94
+ {\widehat{p}}_{\theta }\left( {{x}_{0 : t} \mid {y}_{0 : t}}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{t}^{i}{\delta }_{{x}_{0 : t}^{i}}\left( {x}_{0 : t}\right) ,\;{\widehat{\ell }}_{t}\left( \theta \right) = \mathop{\sum }\limits_{{s = 0}}^{t}\log \left( {\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{g}_{\theta }\left( {{y}_{s} \mid {x}_{s}^{i}}\right) }\right) , \tag{4}
95
+ $$
96
+
97
+ of ${p}_{\theta }\left( {{x}_{0 : t} \mid {y}_{0 : t}}\right)$ and ${\ell }_{t}\left( \theta \right) = \log {p}_{\theta }\left( {y}_{0 : t}\right)$ . Step 2 resamples, discarding particles with small weights while replicating those with large weights before evolving according to the transition density. This focuses computational effort on the "promising" regions of the state space. Unfortunately, resampling involves sampling $N$ discrete random variables at each time step and as such produces estimates of the log likelihood that are not differentiable w.r.t. $\theta$ as illustrated in [6, Figure 1].
98
+
99
+ While the resulting estimates are consistent as $N \rightarrow \infty$ for any fixed time $t$ [28], this does not guarantee good practical performance. Fortunately, under regularity conditions the approximation error for the estimate ${\widehat{p}}_{\theta }\left( {{x}_{t} \mid {y}_{0 : t}}\right)$ and more generally ${\widehat{p}}_{\theta }\left( {{x}_{t - L + 1 : t} \mid {y}_{0 : t}}\right)$ for a fixed lag $L \geq 1$ as well as $\log {p}_{\theta }\left( {y}_{0 : t}\right) /t$ does not increase with $t$ for fixed $N$ . However, this is not the case for the joint smoothing approximation because successive resampling means that ${\widehat{p}}_{\theta }\left( {{x}_{0 : L} \mid {y}_{0 : t}}\right)$ is eventually approximated by a single unique particle for large enough $t$ , a phenomenon known as path degeneracy; see e.g. [12, Section 4.3].
100
+
101
+ ## 4 Score Estimation using Particle Methods
102
+
103
+ To estimate the parameters $\theta$ of a given SSM (1) along with a dataset of observations ${y}_{0 : T}$ , we want to maximise via gradient ascent the marginal log likelihood in (3). However, the gradient of the marginal log likelihood, i.e., the score function, is intractable. As explained in Section 2, automatic differentiation through the filter is difficult due to the non-differentiable resampling step.
104
+
105
+ ### 4.1 Score Function Using Fisher's Identity
106
+
107
+ We leverage here instead Fisher's identity [8] for the score to completely side-step the non-differentiability problem. This identity shows that
108
+
109
+ $$
110
+ {\nabla }_{\theta }{\ell }_{T}\left( \theta \right) = \int {\nabla }_{\theta }\log {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) {p}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right) \mathrm{d}{x}_{0 : T}, \tag{5}
111
+ $$
112
+
113
+ i.e., the score is the expectation of ${\nabla }_{\theta }\log {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right)$ under the joint smoothing distribution ${p}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right)$ . Plugging in (1), the score function can be simplified to
114
+
115
+ $$
116
+ {\nabla }_{\theta }{\ell }_{T}\left( \theta \right) = \mathop{\sum }\limits_{{t = 0}}^{T}\int {\nabla }_{\theta }\log {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) {p}_{\theta }\left( {{x}_{t} \mid {y}_{0 : T}}\right) \mathrm{d}{x}_{t}
117
+ $$
118
+
119
+ $$
120
+ + \mathop{\sum }\limits_{{t = 1}}^{T}\int {\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) {p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : T}}\right) \mathrm{d}{x}_{t - 1 : t}. \tag{6}
121
+ $$
122
+
123
+ ### 4.2 Particle Score Approximation
124
+
125
+ The identity (6) shows that we can simply estimate the score by plugging particle approximations of the marginal smoothing distributions $p\left( {{x}_{t - 1 : t} \mid {y}_{0 : T}}\right)$ into (6). This identity makes differentiating through time superfluous and thereby renders the use of differentiable approximations of resampling unnecessary. However, as discussed in Section 3.2, naive particle approximations of the smoothing distribution’s marginals, ${p}_{\theta }\left( {{x}_{t} \mid {y}_{0 : T}}\right)$ and ${p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : T}}\right)$ , suffer from path degeneracy. To bypass this problem, [8, 17] propose an $\mathcal{O}\left( {N}^{2}\right)$ method inspired by dynamic programming. We propose here a simpler and computationally cheaper method that relies on the following fixed-lag approximation of the fixed-interval smoothing distribution, which states that for $L \geq 1$ large enough,
126
+
127
+ $$
128
+ {p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : T}}\right) \approx {p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L, T\} }}\right) . \tag{7}
129
+ $$
130
+
131
+ This approximation simply assumes that observations after time $t + L$ do not bring further information about the states ${x}_{t - 1},{x}_{t}$ . This is satisfied for most models and the resulting approximation error decreases geometrically fast with $L$ [19]. The benefit of this approximation is that the particle approximation of ${p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L, T\} }}\right)$ does not suffer from path degeneracy and is a simple byproduct of the bootstrap particle filtering of Algorithm 1; e.g., for $t + L < T$ we consider the particle approximation ${\widehat{p}}_{\theta }\left( {{x}_{0 : t + L} \mid {y}_{0 : t + L}}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{t + L}^{i}{\delta }_{{x}_{0 : t + L}^{i}}\left( {x}_{0 : t + L}^{i}\right)$ obtained at time $t + L$ and use its corresponding marginals in ${x}_{t - 1},{x}_{t}$ and ${x}_{t}$ to integrate respectively ${\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right)$ and ${\nabla }_{\theta }\log {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right)$ . For $t + L \geq T$ , we just consider the marginals in ${x}_{t - 1},{x}_{t}$ and ${x}_{t}$ of ${\widehat{p}}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right)$ . So finally, we consider the estimate,
132
+
133
+ $$
134
+ \overset{⏜}{{\nabla }_{\theta }{\ell }_{T}}\left( \theta \right) = \mathop{\sum }\limits_{{t = 0}}^{T}\int {\nabla }_{\theta }\log {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) {\widehat{p}}_{\theta }\left( {{x}_{t} \mid {y}_{0 : \min \{ t + L, T\} }}\right) \mathrm{d}{x}_{t}
135
+ $$
136
+
137
+ $$
138
+ + \mathop{\sum }\limits_{{t = 1}}^{T}\int {\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) {\widehat{p}}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L, T\} }}\right) \mathrm{d}{x}_{t - 1 : t}. \tag{8}
139
+ $$
140
+
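+ To make the estimator in Equation (8) concrete, the sketch below assumes the filter has stored, for every time step, the particles `xs[t]`, the ancestor indices `ancs[t]`, and the normalized weights `ws[t]`. For each $t$ , the particles alive at time $\min \{ t + L, T\}$ are traced back to their time- $t$ (and time $t - 1$ ) ancestors and weighted accordingly; differentiating the returned scalar with respect to $\theta$ (e.g., with an automatic-differentiation framework, treating particles and weights as constants) then yields the score estimate, touching only `log_g` and `log_f` and never the resampling step. In our setting, `log_f` would be replaced by the policy log-density, as in Equation (11). All names here are our own placeholders, not code from this paper.

```python
import torch

def score_surrogate(ys, xs, ancs, ws, log_g, log_f, L):
    """Surrogate whose gradient w.r.t. the model parameters is the fixed-lag
    score estimate of Equation (8).  xs[t]: particles of shape (N, d) at time t,
    ancs[t]: ancestor indices chosen at time t (mapping to time t-1),
    ws[t]: normalized weights at time t; all treated as constants."""
    T = len(ys) - 1
    N = xs[0].shape[0]
    surrogate = 0.0
    for t in range(T + 1):
        s = min(t + L, T)
        # Trace the particles alive at time s back to their ancestors at time t.
        idx = torch.arange(N)
        for u in range(s, t, -1):
            idx = ancs[u][idx]
        x_t = xs[t][idx]
        # Observation term of Equation (8), weighted by the time-s weights.
        surrogate = surrogate + (ws[s] * log_g(ys[t], x_t)).sum()
        if t >= 1:
            # Transition term of Equation (8) over the (x_{t-1}, x_t) ancestors.
            x_prev = xs[t - 1][ancs[t][idx]]
            surrogate = surrogate + (ws[s] * log_f(x_t, x_prev)).sum()
    return surrogate
```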
141
+ ### 4.3 Score Estimation with Deterministic, Differentiable, Injective Motion Models
142
+
143
+ We have described a generic method to approximate the score using particle filtering techniques. For many applications, however, the transition density function, ${f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right)$ , is the composition of a policy, ${\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right)$ , which characterises the action distribution conditioned on the state, and a potentially complex but deterministic, differentiable, and injective motion model, $\tau : {\mathbb{R}}^{{n}_{x}} \times {\mathbb{R}}^{{n}_{a}} \rightarrow {\mathbb{R}}^{{n}_{x}}$ where ${n}_{a} < {n}_{x}$ , which characterises kinematic constraints such that ${x}_{t} = \tau \left( {{x}_{t - 1},{a}_{t}}\right) = {\bar{\tau }}_{{x}_{t - 1}}\left( {a}_{t}\right)$ . Under such a composition, the transition density function on the induced manifold ${\mathcal{M}}_{{x}_{t - 1}} =$ $\left\{ {{\bar{\tau }}_{{x}_{t - 1}}\left( {a}_{t}\right) : {a}_{t} \in {\mathbb{R}}^{{n}_{a}}}\right\}$ is thus obtained by marginalising out the latent action variable, i.e.,
144
+
145
+ $$
146
+ {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) = \mathbb{I}\left( {{x}_{t} \in {\mathcal{M}}_{{x}_{t - 1}}}\right) \int \delta \left( {{x}_{t} - {\bar{\tau }}_{{x}_{t - 1}}\left( {a}_{t}\right) }\right) {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right) \mathrm{d}{a}_{t}. \tag{9}
147
+ $$
148
+
149
+ It is easy to sample from this density but it is intractable analytically if the motion model is only available through a complex simulator or if it is not invertible. This precludes the use of sophisticated proposal distributions within the particle filter. Additionally, even if it were known, one cannot use the $\mathcal{O}\left( {N}^{2}\right)$ smoothing type algorithms developed in [8,16] as the density is concentrated on a low-dimensional manifold [29]. This setting is common in mobile robotics, in which controllers factor into policies that select actions and motion models that determine the next state. Indeed, this is precisely the case in our application setting of estimating the state of observed road users around an AV (see Section 5). Learning the corresponding SSM reduces to learning the parameters $\theta$ of the policy, ${\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right)$ , and the observation model, ${g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right)$ . Thankfully, even if the explicit form of the motion model is unknown, we can still compute $\nabla \log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right)$ as required by the score estimate (8).
150
+
151
+ Lemma 4.1. For any $x \in {\mathbb{R}}^{{n}_{x}}$ , let ${\tau }_{x} : {\mathbb{R}}^{{n}_{a}} \rightarrow {\mathbb{R}}^{{n}_{x}}$ where ${n}_{a} < {n}_{x}$ be a smooth and injective mapping. Then, for any fixed ${x}_{t - 1}$ and ${x}_{t} \in {\mathcal{M}}_{{x}_{t - 1}}$ , the gradient of the transition log density, i.e., ${\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right)$ , reduces to the gradient of the policy log density, i.e., ${\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right)$ , where ${a}_{t}$ is the unique action that takes ${x}_{t - 1}$ to ${x}_{t}$ .
152
+
153
+ Proof. For ${x}_{t - 1}$ and ${x}_{t} \in {\mathcal{M}}_{{x}_{t - 1}}$ , we denote by $J\left\lbrack {\bar{\tau }}_{{x}_{t - 1}}\right\rbrack \left( {{\bar{\tau }}_{{x}_{t - 1}}^{-1}\left( {x}_{t}\right) }\right) \in {\mathbb{R}}^{{n}_{x} \times {n}_{a}}$ the rectangular Jacobian matrix and write ${a}_{t} = {\bar{\tau }}_{{x}_{t - 1}}^{-1}\left( {x}_{t}\right)$ , i.e., this is the unique action such ${\bar{\tau }}_{{x}_{t - 1}}\left( {a}_{t}\right) = {x}_{t}$ . By a standard result from differential geometry $\left\lbrack {{30},{31}}\right\rbrack$ , the transition density (9) satisfies
154
+
155
+ $$
156
+ {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) = {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right) {\left| \det J{\left\lbrack {\bar{\tau }}_{{x}_{t - 1}}\right\rbrack }^{\mathrm{T}}\left( {a}_{t}\right) J\left\lbrack {\bar{\tau }}_{{x}_{t - 1}}\right\rbrack \left( {a}_{t}\right) \right| }^{-1/2}\mathbb{I}\left( {{x}_{t} \in {\mathcal{M}}_{{x}_{t - 1}}}\right) . \tag{10}
157
+ $$
158
+
159
+ (a) An observation from the real data in the form of a set of 2D points (in blue) forming a convex polygon around the observed road user's true (manually labelled) bounding box (in red). (b) Sampled observation from the synthetic data in the form of a set of 2D points (in blue) forming a convex polygon around a synthetic road user's sampled bounding box (in red).
160
+
161
+ Figure 1: Observations from real and synthetic data. Notice the similarity between the two with regards to the distribution of the 2D points from the viewpoint of the observing AV (in green).
162
+
163
+ It follows directly that ${\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) = {\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right)$ .
164
+
165
+ Indeed for the marginals ${\widehat{p}}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L, T\} }}\right)$ , we can store the actions corresponding to transitions ${x}_{t - 1} \rightarrow {x}_{t}$ during filtering, and it follows that for the class of SSMs described above, the score estimate reduces to:
166
+
167
+ $$
168
+ \overset{⏜}{{\nabla }_{\theta }{\ell }_{T}}\left( \theta \right) = \mathop{\sum }\limits_{{t = 0}}^{T}\int {\nabla }_{\theta }\log {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) {\widehat{p}}_{\theta }\left( {{x}_{t} \mid {y}_{0 : \min \{ t + L, T\} }}\right) \mathrm{d}{x}_{t}
169
+ $$
170
+
171
+ $$
172
+ + \mathop{\sum }\limits_{{t = 1}}^{T}\int {\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right) {\widehat{p}}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L, T\} }}\right) \mathrm{d}{x}_{t - 1 : t}, \tag{11}
173
+ $$
174
+
175
+ where we use Lemma 4.1 to replace the gradient of the transition log density with the gradient of the policy log density in (8), and where ${a}_{t}$ is the action sampled to go from ${x}_{t - 1}$ to ${x}_{t}$ .
176
+
177
+ ## 5 Experiments
178
+
179
+ Problem Setting. Our experiments focus on the problem of state estimation of observed road users (in particular other vehicles) from the viewpoint of an AV, which involves the estimation of 2D poses from an observed sequence of $2\mathrm{D}$ convex polygons in a "bird's eye view" (BEV) constructed from LiDAR point clouds at each time step. For these experiments, we assume that the size of the observed objects, the pose of the AV, and the association of observations with their corresponding objects are known a priori. One such observation (and its corresponding state) is shown in Figure 1a. Here, the observation model must learn to describe the likelihood of 2D points around the periphery of the observed road user (see [25] for a review on such models), while the transition model must learn to describe driving behaviour. We use a feed-forward neural network to parameterise our observation model, where we provide it with range, bearing, and relative bearing from the viewpoint of the corresponding AV as features (Appendix A), and factor our transition model into a deterministic and differentiable motion model based on Ackermann dynamics [32] (Appendix B.1), and a policy parameterised by another feed-forward neural network (Appendix B.2).
180
+
181
+ Baselines, Datasets, and Metrics. We compare the quality of the models learned using PF-SEFI (our method), DPF-SGR [17], PFNET [5], and differentiating through a vanilla PF (ignoring the bias introduced by resampling). In addition to using real data collected from an AV, we generate two synthetic datasets (with 25 and 50 step trajectories), using a hand-crafted policy, and an observation model trained using supervised learning on manually labelled trajectories (Appendix A). An example observation is shown in Figure 1b. Unlike with real data, where the true models are unknown, synthetic datasets allow us to compare the learned models against a known ground truth. We measure the quality of learned models using the following metrics:
182
+
183
+ - Marginal Log Likelihood (MLL): The marginal log likelihood ${\ell }_{T}\left( \theta \right)$ given by filtering observations ${y}_{0 : T}$ using the learned models.
184
+
185
+ ![01963f73-1ee9-71f7-b789-944cbc2ccc02_6_305_231_1180_349_0.jpg](images/01963f73-1ee9-71f7-b789-944cbc2ccc02_6_305_231_1180_349_0.jpg)
186
+
187
+ Figure 2: Marginal Log Likelihood (MLL) on synthetic and real test data for models trained using PF-SEFI (us), DPF-SGR, PFNET, and PF, plotted against the corresponding training steps. For synthetic data we also show the MLL of the true models.
188
+
189
+ - Average Displacement Error (ADE) and Average Yaw Error (AYE): The average error in the positions and yaws respectively of the smoothed state estimates ${\mathbb{E}}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right)$ against the true poses, ${\bar{x}}_{0 : T}$ . For the synthetic data, the true poses are sampled while generating the data; for the real data, the true poses are obtained by humans manually labelling object trajectories from videos. These measure the quality of the learned models for the purposes of state estimation.
190
+
191
+ - Average Observation True Log Likelihood (AOTLL): The average log likelihood of observations conditioned on the corresponding true states under the learned observation model, i.e., $\frac{1}{T + 1}\mathop{\sum }\limits_{{t = 0}}^{T}\log {g}_{\theta }\left( {{y}_{t} \mid {\bar{x}}_{t}}\right)$ . This measures the standalone quality of the learned observation model.
192
+
193
+ - Average Policy True Log Likelihood (APTLL): The average log likelihood of true actions, ${\bar{a}}_{1 : T}$ ,(only available for experiments with synthetic data since it is not possible to manually label latent actions) conditioned on the corresponding true states under the learned policy, i.e., $\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\log {\pi }_{\theta }\left( {{\bar{a}}_{t} \mid {\bar{x}}_{t - 1}}\right)$ . This measures the standalone quality of the learned policy.
194
+
195
+ Results. Figure 2 shows the progress of the learned models by tracking MLL of held out test data for each of the three datasets (synthetic data with 25 steps, synthetic data with 50 steps, and real data with 60 steps), and for each of the four methods (PF-SEFI, DPF-SGR, PFNET, and PF). Table 1 summarises the performance of the learned models at convergence. We pick the best hyper-parameters, smoothing lag $L$ for PF-SEFI and trade-off parameter $\alpha$ for PFNET, in each of the experiments (Appendix C).
196
+
197
+ In our experiments with synthetic data with 25 steps (Figure 2a and Experiment A in Table 1), we observe a clear gap in performance of PF-SEFI and DPF-SGR relative to PFNET and PF. The improvements over PF are likely due to the bias in PF's score estimates caused by the non-differentiable resampling step, while the improvements over PFNET are likely due to adverse effects of not resampling with the correct distribution at each time step. While PF-SEFI and DPF-SGR perform similarly on this dataset, the difference is stark in the case of synthetic data with 50 steps (Figure 2b and Experiment B in Table 1). PF-SEFI is invariant to the length of the trajectories used, converging stably; however, all other methods struggle to learn useful models. We postulate that since each of the baselines, in one way or another, differentiates through all time steps of the filter, the variance in their score estimates is too high for good learning through gradient ascent. ${}^{1}$
198
+
199
+ The results of our experiments with real data with 60 steps (Figure 2c and Experiment C in Table 1) are consistent with Experiment B (i.e., with experiments on synthetic data with 50 steps) and show that PF-SEFI is able to learn useful models. The learned observation model using PF-SEFI performs even better than the model that was trained offline through supervision with manually labelled data (see AOTLL in Table 1 for Experiment C).
200
+
201
+ ---
202
+
203
+ ${}^{1}$ The authors of DPF-SGR [17] recommend the use of stop gradients not only for particle weights after resampling, i.e., ${\widetilde{v}}_{t}^{i} = {\bar{v}}_{t}^{{a}^{i}}/ \bot {\bar{v}}_{t}^{{a}^{i}}\left\lbrack {{17}\text{, Algorithm 1}}\right\rbrack$ , but also, in the case of bootstrap particle filters, while computing the likelihood ratio ${v}_{t}^{i} = {\widetilde{v}}_{t - 1}^{i}{p}_{\theta }\left( {{x}_{t}^{i},{y}_{t} \mid {x}_{t - 1}^{{a}^{i}}}\right) / \bot {q}_{\theta }\left( {{x}_{t}^{i} \mid {x}_{t - 1}^{{a}^{i}}}\right)$ before resampling, and while sampling from ${x}_{t}^{i} \sim {q}_{\theta }\left( {\cdot \mid {x}_{t - 1}^{{a}^{i}}}\right) \left\lbrack {{17}\text{, Section 4.1}}\right\rbrack$ . While these additional stop gradients significantly reduce variance, our experiments with them yielded extremely poor overall performance (even with synthetic data with 25 steps). The results we report here thus make use of stop-gradients only for particle weights after resampling.
204
+
205
+ ---
206
+
207
+ Table 1: Metrics computed on held out test data comparing PF-SEFI (us) against baselines DPF-SGR, PFNET, and vanilla PF, on three experiments - (A) learning from synthetic data with 25 steps, (B) learning from synthetic data with 50 steps, and (C) learning from real data with 60 steps. For experiments (A) and (B), we also compare against the performance of the true models. For experiment (C), we compare against the supervised observation model trained using manually labelled trajectories. For MLL, AOTLL, and APTLL, higher values imply better models, while for ADE and AYE, lower values imply better models.
208
+
209
+ <table><tr><td>$\mathbf{{Exp}.}$</td><td>$\mathbf{{Method}}$</td><td>$\mathbf{{MLL}}$</td><td>AOTLL</td><td>APTLL</td><td>$\mathbf{{ADE}\left( m\right) }$</td><td>AYE (rad)</td></tr><tr><td rowspan="5">A</td><td>TRUE</td><td>$- {3.161} \pm {0.003}$</td><td>-2.128</td><td>2.674</td><td>${0.090} \pm {0.001}$</td><td>${0.014} \pm {0.000}$</td></tr><tr><td>PF-SEFI (us)</td><td>$- {3.147} \pm {0.004}$</td><td>$- {2.285} \pm {0.028}$</td><td>${2.661} \pm {0.014}$</td><td>${0.186} \pm {0.021}$</td><td>${0.016} \pm {0.000}$</td></tr><tr><td>DPF-SGR</td><td>$- {3.159} \pm {0.004}$</td><td>$- {2.265} \pm {0.010}$</td><td>${2.594} \pm {0.027}$</td><td>$\mathbf{{0.165} \pm {0.008}}$</td><td>$\mathbf{{0.014} \pm {0.000}}$</td></tr><tr><td>PFNET</td><td>$- {3.225} \pm {0.004}$</td><td>$- {2.487} \pm {0.026}$</td><td>${2.621} \pm {0.013}$</td><td>${0.264} \pm {0.019}$</td><td>${0.017} \pm {0.000}$</td></tr><tr><td>PF</td><td>$- {3.229} \pm {0.006}$</td><td>$- {2.484} \pm {0.026}$</td><td>${2.576} \pm {0.021}$</td><td>${0.245} \pm {0.021}$</td><td>${0.017} \pm {0.000}$</td></tr><tr><td rowspan="5">B</td><td>TRUE</td><td>$- {3.145} \pm {0.002}$</td><td>-2.165</td><td>2.693</td><td>${0.088} \pm {0.001}$</td><td>${0.012} \pm {0.000}$</td></tr><tr><td>PF-SEFI (us)</td><td>$- {3.141} \pm {0.005}$</td><td>$- {2.283} \pm {0.015}$</td><td>$\mathbf{{2.505} \pm {0.042}}$</td><td>$\mathbf{{0.165} \pm {0.013}}$</td><td>$\mathbf{{0.014} \pm {0.000}}$</td></tr><tr><td>DPF-SGR</td><td>$- {3.966} \pm {0.050}$</td><td>$- {2.636} \pm {0.031}$</td><td>${0.811} \pm {0.130}$</td><td>${2.828} \pm {0.415}$</td><td>${0.142} \pm {0.016}$</td></tr><tr><td>PFNET</td><td>$- {4.169} \pm {0.046}$</td><td>$- {2.901} \pm {0.039}$</td><td>${0.539} \pm {0.077}$</td><td>${2.809} \pm {0.176}$</td><td>${0.148} \pm {0.008}$</td></tr><tr><td>PF</td><td>$- {4.118} \pm {0.038}$</td><td>$- {2.841} \pm {0.025}$</td><td>${0.681} \pm {0.122}$</td><td>${2.502} \pm {0.042}$</td><td>${0.137} \pm {0.007}$</td></tr><tr><td rowspan="5">C</td><td>SUPERVISED</td><td>N/A</td><td>$- {2.224} \pm {0.006}$</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>PF-SEFI (us)</td><td>$- {2.447} \pm {0.029}$</td><td>$- {1.973} \pm {0.029}$</td><td>N/A</td><td>${0.275} \pm {0.011}$</td><td>$\mathbf{{0.034} \pm {0.006}}$</td></tr><tr><td>DPF-SGR</td><td>$- {3.297} \pm {0.287}$</td><td>$- {2.236} \pm {0.218}$</td><td>N/A</td><td>${0.643} \pm {0.177}$</td><td>${0.081} \pm {0.477}$</td></tr><tr><td>PFNET</td><td>$- {4.019} \pm {0.098}$</td><td>$- {2.752} \pm {0.079}$</td><td>N/A</td><td>${0.746} \pm {0.091}$</td><td>${1.015} \pm {0.159}$</td></tr><tr><td>PF</td><td>$- {3.848} \pm {0.045}$</td><td>$- {2.639} \pm {0.140}$</td><td>N/A</td><td>${0.701} \pm {0.109}$</td><td>${1.082} \pm {0.364}$</td></tr></table>
210
+
211
+ model produces observations that are qualitatively similar to the real data (Appendix D). While the supervised model is trained only on the subset of the observations that are labelled (labelling only a subset is common in practical applications due to the cost of labelling), PF-SEFI, by contrast, can leverage all observations in a self-supervised fashion. Moreover, we speculate that the labels contain noise and that the labelling distribution is biased towards observations that are easy to label. Both limitations hinder supervised learning.
212
+
213
+ ## 6 Discussion, Limitations, and Future Work
214
+
215
+ In this work, we proposed an efficient particle-based approach for estimating the score function to learn SSMs in a completely self-supervised way. Compared to previous particle-based methods that estimate the score, our method is more computationally efficient, allowing us to scale to learning models with many parameters and apply it to a real-world AV object tracking problem. We also showed empirically that our method learns better models and is more stable in training than recent methods that use automatic differentiation to estimate the score, and that we can learn an observation model that outperforms one trained through supervised learning, without using any labels.
216
+
217
+ While this solution is ideal for our problem, it does have a number of limitations. Most notably, it is restricted to maximising the marginal log-likelihood of the data, while differentiating through the filter allows for arbitrary differentiable loss functions. Furthermore, our method is not suitable for estimating the parameters of a proposal distribution. Beyond these algorithmic limitations, in our application, the models that we used were not very expressive. For the observation model, we did not model important phenomena that affect partial observability such as occlusions and we restricted our states and observations to 2D. For the policy, we used a simplified policy with only basic features that are insufficient for controlling an agent in simulation. Furthermore, the policy and the motion model are both specific to vehicles, and currently exclude other road users such as pedestrians.
218
+
219
+ In future work, we aim to scale up our problem setting, by making both models more expressive, and to estimate more state dimensions, such as full 3D poses and sizes of objects. We also believe that learning policies as components of an SSM to explicitly account for observation noise is, in practice, critical for learning good driving behaviour from demonstrations. Such policies could be used as models for predicting the behaviour of other road-users, or to control agents in simulation, and the method we proposed in this work offers an ideal starting point to explore this.
220
+
221
+ ## References
222
+
223
+ [1] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
224
+
225
+ [2] T. A. Le, M. Igl, T. Rainforth, T. Jin, and F. Wood. Auto-encoding sequential Monte Carlo. In International Conference on Learning Representations, 2018.
226
+
227
+ [3] C. J. Maddison, J. Lawson, G. Tucker, N. Heess, M. Norouzi, A. Mnih, A. Doucet, and Y. Teh. Filtering variational objectives. Advances in Neural Information Processing Systems, 30, 2017.
228
+
229
+ [4] C. Naesseth, S. Linderman, R. Ranganath, and D. Blei. Variational sequential Monte Carlo. In International Conference on Artificial Intelligence and Statistics, pages 968-977, 2018.
230
+
231
+ [5] P. Karkus, D. Hsu, and W. S. Lee. Particle filter networks with application to visual localization. In Conference on Robot Learning, pages 169-178, 2018.
232
+
233
+ [6] A. Corenflos, J. Thornton, G. Deligiannidis, and A. Doucet. Differentiable particle filtering via entropy-regularized optimal transport. In International Conference on Machine Learning, pages 2100-2111, 2021.
234
+
235
+ [7] J. Lai, J. Domke, and D. Sheldon. Variational marginal particle filters. In International Conference on Artificial Intelligence and Statistics, pages 875-895, 2022.
236
+
237
+ [8] G. Poyiadjis, A. Doucet, and S. S. Singh. Particle approximations of the score and observed information matrix in state space models with application to parameter estimation. Biometrika, 98(1):65-80, 2011.
238
+
239
+ [9] N. Kantas, A. Doucet, S. S. Singh, J. Maciejowski, and N. Chopin. On particle methods for parameter estimation in state-space models. Statistical Science, 30(3):328-351, 2015.
240
+
241
+ [10] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.
242
+
243
+ [11] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf.
244
+
245
+ [12] A. Doucet and A. M. Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering, 12(656-704):3, 2009.
246
+
247
+ [13] N. J. Gordon, D. J. Salmond, and A. F. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F (Radar and Signal Processing), 140(2):107-113, 1993.
248
+
249
+ [14] M. K. Pitt and N. Shephard. Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94(446):590-599, 1999.
250
+
251
+ [15] E. L. Ionides, C. Bretó, and A. A. King. Inference for nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 103(49):18438-18443, 2006.
252
+
253
+ [16] J. Olsson and J. Westerborn. Efficient particle-based online smoothing in general hidden Markov models: the PaRIS algorithm. Bernoulli, 23(3):1951-1996, 2017.
254
+
255
+ [17] A. Scibior and F. Wood. Differentiable particle filtering without modifying the forward pass. arXiv preprint arXiv:2106.10314, 2021.
256
+
257
+ [18] G. Kitagawa and S. Sato. Monte Carlo smoothing and self-organising state-space model. In A. Doucet, N. De Freitas, and N. Gordon, editors, Sequential Monte Carlo Methods in Practice, pages 177-195. Springer, 2001.
258
+
259
+ [19] J. Olsson, O. Cappé, R. Douc, and E. Moulines. Sequential Monte Carlo smoothing with application to parameter estimation in nonlinear state space models. Bernoulli, 14(1):155-179, 2008.
260
+
261
+ [20] X. Ma, P. Karkus, D. Hsu, and W. S. Lee. Particle filter recurrent neural networks. In AAAI Conference on Artificial Intelligence, 2020.
262
+
263
+ [21] X. Ma, P. Karkus, D. Hsu, W. S. Lee, and N. Ye. Discriminative particle filter reinforcement learning for complex partial observations. In International Conference on Learning Representations, 2020.
264
+
265
+ [22] R. Jonschkowski, D. Rastogi, and O. Brock. Differentiable particle filters: End-to-end learning with algorithmic priors. In Proceedings of Robotics: Science and Systems, 2018.
266
+
267
+ [23] M. Zhu, K. Murphy, and R. Jonschkowski. Towards differentiable resampling. arXiv preprint arXiv:2004.11938, 2020.
268
+
269
+ [24] A. Kloss, G. Martius, and J. Bohg. How to train your differentiable filter. Autonomous Robots, 45(4):561-578, 2021.
270
+
271
+ [25] K. Granstrom, M. Baum, and S. Reuter. Extended object tracking: Introduction, overview and applications. arXiv preprint arXiv:1604.00970, 2016.
272
+
273
+ [26] S. S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems. Artech House, Boston, 1999.
274
+
275
+ [27] K. Granström, S. Reuter, D. Meissner, and A. Scheel. A multiple model PHD approach to tracking of cars under an assumed rectangular shape. In 17th International Conference on Information Fusion (FUSION). IEEE, 2014.
276
+
277
+ [28] P. Del Moral. Feynman-Kac Formulae. Springer, 2004.
278
+
279
+ [29] F. Lindsten, P. Bunch, S. S. Singh, and T. B. Schön. Particle ancestor sampling for near-degenerate or intractable state transition models. arXiv preprint arXiv:1505.06356, 2015.
280
+
281
+ [30] S. G. Krantz and H. R. Parks. Geometric Integration Theory. Springer Science & Business Media, 2008.
282
+
283
+ [31] A. L. Caterini, G. Loaiza-Ganem, G. Pleiss, and J. P. Cunningham. Rectangular flows for manifold learning. Advances in Neural Information Processing Systems, 34, 2021.
284
+
285
+ [32] K. M. Lynch and F. C. Park. Modern Robotics. Cambridge University Press, 2017.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/7JVNhaMbZUu/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,256 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § PARTICLE-BASED SCORE ESTIMATION FOR STATE SPACE MODEL LEARNING IN AUTONOMOUS DRIVING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Multi-object state estimation is a fundamental problem for robotic applications where a robot must interact with other moving objects. Typically, other objects' relevant state features are not directly observable, and must instead be inferred from observations. Particle filtering can perform such inference given approximate transition and observation models. However, these models are often unknown a priori, yielding a difficult parameter estimation problem since observations jointly carry transition and observation noise. In this work, we consider learning maximum-likelihood parameters using particle methods. Recent methods addressing this problem typically differentiate through time in a particle filter, which requires workarounds to the non-differentiable resampling step, that yield biased or high variance gradient estimates. By contrast, we exploit Fisher's identity to obtain a particle-based approximation of the score function (the gradient of the log likelihood) that yields a low variance estimate while only requiring stepwise differentiation through the transition and observation models. We apply our method to real data collected from autonomous vehicles (AVs) and show that it learns better models than existing techniques and is more stable in training, yielding an effective smoother for tracking the trajectories of vehicles around an AV.
12
+
13
+ Keywords: Autonomous Driving, Particle Filtering, Self-supervised Learning
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Multi-object state estimation is a fundamental problem in settings where a robot must interact with other moving objects, since their state is directly relevant for decision making. Typically, other objects' relevant state features are not directly observable. Instead, the robot must infer them from a stream of observations it receives via a perception system. For example, an autonomous vehicle (AV) selects actions based on the state of nearby road users. However, such road users are only partially observed, owing to limited field of view, occlusions, and imperfections in the AV's sensors and perception systems. Such partial observability negatively affects many downstream tasks in a robot's behavioural stack that depend on observations, e.g., action planning.
18
+
19
+ Addressing partial observability requires sequential state estimation, to which Bayesian filtering offers a generic probabilistic approach. In particular, sequential Monte Carlo methods, also known as particle filtering, have been successfully applied to state estimation in many robotics applications [1]. However, Bayesian filters require models that reasonably approximate the transition and observation models of a state-space model (SSM). In some special cases, these models can be derived analytically from first principles, e.g., when the physical dynamics are well understood, or by modeling a sensor's physical characteristics. In many real-world applications, however, these models cannot be specified analytically. For example, the transition model may encode complicated motion dynamics and environmental physics. In multi-agent settings, other agents' behaviour must also be modelled. Modelling observations is also difficult. Modern perception systems often involve multiple stages and combine information from multiple sensors, making observation models practically impossible to specify by hand. By contrast, collecting observations from a robotic system is relatively easy and cheap. We are interested, therefore, in algorithms that can leverage such observations to learn transition and observation models in a self-supervised fashion, and yield an effective particle smoother. Learned transition and observation models can also be independently useful for other applications, such as the evaluation of AVs by simulating realistic observations.
20
+
21
+ *These authors contributed equally to this work.
22
+
23
+ In this work, we propose Particle Filtering-Based Score Estimation using Fisher's Identity (PF-SEFI), a method for jointly learning maximum-likelihood parameters of both the transition and observation models of an SSM. Unlike many recently proposed methods [2, 3, 4, 5, 6, 7], our approach avoids differentiable approximations of the resampling step. We achieve this by revisiting a methodology originally proposed in statistics $\left\lbrack {8,9}\right\rbrack$ that relies on a particle approximation of the score, i.e., the gradient of the log likelihood of observation sequences, obtained through Fisher's identity. This only requires differentiating through the transition and observation models. Unfortunately, a direct particle approximation of this identity provides a high variance estimate of the score. While [8] propose an alternative low variance estimate, it admits a $\mathcal{O}\left( {N}^{2}\right)$ cost, where $N$ is the number of particles. Furthermore, these methods compute and store the gradient of the marginal log-likelihood with respect to model parameters for each particle. This requires computing Jacobian matrices, which are slow to compute using automatic differentiation tools such as TensorFlow and PyTorch $\left\lbrack {{10},{11}}\right\rbrack$ which rely on Jacobian-vector products. This makes these methods impractical for large models. By contrast, PF-SEFI is a simple scalable $\mathcal{O}\left( N\right)$ variant with only negligible bias. PF-SEFI marginalises over particles before computing gradients, allowing automatic differentiation tools to make use of efficient Jacobian-vector product operations, making it significantly faster and allowing us to scale to larger models. To the best of our knowledge, previous particle methods estimating the score have been limited to SSMs with few parameters, whereas we apply PF-SEFI to neural network models with thousands of parameters.
24
+
25
+ We apply PF-SEFI to jointly learn transition and observation models for tracking multiple objects around an AV, using a large set of noisy trajectories, containing almost 10 hours of road-user trajectories observed by an AV. We show that PF-SEFI learns an SSM that yields an effective object tracker as measured by average displacement and yaw errors. We compare the learned observation model to one trained through supervised learning on a dataset of manually labelled trajectories, and show that PF-SEFI yields a better model (as measured by log-likelihood on ground-truth labels) even though it requires no labels for training. Finally, we compare PF-SEFI to a number of existing particle methods for jointly learning transition and observation models and show that it learns better models and is more stable to train.
26
+
27
+ § 2 RELATED WORK
28
+
29
+ Particle filters are widely used for state estimation in non-linear non-Gaussian SSMs where no closed-form solution is available; see, e.g., [12] for a survey. The original bootstrap particle filter [13] samples particles at each time step from the transition density; these are then reweighted according to their conditional likelihood, which measures their "fitness" w.r.t. the available observation. Particles with low weights are then eliminated while particles with high weights are replicated, focusing computational effort on regions of high probability mass. Compared to many newer methods, such as the auxiliary particle filter [14], the bootstrap particle filter only requires sampling from the transition density, not its evaluation at arbitrary values, which is not possible for the compositional transition density used in this work.
30
+
31
+ In most practical applications, the SSM has unknown parameters that must be estimated together with the latent state posterior (see [9] for a review). Simply extending the latent space to include the unknown parameters suffers from insufficient parameter space exploration [15]. While particle filters can consistently estimate the likelihood for fixed model parameters, a core challenge is that the resulting likelihood estimate is discontinuous in the model parameters due to the resampling step, which complicates its optimization; see e.g. [6, Figure 1] for an illustration.
32
+
33
+ Instead, the score vector can be computed using Fisher's identity [8]. However, as shown in [8], performance degrades quickly for longer sequences if a standard particle filter is used, due to the path degeneracy problem: repeated resampling of particles and their ancestors will leave few or even just one remaining ancestor path for earlier timesteps, resulting in unbiased, but very high variance estimates. Methods for overcoming this limitation exist $\left\lbrack {8,{16},{17}}\right\rbrack$ , but with requirements making them unsuitable in this work. Poyiadjis et al. [8] store gradients separately for each particle, making
34
+
35
+ this approach infeasible for all but the smallest neural networks. Scibior and Wood [17] propose an improved implementation with lower memory requirements by smartly using automatic differentiation. However, their approach still requires storing a computation graph whose size scales with $\mathcal{O}\left( {N}^{2}\right)$ as the transition density for each particle pair must be evaluated during the forward pass. Both previous methods' computational complexity also scales quadratically with the number of particles, $N$ , which is problematic for costly gradient backpropagation through large neural networks. Lastly, Olsson and Westerborn [16] require evaluation of the transition density for arbitrary values, which our compositional transition model does not allow. Instead, in this work, we show that fixed-lag smoothing $\left\lbrack {{18},{19}}\right\rbrack$ is a viable alternative to compute the score function of large neural network models in the context of extended object tracking.
36
+
37
+ There is extensive literature on combining particle filters with learning complex models such as neural networks $\left\lbrack {2,3,4,5,6,{20},{21},{22},{23},{24}}\right\rbrack$ . In contrast to our work, these methods make use of a learned, data-dependent proposal distribution. However, for parameter estimation, they rely on differentiating an evidence lower bound (ELBO). Due to the non-differentiable resampling step, this gradient estimation either has extremely high variance or is biased if the high variance terms are simply dropped, as in $\left\lbrack {2,3,4}\right\rbrack$ . As we show in Section 5, this degrades performance noticeably. A second line of work proposes soft resampling [5, 20, 21], which interpolates between regular and uniform sampling, thereby allowing a trade-off between the variance reduction provided by resampling and the bias introduced by ignoring its non-differentiable component. Lastly, Corenflos et al. [6] make the resampling step differentiable by using entropy-regularized optimal transport, also inducing bias and an $\mathcal{O}\left( {N}^{2}\right)$ cost.
38
+
39
+ Extended object tracking [25] considers how to track objects which, in contrast to "small" objects [26], generate multiple sensor measurements per timestep. Unlike in our work, transition and measurement models are assumed to be known or to depend on only a few learnable parameters. Similar to our work, the measurement model proposed in [27] assumes measurement sources lying on a rectangular shape. However, our model is more flexible, for example, allowing non-zero probability on all four sides simultaneously.
40
+
41
+ § 3 STATE-SPACE MODELS AND PARTICLE FILTERING
42
+
43
+ § 3.1 STATE-SPACE MODELS
44
+
45
+ An SSM is a partially observed discrete-time Markov process with initial density ${x}_{0} \sim \mu \left( \cdot \right)$ , transition density ${x}_{t} \mid {x}_{t - 1} \sim {f}_{\theta }\left( {\cdot \mid {x}_{t - 1}}\right)$ , and observation density ${y}_{t} \mid {x}_{t} \sim {g}_{\theta }\left( {\cdot \mid {x}_{t}}\right)$ , where ${x}_{t}$ is the latent state at time $t$ and ${y}_{t}$ is the corresponding observation. The joint density of ${x}_{0 : T},{y}_{0 : T}$ satisfies:
46
+
47
+ $$
48
+ {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) = \mu \left( {x}_{0}\right) {g}_{\theta }\left( {{y}_{0} \mid {x}_{0}}\right) \mathop{\prod }\limits_{{t = 1}}^{T}{f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) . \tag{1}
49
+ $$
50
+
51
+ Given this model, we are typically interested in inferring the states from the data by computing the filtering and one-step ahead prediction distributions, ${\left\{ p\left( {x}_{t} \mid {y}_{0 : t}\right) \right\} }_{t \in 0,\ldots ,T}$ and ${\left\{ p\left( {x}_{t + 1} \mid {y}_{0 : t}\right) \right\} }_{t \in 0,\ldots ,T - 1}$ respectively, and more generally the joint distributions ${\left\{ p\left( {x}_{0 : t} \mid {y}_{0 : t}\right) \right\} }_{t \in 0,\ldots ,T}$ satisfying
52
+
53
+ $$
54
+ {p}_{\theta }\left( {{x}_{0 : t} \mid {y}_{0 : t}}\right) = \frac{{p}_{\theta }\left( {{x}_{0 : t},{y}_{0 : t}}\right) }{{p}_{\theta }\left( {y}_{0 : t}\right) },\;{p}_{\theta }\left( {y}_{0 : T}\right) = \int {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) \mathrm{d}{x}_{0 : T}. \tag{2}
55
+ $$
56
+
57
+ Additionally, to estimate parameters, we would also like to compute the marginal log likelihood:
58
+
59
+ $$
60
+ {\ell }_{T}\left( \theta \right) = \log {p}_{\theta }\left( {y}_{0 : T}\right) = \log {p}_{\theta }\left( {y}_{0}\right) + \mathop{\sum }\limits_{{t = 1}}^{T}\log {p}_{\theta }\left( {{y}_{t} \mid {y}_{0 : t - 1}}\right) , \tag{3}
61
+ $$
62
+
63
+ where ${p}_{\theta }\left( {y}_{0}\right) = \int {g}_{\theta }\left( {{y}_{0} \mid {x}_{0}}\right) \mu \left( {x}_{0}\right) \mathrm{d}{x}_{0}$ and ${p}_{\theta }\left( {{y}_{t} \mid {y}_{0 : t - 1}}\right) = \int {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) {p}_{\theta }\left( {{x}_{t} \mid {y}_{0 : t - 1}}\right) \mathrm{d}{x}_{t}$ for $t \geq 1$ . For non-linear non-Gaussian SSMs, these posterior distributions and the corresponding marginal likelihood cannot be computed in closed form.
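+ As a purely illustrative example of the generative process above (a toy linear-Gaussian SSM with hand-picked constants, not the models used later in this paper), sampling a trajectory looks as follows:
+
+ ```python
+ import numpy as np
+
+ def sample_toy_ssm(T, rng=None):
+     # x_0 ~ N(0, 1);  x_t = 0.9 x_{t-1} + N(0, 0.1);  y_t = x_t + N(0, 0.5)
+     rng = rng or np.random.default_rng(0)
+     xs, ys = [], []
+     x = rng.normal(0.0, 1.0)
+     for t in range(T + 1):
+         if t > 0:
+             x = 0.9 * x + rng.normal(0.0, np.sqrt(0.1))
+         y = x + rng.normal(0.0, np.sqrt(0.5))
+         xs.append(x)
+         ys.append(y)
+     return np.array(xs), np.array(ys)
+
+ states, observations = sample_toy_ssm(T=50)
+ ```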
64
+
65
+ § 3.2 PARTICLE FILTERING
66
+
67
+ Particle methods provide non-parametric and consistent approximations of these quantities. They rely on the combination of importance sampling and resampling steps applied to a set of $N$ weighted particles $\left( {{x}_{t}^{i},{w}_{t}^{i}}\right)$ .
68
+
69
+ Here, ${x}_{t}^{i}$ denotes the value of the ${i}^{\text{th}}$ particle at time $t$ and ${w}_{t}^{i}$ is the corresponding weight, satisfying $\mathop{\sum }\limits_{{i = 1}}^{N}{w}_{t}^{i} = 1$ . We focus on the bootstrap particle filter, shown in Algorithm 1, which samples particles according to the transition density.
70
+
71
+ Algorithm 1 Bootstrap Particle Filter
72
+
73
+ Sample ${X}_{0}^{i}\overset{\text{ i.i.d. }}{ \sim }\mu \left( \cdot \right)$ for $i \in \left\lbrack N\right\rbrack$ and set ${\widehat{\ell }}_{0}\left( \theta \right) \leftarrow \log \left( {\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{g}_{\theta }\left( {{y}_{0} \mid {x}_{0}^{i}}\right) }\right)$ .
74
+
75
+ For $t = 1,\ldots ,T$
76
+
77
+ 1. Compute weights ${w}_{t - 1}^{i} \propto {g}_{\theta }\left( {{y}_{t - 1} \mid {x}_{t - 1}^{i}}\right)$ with $\mathop{\sum }\limits_{{i = 1}}^{N}{w}_{t - 1}^{i} = 1$ .
78
+
79
+ 2. Sample ${a}_{t - 1}^{i} \sim \operatorname{Cat}\left( {{w}_{t - 1}^{1},\ldots ,{w}_{t - 1}^{N}}\right)$ then ${x}_{t}^{i} \sim {f}_{\theta }\left( {\cdot \mid {x}_{t - 1}^{{a}_{t - 1}^{i}}}\right)$ for $i \in \left\lbrack N\right\rbrack$ .
80
+
81
+ 3. Set ${x}_{0 : t}^{i} \leftarrow \left( {{x}_{0 : t - 1}^{{a}_{t - 1}^{i}},{x}_{t}^{i}}\right)$ for $i \in \left\lbrack N\right\rbrack$ and ${\widehat{\ell }}_{t}\left( \theta \right) \leftarrow {\widehat{\ell }}_{t - 1}\left( \theta \right) + \log \left( {\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{g}_{\theta }\left( {{y}_{t} \mid {x}_{t}^{i}}\right) }\right)$ .
82
+
83
+ Let $k \sim \operatorname{Cat}\left( {{\alpha }_{1},\ldots ,{\alpha }_{N}}\right)$ denote the categorical distribution with parameters $\left( {{\alpha }_{1},\ldots ,{\alpha }_{N}}\right)$ , so that $\mathbb{P}\left( {k = i}\right) = {\alpha }_{i}$ . At any time $t$ , this algorithm produces particle approximations
84
+
85
+ $$
86
+ {\widehat{p}}_{\theta }\left( {{x}_{0 : t} \mid {y}_{0 : t}}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{t}^{i}{\delta }_{{x}_{0 : t}^{i}}\left( {x}_{0 : t}\right) ,\;{\widehat{\ell }}_{t}\left( \theta \right) = \mathop{\sum }\limits_{{s = 0}}^{t}\log \left( {\frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{g}_{\theta }\left( {{y}_{s} \mid {x}_{s}^{i}}\right) }\right) , \tag{4}
87
+ $$
88
+
89
+ of ${p}_{\theta }\left( {{x}_{0 : t} \mid {y}_{0 : t}}\right)$ and ${\ell }_{t}\left( \theta \right) = \log {p}_{\theta }\left( {y}_{0 : t}\right)$ . Step 2 resamples, discarding particles with small weights while replicating those with large weights before evolving according to the transition density. This focuses computational effort on the "promising" regions of the state space. Unfortunately, resampling involves sampling $N$ discrete random variables at each time step and as such produces estimates of the log likelihood that are not differentiable w.r.t. $\theta$ as illustrated in [6, Figure 1].
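+ A minimal NumPy sketch of Algorithm 1 is given below; `sample_mu`, `sample_f`, and `log_g` are user-supplied placeholders for the initial sampler, the transition sampler, and the observation log-density, not the models used in this paper:
+
+ ```python
+ import numpy as np
+
+ def bootstrap_pf(ys, sample_mu, sample_f, log_g, N=500, seed=0):
+     # Bootstrap particle filter: returns the log-likelihood estimate and the
+     # filtering particles at every time step (multinomial resampling each step).
+     rng = np.random.default_rng(seed)
+     x = sample_mu(N, rng)                                   # x_0^i ~ mu(.)
+     log_lik = 0.0
+     particle_history = [x]
+     for t, y in enumerate(ys):
+         logw = log_g(y, x)                                  # unnormalised log-weights
+         log_lik += np.logaddexp.reduce(logw) - np.log(N)    # log (1/N) sum_i g(y_t | x_t^i)
+         if t < len(ys) - 1:
+             w = np.exp(logw - np.logaddexp.reduce(logw))    # normalised weights
+             a = rng.choice(N, size=N, p=w)                  # resampled ancestor indices
+             x = sample_f(x[a], rng)                         # x_{t+1}^i ~ f(. | x_t^{a_i})
+             particle_history.append(x)
+     return log_lik, particle_history
+ ```
+
+ With the toy model from the previous sketch this returns an estimate of ${\ell }_{T}\left( \theta \right)$ ; resampling at every step mirrors Algorithm 1, although in practice resampling is often triggered adaptively, e.g., based on the effective sample size.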
90
+
91
+ While the resulting estimates are consistent as $N \rightarrow \infty$ for any fixed time $t$ [28], this does not guarantee good practical performance. Fortunately, under regularity conditions the approximation error for the estimate ${\widehat{p}}_{\theta }\left( {{x}_{t} \mid {y}_{0 : t}}\right)$ and more generally ${\widehat{p}}_{\theta }\left( {{x}_{t - L + 1 : t} \mid {y}_{0 : t}}\right)$ for a fixed lag $L \geq 1$ as well as $\log {p}_{\theta }\left( {y}_{0 : t}\right) /t$ does not increase with $t$ for fixed $N$ . However, this is not the case for the joint smoothing approximation because successive resampling means that ${\widehat{p}}_{\theta }\left( {{x}_{0 : L} \mid {y}_{0 : t}}\right)$ is eventually approximated by a single unique particle for large enough $t$ , a phenomenon known as path degeneracy; see e.g. [12, Section 4.3].
92
+
93
+ § 4 SCORE ESTIMATION USING PARTICLE METHODS
94
+
95
+ Given an SSM (1) and a dataset of observations ${y}_{0 : T}$ , we want to estimate the parameters $\theta$ by maximising the marginal log likelihood in (3) via gradient ascent. However, the gradient of the marginal log likelihood, i.e., the score function, is intractable. As explained in Section 2, automatic differentiation through the filter is difficult due to the non-differentiable resampling step.
96
+
97
+ § 4.1 SCORE FUNCTION USING FISHER'S IDENTITY
98
+
99
+ We leverage here instead Fisher's identity [8] for the score to completely side-step the non-differentiability problem. This identity shows that
100
+
101
+ $$
102
+ {\nabla }_{\theta }{\ell }_{T}\left( \theta \right) = \int {\nabla }_{\theta }\log {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) {p}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right) \mathrm{d}{x}_{0 : T}, \tag{5}
103
+ $$
104
+
105
+ i.e., the score is the expectation of ${\nabla }_{\theta }\log {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right)$ under the joint smoothing distribution ${p}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right)$ . Plugging in (1), the score function can be simplified to
106
+
107
+ $$
108
+ {\nabla }_{\theta }{\ell }_{T}\left( \theta \right) = \mathop{\sum }\limits_{{t = 0}}^{T}\int {\nabla }_{\theta }\log {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) {p}_{\theta }\left( {{x}_{t} \mid {y}_{0 : T}}\right) \mathrm{d}{x}_{t}
109
+ $$
110
+
111
+ $$
112
+ + \mathop{\sum }\limits_{{t = 1}}^{T}\int {\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) {p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : T}}\right) \mathrm{d}{x}_{t - 1 : t}. \tag{6}
113
+ $$
114
+
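+ For completeness, the short argument behind identity (5) is the standard manipulation
+
+ $$
+ {\nabla }_{\theta }{\ell }_{T}\left( \theta \right) = \frac{{\nabla }_{\theta }{p}_{\theta }\left( {y}_{0 : T}\right) }{{p}_{\theta }\left( {y}_{0 : T}\right) } = \int \frac{{\nabla }_{\theta }{p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) }{{p}_{\theta }\left( {y}_{0 : T}\right) }\mathrm{d}{x}_{0 : T} = \int {\nabla }_{\theta }\log {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) \frac{{p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) }{{p}_{\theta }\left( {y}_{0 : T}\right) }\mathrm{d}{x}_{0 : T},
+ $$
+
+ whose last integrand is exactly ${\nabla }_{\theta }\log {p}_{\theta }\left( {{x}_{0 : T},{y}_{0 : T}}\right) {p}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right)$ ; (6) then follows by substituting the factorisation (1), dropping the $\theta$ -independent term $\mu \left( {x}_{0}\right)$ , and marginalising.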
115
+ § 4.2 PARTICLE SCORE APPROXIMATION
116
+
117
+ The identity (6) shows that we can simply estimate the score by plugging particle approximations of the marginal smoothing distributions $p\left( {{x}_{t - 1 : t} \mid {y}_{0 : T}}\right)$ into (6). This identity makes differentiating through time superfluous and thereby renders the use of differentiable approximations of resampling unnecessary. However, as discussed in Section 3.2, naive particle approximations of the smoothing distribution’s marginals, ${p}_{\theta }\left( {{x}_{t} \mid {y}_{0 : T}}\right)$ and ${p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : T}}\right)$ , suffer from path degeneracy. To bypass this problem, [8, 17] propose an $\mathcal{O}\left( {N}^{2}\right)$ method inspired by dynamic programming. We propose here a simpler and computationally cheaper method that relies on the following fixed-lag approximation of the fixed-interval smoothing distribution, which states that for $L \geq 1$ large enough,
118
+
119
+ $$
120
+ {p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : T}}\right) \approx {p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L,T\} }}\right) . \tag{7}
121
+ $$
122
+
123
+ This approximation simply assumes that observations after time $t + L$ do not bring further information about the states ${x}_{t - 1},{x}_{t}$ . This is satisfied for most models and the resulting approximation error decreases geometrically fast with $L$ [19]. The benefit of this approximation is that the particle approximation of ${p}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L,T\} }}\right)$ does not suffer from path degeneracy and is a simple byproduct of the bootstrap particle filtering of Algorithm 1; e.g., for $t + L < T$ we consider the particle approximation ${\widehat{p}}_{\theta }\left( {{x}_{0 : t + L} \mid {y}_{0 : t + L}}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{t + L}^{i}{\delta }_{{x}_{0 : t + L}^{i}}\left( {x}_{0 : t + L}\right)$ obtained at time $t + L$ and use its corresponding marginals in ${x}_{t - 1},{x}_{t}$ and ${x}_{t}$ to integrate respectively ${\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right)$ and ${\nabla }_{\theta }\log {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right)$ . For $t + L \geq T$ , we just consider the marginals in ${x}_{t - 1},{x}_{t}$ and ${x}_{t}$ of ${\widehat{p}}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right)$ . So finally, we consider the estimate,
124
+
125
+ $$
126
+ \overset{⏜}{{\nabla }_{\theta }{\ell }_{T}}\left( \theta \right) = \mathop{\sum }\limits_{{t = 0}}^{T}\int {\nabla }_{\theta }\log {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) {\widehat{p}}_{\theta }\left( {{x}_{t} \mid {y}_{0 : \min \{ t + L,T\} }}\right) \mathrm{d}{x}_{t}
127
+ $$
128
+
129
+ $$
130
+ + \mathop{\sum }\limits_{{t = 1}}^{T}\int {\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) {\widehat{p}}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L,T\} }}\right) \mathrm{d}{x}_{t - 1 : t}. \tag{8}
131
+ $$
132
+
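+ One way such an estimate can be assembled offline from quantities the bootstrap filter already produces (particles, ancestor indices, and filtering weights) is sketched below; this is an illustration of the fixed-lag idea rather than the implementation used for our experiments, and `grad_log_g` / `grad_log_f` are user-supplied placeholders returning gradients with respect to $\theta$ :
+
+ ```python
+ import numpy as np
+
+ def fixed_lag_score(X, A, W, ys, grad_log_g, grad_log_f, lag):
+     # X[t]: (N, d) particles at time t;  A[t]: (N,) ancestor indices used when
+     # sampling X[t] from X[t-1] (t >= 1);  W[t]: (N,) normalised filtering weights.
+     T = len(X) - 1
+     score = np.zeros_like(grad_log_g(ys[0], X[0][0]))
+     for t in range(T + 1):
+         s = min(t + lag, T)
+         idx = np.arange(len(W[s]))
+         for u in range(s, t, -1):           # trace time-s particles back to time t
+             idx = A[u][idx]
+         for i, w_i in enumerate(W[s]):
+             x_t = X[t][idx[i]]
+             score += w_i * grad_log_g(ys[t], x_t)
+             if t >= 1:
+                 x_prev = X[t - 1][A[t][idx[i]]]
+                 score += w_i * grad_log_f(x_prev, x_t)
+     return score
+ ```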
133
+ § 4.3 SCORE ESTIMATION WITH DETERMINISTIC, DIFFERENTIABLE, INJECTIVE MOTION MODELS
134
+
135
+ We have described a generic method to approximate the score using particle filtering techniques. For many applications, however, the transition density function, ${f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right)$ , is the composition of a policy, ${\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right)$ , which characterises the action distribution conditioned on the state, and a potentially complex but deterministic, differentiable, and injective motion model, $\tau : {\mathbb{R}}^{{n}_{x}} \times {\mathbb{R}}^{{n}_{a}} \rightarrow {\mathbb{R}}^{{n}_{x}}$ where ${n}_{a} < {n}_{x}$ , which characterises kinematic constraints such that ${x}_{t} = \tau \left( {{x}_{t - 1},{a}_{t}}\right) = {\bar{\tau }}_{{x}_{t - 1}}\left( {a}_{t}\right)$ . Under such a composition, the transition density function on the induced manifold ${\mathcal{M}}_{{x}_{t - 1}} =$ $\left\{ {{\bar{\tau }}_{{x}_{t - 1}}\left( {a}_{t}\right) : {a}_{t} \in {\mathbb{R}}^{{n}_{a}}}\right\}$ is thus obtained by marginalising out the latent action variable, i.e.,
136
+
137
+ $$
138
+ {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) = \mathbb{I}\left( {{x}_{t} \in {\mathcal{M}}_{{x}_{t - 1}}}\right) \int \delta \left( {{x}_{t} - {\bar{\tau }}_{{x}_{t - 1}}\left( {a}_{t}\right) }\right) {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right) \mathrm{d}{a}_{t}. \tag{9}
139
+ $$
140
+
141
+ It is easy to sample from this density but it is intractable analytically if the motion model is only available through a complex simulator or if it is not invertible. This precludes the use of sophisticated proposal distributions within the particle filter. Additionally, even if it were known, one cannot use the $\mathcal{O}\left( {N}^{2}\right)$ smoothing type algorithms developed in [8,16] as the density is concentrated on a low-dimensional manifold [29]. This setting is common in mobile robotics, in which controllers factor into policies that select actions and motion models that determine the next state. Indeed, this is precisely the case in our application setting of estimating the state of observed road users around an AV (see Section 5). Learning the corresponding SSM reduces to learning the parameters $\theta$ of the policy, ${\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right)$ , and the observation model, ${g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right)$ . Thankfully, even if the explicit form of the motion model is unknown, we can still compute $\nabla \log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right)$ as required by the score estimate (8).
142
+
143
+ Lemma 4.1. For any $x \in {\mathbb{R}}^{{n}_{x}}$ , let ${\tau }_{x} : {\mathbb{R}}^{{n}_{a}} \rightarrow {\mathbb{R}}^{{n}_{x}}$ where ${n}_{a} < {n}_{x}$ be a smooth and injective mapping. Then, for any fixed ${x}_{t - 1}$ and ${x}_{t} \in {\mathcal{M}}_{{x}_{t - 1}}$ , the gradient of the transition log density, i.e., ${\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right)$ , reduces to the gradient of the policy log density, i.e., ${\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right)$ , where ${a}_{t}$ is the unique action that takes ${x}_{t - 1}$ to ${x}_{t}$ .
144
+
145
+ Proof. For ${x}_{t - 1}$ and ${x}_{t} \in {\mathcal{M}}_{{x}_{t - 1}}$ , we denote by $J\left\lbrack {\bar{\tau }}_{{x}_{t - 1}}\right\rbrack \left( {{\bar{\tau }}_{{x}_{t - 1}}^{-1}\left( {x}_{t}\right) }\right) \in {\mathbb{R}}^{{n}_{x} \times {n}_{a}}$ the rectangular Jacobian matrix and write ${a}_{t} = {\bar{\tau }}_{{x}_{t - 1}}^{-1}\left( {x}_{t}\right)$ , i.e., this is the unique action such ${\bar{\tau }}_{{x}_{t - 1}}\left( {a}_{t}\right) = {x}_{t}$ . By a standard result from differential geometry $\left\lbrack {{30},{31}}\right\rbrack$ , the transition density (9) satisfies
146
+
147
+ $$
148
+ {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) = {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right) {\left| \det J{\left\lbrack {\bar{\tau }}_{{x}_{t - 1}}\right\rbrack }^{\mathrm{T}}\left( {a}_{t}\right) J\left\lbrack {\bar{\tau }}_{{x}_{t - 1}}\right\rbrack \left( {a}_{t}\right) \right| }^{-1/2}\mathbb{I}\left( {{x}_{t} \in {\mathcal{M}}_{{x}_{t - 1}}}\right) . \tag{10}
149
+ $$
150
+
151
+ (a) An observation from the real data in the form of a set of $2\mathrm{D}$ points (in blue) forming a convex polygon around the observed road user's true (manually labelled) bounding box (in red). (b) Sampled observation from the synthetic data in the form of a set of $2\mathrm{D}$ points (in blue) forming a convex polygon around a synthetic road user's sampled bounding box (in red).
152
+
153
+ Figure 1: Observations from real and synthetic data. Notice the similarity between the two with regards to the distribution of the 2D points from the viewpoint of the observing AV (in green).
154
+
155
+ It follows directly that ${\nabla }_{\theta }\log {f}_{\theta }\left( {{x}_{t} \mid {x}_{t - 1}}\right) = {\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right)$ .
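+ To make (10) and Lemma 4.1 concrete, consider a toy unicycle-style motion model (not the Ackermann model used in our experiments) with a 2D position-and-heading state and a speed/turn-rate action; the Jacobian correction can be evaluated directly and, since it does not depend on the policy parameters, the gradient of $\log {f}_{\theta }$ indeed reduces to that of $\log {\pi }_{\theta }$ :
+
+ ```python
+ import numpy as np
+
+ def log_f_on_manifold(log_pi_a, x_prev, dt=0.1):
+     # Toy step tau(x, a) = (px + v cos(th) dt, py + v sin(th) dt, th + w dt),
+     # with state x = (px, py, th) and action a = (v, w), so n_a = 2 < n_x = 3.
+     th = x_prev[2]
+     J = np.array([[np.cos(th) * dt, 0.0],
+                   [np.sin(th) * dt, 0.0],
+                   [0.0,             dt ]])   # rectangular Jacobian of tau w.r.t. a
+     # |det(J^T J)|^{-1/2} correction from (10); here it equals dt**-2 and does not
+     # depend on the policy parameters, which is why Lemma 4.1 holds.
+     correction = np.abs(np.linalg.det(J.T @ J)) ** -0.5
+     return log_pi_a + np.log(correction)
+ ```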
156
+
157
+ Indeed for the marginals ${\widehat{p}}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L,T\} }}\right)$ , we can store the actions corresponding to transitions ${x}_{t - 1} \rightarrow {x}_{t}$ during filtering, and it follows that for the class of SSMs described above, the score estimate reduces to:
158
+
159
+ $$
160
+ \overset{⏜}{{\nabla }_{\theta }{\ell }_{T}}\left( \theta \right) = \mathop{\sum }\limits_{{t = 0}}^{T}\int {\nabla }_{\theta }\log {g}_{\theta }\left( {{y}_{t} \mid {x}_{t}}\right) {\widehat{p}}_{\theta }\left( {{x}_{t} \mid {y}_{0 : \min \{ t + L,T\} }}\right) \mathrm{d}{x}_{t}
161
+ $$
162
+
163
+ $$
164
+ + \mathop{\sum }\limits_{{t = 1}}^{T}\int {\nabla }_{\theta }\log {\pi }_{\theta }\left( {{a}_{t} \mid {x}_{t - 1}}\right) {\widehat{p}}_{\theta }\left( {{x}_{t - 1 : t} \mid {y}_{0 : \min \{ t + L,T\} }}\right) \mathrm{d}{x}_{t - 1 : t}, \tag{11}
165
+ $$
166
+
167
+ where we use Lemma 4.1 to replace the gradient of the transition log density with the gradient of the policy log density in (8), and where ${a}_{t}$ is the action sampled to go from ${x}_{t - 1}$ to ${x}_{t}$ .
168
+
169
+ § 5 EXPERIMENTS
170
+
171
+ Problem Setting. Our experiments focus on the problem of state estimation of observed road users (in particular other vehicles) from the viewpoint of an AV, which involves the estimation of 2D poses from an observed sequence of $2\mathrm{D}$ convex polygons in a "bird's eye view" (BEV) constructed from LiDAR point clouds at each time step. For these experiments, we assume that the size of the observed objects, the pose of the AV, and the association of observations with their corresponding objects are known a priori. One such observation (and its corresponding state) is shown in Figure 1a. Here, the observation model must learn to describe the likelihood of 2D points around the periphery of the observed road user (see [25] for a review on such models), while the transition model must learn to describe driving behaviour. We use a feed-forward neural network to parameterise our observation model, where we provide it with range, bearing, and relative bearing from the viewpoint of the corresponding AV as features (Appendix A), and factor our transition model into a deterministic and differentiable motion model based on Ackermann dynamics [32] (Appendix B.1), and a policy parameterised by another feed-forward neural network (Appendix B.2).
172
+
173
+ Baselines, Datasets, and Metrics. We compare the quality of the models learned using PF-SEFI (our method), DPF-SGR [17], PFNET [5], and differentiating through a vanilla PF (ignoring the bias introduced by resampling). In addition to using real data collected from an AV, we generate two synthetic datasets (with 25 and 50 step trajectories), using a hand-crafted policy, and an observation model trained using supervised learning on manually labelled trajectories (Appendix A). An example observation is shown in Figure 1b. Unlike with real data, where the true models are unknown, synthetic datasets allow us to compare the learned models against a known ground truth. We measure the quality of learned models using the following metrics:
174
+
175
+ * Marginal Log Likelihood (MLL): The marginal log likelihood ${\ell }_{T}\left( \theta \right)$ given by filtering observations ${y}_{0 : T}$ using the learned models.
176
+
177
178
+
179
+ Figure 2: Marginal Log Likelihood (MLL) on synthetic and real test data for models trained using PF-SEFI (us), DPF-SGR, PFNET, and PF, plotted against the corresponding training steps. For synthetic data we also show the MLL of the true models.
180
+
181
+ * Average Displacement Error (ADE) and Average Yaw Error (AYE): The average error in the positions and yaws respectively of the smoothed state estimates ${\mathbb{E}}_{\theta }\left( {{x}_{0 : T} \mid {y}_{0 : T}}\right)$ against the true poses, ${\bar{x}}_{0 : T}$ . For the synthetic data, the true poses are sampled while generating the data; for the real data, the true poses are obtained by humans manually labelling object trajectories from videos. These measure the quality of the learned models for the purposes of state estimation (a small illustrative sketch of these two metrics is given after this list).
182
+
183
+ * Average Observation True Log Likelihood (AOTLL): The average log likelihood of observations conditioned on the corresponding true states under the learned observation model, i.e., $\frac{1}{T + 1}\mathop{\sum }\limits_{{t = 0}}^{T}\log {g}_{\theta }\left( {{y}_{t} \mid {\bar{x}}_{t}}\right)$ . This measures the standalone quality of the learned observation model.
184
+
185
+ * Average Policy True Log Likelihood (APTLL): The average log likelihood of true actions, ${\bar{a}}_{1 : T}$ ,(only available for experiments with synthetic data since it is not possible to manually label latent actions) conditioned on the corresponding true states under the learned policy, i.e., $\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\log {\pi }_{\theta }\left( {{\bar{a}}_{t} \mid {\bar{x}}_{t - 1}}\right)$ . This measures the standalone quality of the learned policy.
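+ A minimal sketch of the ADE and AYE computations on 2D poses given as rows of $\left( {x,y,\text{yaw}}\right)$ follows; the exact averaging conventions behind the reported numbers may differ:
+
+ ```python
+ import numpy as np
+
+ def ade_aye(est_poses, true_poses):
+     # est_poses, true_poses: (T+1, 3) arrays of (x, y, yaw).
+     est, true = np.asarray(est_poses), np.asarray(true_poses)
+     ade = np.mean(np.linalg.norm(est[:, :2] - true[:, :2], axis=1))
+     dyaw = est[:, 2] - true[:, 2]
+     dyaw = (dyaw + np.pi) % (2.0 * np.pi) - np.pi     # wrap to [-pi, pi)
+     aye = np.mean(np.abs(dyaw))
+     return ade, aye
+ ```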
186
+
187
+ Results. Figure 2 shows the progress of the learned models by tracking the MLL on held-out test data for each of the three datasets (synthetic data with 25 steps, synthetic data with 50 steps, and real data with 60 steps), and for each of the four methods (PF-SEFI, DPF-SGR, PFNET, and PF). Table 1 summarises the performance of the learned models at convergence. We pick the best hyper-parameters, the smoothing lag $L$ for PF-SEFI and the trade-off parameter $\alpha$ for PFNET, in each of the experiments (Appendix C).
188
+
189
+ In our experiments with synthetic data with 25 steps (Figure 2a and Experiment A in Table 1), we observe a clear gap in performance of PF-SEFI and DPF-SGR relative to PFNET and PF. The improvements over PF are likely due to the bias in PF's score estimates caused by the non-differentiable resampling step, while the improvements over PFNET are likely due to adverse effects of not resampling with the correct distribution at each time step. While PF-SEFI and DPF-SGR perform similarly on this dataset, the difference is stark in the case of synthetic data with 50 steps (Figure 2b and Experiment B in Table 1). PF-SEFI is insensitive to the length of the trajectories used and converges stably; however, all other methods struggle to learn useful models. We postulate that since each of the baselines, in one way or another, differentiates through all time steps of the filter, the variance in their score estimates is too high for good learning through gradient ascent. ${}^{1}$
190
+
191
+ The results of our experiments with real data with 60 steps (Figure 2c and Experiment C in Table 1) are consistent with Experiment B (i.e., with experiments on synthetic data with 50 steps) and show that PF-SEFI is able to learn useful models. The learned observation model using PF-SEFI performs even better than the model that was trained offline through supervision with manually labelled data (see AOTLL in Table 1 for Experiment C). We also find that sampling from the learned
192
+
193
+ ${}^{1}$ The authors of DPF-SGR [17] recommend the use of stop gradients not only for particle weights after resampling, i.e., ${\widetilde{v}}_{t}^{i} = {\bar{v}}_{t}^{{a}^{i}}/ \bot {\bar{v}}_{t}^{{a}^{i}}\left\lbrack {{17}\text{ , Algorithm 1 }}\right\rbrack$ , but also, in the case of bootstrap particle filters, while computing the likelihood ratio ${v}_{t}^{i} = {\widetilde{v}}_{t - 1}^{i}{p}_{\theta }\left( {{x}_{t}^{i},{y}_{t} \mid {x}_{t - 1}^{{a}^{i}}}\right) / \bot {q}_{\theta }\left( {{x}_{t}^{i} \mid {x}_{t - 1}^{{a}^{i}}}\right)$ before resampling, and while sampling from ${x}_{t}^{i} \sim {q}_{\theta }\left( {\cdot \mid {x}_{t - 1}^{{a}^{i}}}\right) \left\lbrack {{17}\text{ , Section 4.1 }}\right\rbrack$ . While these additional stop gradients significantly reduce variance, our experiments with them yielded extremely poor overall performance (even with synthetic data with 25 steps). The results we report here thus make use of stop-gradients only for particle weights after resampling.
194
+
195
+ Table 1: Metrics computed on held-out test data comparing PF-SEFI (us) against baselines DPF-SGR, PFNET, and vanilla PF, on three experiments - (A) learning from synthetic data with 25 steps, (B) learning from synthetic data with 50 steps, and (C) learning from real data with 60 steps. For experiments (A) and (B), we also compare against the performance of the true models. For experiment (C), we compare against the supervised observation model trained using manually labelled trajectories. For MLL, AOTLL, and APTLL, higher values imply better models, while for ADE and AYE, lower values imply better models.
196
+
197
+ Exp. | Method | MLL | AOTLL | APTLL | ADE (m) | AYE (rad)
+ A | TRUE | $- {3.161} \pm {0.003}$ | -2.128 | 2.674 | ${0.090} \pm {0.001}$ | ${0.014} \pm {0.000}$
+ A | PF-SEFI (us) | $- {3.147} \pm {0.004}$ | $- {2.285} \pm {0.028}$ | ${2.661} \pm {0.014}$ | ${0.186} \pm {0.021}$ | ${0.016} \pm {0.000}$
+ A | DPF-SGR | $- {3.159} \pm {0.004}$ | $- {2.265} \pm {0.010}$ | ${2.594} \pm {0.027}$ | $\mathbf{{0.165} \pm {0.008}}$ | $\mathbf{{0.014} \pm {0.000}}$
+ A | PFNET | $- {3.225} \pm {0.004}$ | $- {2.487} \pm {0.026}$ | ${2.621} \pm {0.013}$ | ${0.264} \pm {0.019}$ | ${0.017} \pm {0.000}$
+ A | PF | $- {3.229} \pm {0.006}$ | $- {2.484} \pm {0.026}$ | ${2.576} \pm {0.021}$ | ${0.245} \pm {0.021}$ | ${0.017} \pm {0.000}$
+ B | TRUE | $- {3.145} \pm {0.002}$ | -2.165 | 2.693 | ${0.088} \pm {0.001}$ | ${0.012} \pm {0.000}$
+ B | PF-SEFI (us) | $- {3.141} \pm {0.005}$ | $- {2.283} \pm {0.015}$ | $\mathbf{{2.505} \pm {0.042}}$ | $\mathbf{{0.165} \pm {0.013}}$ | $\mathbf{{0.014} \pm {0.000}}$
+ B | DPF-SGR | $- {3.966} \pm {0.050}$ | $- {2.636} \pm {0.031}$ | ${0.811} \pm {0.130}$ | ${2.828} \pm {0.415}$ | ${0.142} \pm {0.016}$
+ B | PFNET | $- {4.169} \pm {0.046}$ | $- {2.901} \pm {0.039}$ | ${0.539} \pm {0.077}$ | ${2.809} \pm {0.176}$ | ${0.148} \pm {0.008}$
+ B | PF | $- {4.118} \pm {0.038}$ | $- {2.841} \pm {0.025}$ | ${0.681} \pm {0.122}$ | ${2.502} \pm {0.042}$ | ${0.137} \pm {0.007}$
+ C | SUPERVISED | N/A | $- {2.224} \pm {0.006}$ | N/A | N/A | N/A
+ C | PF-SEFI (us) | $- {2.447} \pm {0.029}$ | $- {1.973} \pm {0.029}$ | N/A | ${0.275} \pm {0.011}$ | $\mathbf{{0.034} \pm {0.006}}$
+ C | DPF-SGR | $- {3.297} \pm {0.287}$ | $- {2.236} \pm {0.218}$ | N/A | ${0.643} \pm {0.177}$ | ${0.081} \pm {0.477}$
+ C | PFNET | $- {4.019} \pm {0.098}$ | $- {2.752} \pm {0.079}$ | N/A | ${0.746} \pm {0.091}$ | ${1.015} \pm {0.159}$
+ C | PF | $- {3.848} \pm {0.045}$ | $- {2.639} \pm {0.140}$ | N/A | ${0.701} \pm {0.109}$ | ${1.082} \pm {0.364}$
247
+
248
+ model produces observations that are qualitatively similar to the real data (Appendix D). While the supervised model is trained only on the subset of the observations that are labeled (labelling only a subset is common in practical applications due to the cost of labelling), PF-SEFI, by contrast, can leverage all observations in a self-supervised fashion. Moreover, we speculate that the labels contain noise and that the labelling distribution is biased towards observations that are easy to label. Both limitations hinder supervised learning.
249
+
250
+ § 6 DISCUSSION, LIMITATIONS, AND FUTURE WORK
251
+
252
+ In this work, we proposed an efficient particle-based approach for estimating the score function to learn SSMs in a completely self-supervised way. Compared to previous particle-based methods that estimate the score, our method is more computationally efficient, allowing us to scale to learning models with many parameters and apply it to a real-world AV object tracking problem. We also showed empirically that our method learns better models and is more stable in training than recent methods that use automatic differentiation to estimate the score, and that we can learn an observation model that outperforms one trained through supervised learning, without using any labels.
253
+
254
+ While this solution is ideal for our problem, it does have a number of limitations. Most notably, it is restricted to maximising the marginal log-likelihood of the data, while differentiating through the filter allows for arbitrary differentiable loss functions. Furthermore, our method is not suitable for estimating the parameters of a proposal distribution. Beyond these algorithmic limitations, in our application, the models that we used were not very expressive. For the observation model, we did not model important phenomena that affect partial observability such as occlusions and we restricted our states and observations to 2D. For the policy, we used a simplified policy with only basic features that are insufficient for controlling an agent in simulation. Furthermore, the policy and the motion model are both specific to vehicles, and currently exclude other road users such as pedestrians.
255
+
256
+ In future work, we aim to scale up our problem setting, by making both models more expressive, and to estimate more state dimensions, such as full 3D poses and sizes of objects. We also believe that learning policies as components of an SSM to explicitly account for observation noise is, in practice, critical for learning good driving behaviour from demonstrations. Such policies could be used as models for predicting the behaviour of other road-users, or to control agents in simulation, and the method we proposed in this work offers an ideal starting point to explore this.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/7RyzGWLk79H/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,267 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Self-supervised deep learning methods for joint depth and ego-motion estimation can yield accurate trajectories without needing ground-truth training data. However, as they typically use photometric losses, their performance can degrade significantly when the assumptions these losses make (e.g. temporal illumination consistency, a static scene, and the absence of noise and occlusions) are violated. This limits their use for e.g. nighttime sequences, which tend to contain many point light sources (including on dynamic objects) and low signal-to-noise ratio (SNR) in darker image regions. In this paper, we show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images. First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames. Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth from the networks. And third, we denoise the training images to improve the robustness and accuracy of our approach. These changes allow us to train a single model for both day and nighttime images without needing separate encoders or extra feature networks like existing methods. We perform extensive experiments and ablation studies on the challenging Oxford RobotCar dataset to demonstrate the efficacy of our approach for both day and nighttime sequences.
12
+
13
+ ## 1 Introduction
14
+
15
+ An ability to capture 3D scene structure is crucial for many applications, including autonomous driving [1], robotic manipulation [2], and augmented reality [3]. Many methods use LiDAR or fixed-baseline stereo to acquire the depth needed to reconstruct a scene, but researchers have also long been interested in estimating depth from monocular images, driven by the ubiquity, low cost, low power consumption and ease of deployment of monocular cameras. By contrast, LiDAR can be power-hungry, and stereo rigs must be calibrated and time-synchronised to achieve good performance.
16
+
17
+ Multi-view monocular depth estimation approaches have long used variable-baseline stereo over multiple images to recover depth $\left\lbrack {4,5}\right\rbrack$ . Meanwhile, progress in deep learning has opened up the additional possibility of estimating depth from a single monocular image. Deep learning methods for depth estimation can be broadly divided into two types, namely supervised methods [6, 7], and self/unsupervised methods $\left\lbrack {8,9,{10}}\right\rbrack$ . Typically, supervised approaches have achieved very good results for the dataset(s) on which they are trained, but their need for ground-truth information during training has often hindered their deployment in new domains.
18
+
19
+ By contrast, self/unsupervised methods have typically adopted the use of a geometry-based loss function, inspired by the strong physical principles of traditional methods $\left\lbrack {{11},{12}}\right\rbrack$ . This loss function is commonly referred to as the photometric or appearance loss, and is based on the assumptions that (i) the scene is static (i.e. contains no moving objects), (ii) the illumination in the scene is diffusive (i.e. there are no specular reflections) and temporally consistent (i.e. the pixels to which any scene point projects in any two consecutive frames have the same intensity), and (iii) the images are free of noise and occlusions $\left\lbrack {{11},{12},{13},{14}}\right\rbrack$ . In practice, many of these assumptions are at least partly false, which can lead to errors in the estimated depth: scenes are quite likely to contain dynamic objects (e.g. cars, cyclists and pedestrians, in an outdoor driving scenario), surface materials are rarely fully diffusive, and occlusions are common. During the day, it is somewhat reasonable to assume that the illumination is moderately temporally consistent for image sequences captured outdoors, as the sun is by far the dominant light source in that case, and the light it casts changes only slowly over time; however, at night, the numerous point light sources that are typically turned on after dark (e.g. car headlights, lamp posts, etc.) can cause the illumination to change drastically from one frame to the next. At night, also, the motion blur associated with the movement of dynamic objects in the scene (including the ego-vehicle) becomes worse, owing to the longer exposure times typically used when capturing night-time images [15, 16], and the signal-to-noise ratio of the (darker) images becomes much lower than it would be during daytime. Such issues, as illustrated in Figure 1, inhibit the straightforward use of deep networks based on photometric loss for night-time sequences.
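+ For reference, a common concrete instantiation of this loss (the SSIM + L1 weighting popularised by Monodepth-style methods [27, 37]; not necessarily the exact variant used in this paper) compares the target frame against a view synthesised by warping a source frame with the predicted depth and ego-motion:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def ssim_dissimilarity(x, y):
+     # (1 - SSIM)/2 over 3x3 windows for (N, C, H, W) images, per pixel.
+     C1, C2 = 0.01 ** 2, 0.03 ** 2
+     mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
+     sig_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
+     sig_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
+     sig_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
+     ssim = ((2 * mu_x * mu_y + C1) * (2 * sig_xy + C2)) / \
+            ((mu_x ** 2 + mu_y ** 2 + C1) * (sig_x + sig_y + C2))
+     return torch.clamp((1 - ssim) / 2, 0, 1)
+
+ def photometric_error(warped, target, alpha=0.85):
+     # Per-pixel photometric error between the reprojection-synthesised view
+     # ('warped') and the target frame: alpha * SSIM term + (1 - alpha) * L1.
+     l1 = (warped - target).abs().mean(1, keepdim=True)
+     return alpha * ssim_dissimilarity(warped, target).mean(1, keepdim=True) + (1 - alpha) * l1
+ ```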
20
+
21
+ ![01963fd0-434a-7b15-8734-921c5598ee59_1_476_215_852_308_0.jpg](images/01963fd0-434a-7b15-8734-921c5598ee59_1_476_215_852_308_0.jpg)
22
+
23
+ Figure 1: (a) The challenges posed by night-time images: (1) low visibility and noise (patch enhanced for better readability); (2) moving light sources with saturating image regions; (3) point light sources; (4) extreme motion blur. (b) Despite these adverse conditions, which violate the assumptions made by the photometric loss, our method can successfully estimate accurate depth maps.
24
+
25
+ In this paper, we address this problem by directly targeting violations of the temporal illumination consistency, static scene and noise-free assumptions on which the photometric loss relies. As shown by our day and night results in Table 1, these three together account for much of the discrepancy in performance between daytime and night-time. A lack of temporal illumination consistency caused by point light sources in the scene can cause pixels to be incorrectly matched between consecutive frames. To rectify this, we propose a novel per-pixel neural intensity transformation that learns to compensate for these light sources (see §3.2). Whilst conceptually straightforward, this approach is surprisingly effective, as our results in §4 demonstrate. Interestingly, they also show that it is able to operate well over wide (motion parallax) baselines, allowing us to leverage the better depth estimation performance that wider baselines offer. To correct for dynamic objects in the scene, as well as motion blur, we predict a per-pixel residual flow map (see §3.3) that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth from the networks. This improves depth estimation performance at any time of day (see §4), but has additional theoretical benefits for night-time sequences because of the greater motion blur from which they typically suffer. Lastly, we robustify our approach against noise by incorporating Neighbour2Neighbour [17], a state-of-the-art denoising module, in our photometric loss formulation (see §3.4).
26
+
27
+ ## 2 Related Work
28
+
29
+ Estimating depth from images has a long history in computer vision. Several methods use either stereo images $\left\lbrack {{18},{19},{20}}\right\rbrack$ , or two or more images taken from different viewing angles $\left\lbrack {{21},{22},{23}}\right\rbrack$ . We try to solve this problem using a single monocular image, without any constraints on the scene of interest. Various methods have addressed this problem using supervised learning [6,7,24,25,26]. However, it is infeasible to have ground-truth depth maps for training on every scene, which limits the application of these methods and helps motivate unsupervised solutions to this problem.
30
+
31
+ ![01963fd0-434a-7b15-8734-921c5598ee59_2_312_211_1104_587_0.jpg](images/01963fd0-434a-7b15-8734-921c5598ee59_2_312_211_1104_587_0.jpg)
32
+
33
+ Figure 2: The architecture of our proposed method (see $§3$ for details).
34
+
35
+ Unsupervised Methods: Garg et al. [8] proposed a geometry-based loss function to train a network in a completely unsupervised fashion using a pair of stereo images. Monodepth [27] improved this by using differentiable image warping [28] and a structural similarity-based [29] image comparison loss. SfMLearner [30] used only monocular images to jointly learn depth and ego-motion. It was further improved by combining stereo and monocular losses in [31, 32]. Later, GeoNet [33] and EPC [34] learnt per-pixel optical flow maps along with depth and ego-motion to mitigate the effect of moving objects. Some methods use GAN-based learning to train their systems [10, 35, 36]. Recently, Monodepth2 [37] extended Monodepth to the temporal domain, proposing a few architectural changes and robust loss functions to achieve state-of-the-art results. HR-Depth [38] used an effective skip connection and a convolution block to integrate spatial and semantic information. SD-SSMDE [39] introduced a two-stage training strategy to improve scale and inter-frame scale consistency in depth by using the depth estimates from the first stage as pseudo-labels. Based on channel-wise attention, CADepth-Net [40] proposed structure perception and detail emphasis modules that capture scene context and fine detail for depth estimation. RM-Depth [41] proposed recurrent modulation units for an effective fusion of deep features with fewer parameters, and a warping-based motion field for moving objects that improves scene rigidity, leading to enhanced depth estimation. More broadly, recent years have also seen a wide range of other advances in depth estimation, e.g. changes to the network architecture [42], the addition of extra loss functions [43], and better handling of dynamic objects [44]. However, all of these methods have been tested on standard daytime datasets, whereas our method is designed to work at night as well.
36
+
37
+ Nighttime Methods: All the methods above are trained using photometric loss as the main supervision signal, and with an assumption of temporal illumination consistency, which is not valid at night. A few methods such as DeFeat-Net [45], ADFA [46] and [14] have explored how to estimate depth information from nighttime RGB images. DeFeat-Net [45] learns $n$-dimensional deep feature representations (assumed to be illumination-invariant) using a pixel-wise contrastive loss; the feature maps are used alongside the images for photometric loss calculation during training. ADFA [46] mimics a daytime depth estimation model by learning a new encoder that can generate 'day-like' features from nighttime images using a domain adaptation approach. Instead of feature translation as in [46], the authors of [14] propose a joint network for image translation and stereo image-based depth estimation. More recently, [47] again uses photometric losses, together with an image enhancement module and a GAN-based depth regulariser. Liu et al. [48] divided the day and nighttime images into view-invariant and view-variant feature maps using separate encoders, and used the view-invariant information for depth estimation. All these methods either need two separate encoders for day and nighttime images [46, 48, 47], or need to learn an illumination-invariant feature space [45]. By contrast, our proposed method learns in a completely self-supervised fashion, without needing stereo images, ground-truth depth information or any additional feature learning.
38
+
39
+ ## 3 Method
40
+
41
+ ### 3.1 Baseline Method
42
+
43
+ We first recap the core tenets of existing photometric loss methods, which typically use two networks, a depth network (or DepthNet) and a motion network (or MotionNet). The DepthNet takes an individual colour image as input, and is used to predict a depth image ${D}_{t}$ for each colour image ${I}_{t}$ in the input sequence. The MotionNet takes a consecutive pair of images ${I}_{t}$ and ${I}_{t + 1}$ as input, and is used to output the ego-motion ${T}_{t, t + 1}$ of the camera between them. The estimated depth and ego-motion can be used to reproject a pixel $\mathbf{u} = {\left\lbrack u, v\right\rbrack }^{\top }$ in frame ${I}_{t}$ into ${I}_{t + 1}$ , the subsequent frame in the sequence, via ${\dot{V}}_{t}\left( \mathbf{u}\right) = K{T}_{t, t + 1}{D}_{t}\left( \mathbf{u}\right) {K}^{-1}\dot{\mathbf{u}}$ , in which $\dot{\mathbf{u}}$ denotes the homogeneous form of $\mathbf{u}, K \in {\mathbb{R}}^{3 \times 3}$ encodes the camera intrinsics, and ${\dot{V}}_{t}\left( \mathbf{u}\right) \in {\mathbb{R}}^{3}$ denotes the homogeneous form of ${V}_{t}\left( \mathbf{u}\right) \in {\mathbb{R}}^{2}$ , a 2D point in the image plane of ${I}_{t + 1}$ (which may or may not lie within the bounds of the actual image). This can be used to reconstruct an image ${I}_{t}^{\prime }$ by sampling from ${I}_{t + 1}$ around the reprojected points, using bilinear interpolation [28] to achieve a smoother result. Formally,
44
+
45
+ $$
46
+ {I}_{t}^{\prime }\left( \mathbf{u}\right) = \left\{ \begin{array}{ll} \operatorname{interpolate}\left( {{I}_{t + 1},{V}_{t}\left( \mathbf{u}\right) }\right) & \text{ if }\mathbf{u} \in {M}_{t} \\ \mathbf{0} & \text{ otherwise,} \end{array}\right. \tag{1}
47
+ $$
48
+
49
+ in which ${M}_{t} = \left\{ {\mathbf{u} : \rho \left( {{V}_{t}\left( \mathbf{u}\right) }\right) \in \Omega \left( {I}_{t + 1}\right) }\right\}$ is the set of pixels whose reprojections into ${I}_{t + 1}$ , when rounded to the nearest pixel using $\rho$ , fall within the image bounds $\Omega \left( {I}_{t + 1}\right)$ . The reconstructed image ${I}_{t}^{\prime }$ can then be compared to the original image ${I}_{t}$ to calculate the loss values needed for training. The loss we target, namely photometric loss, has been used by many recent deep learning-based depth estimation techniques $\left\lbrack {8,{31},{30},{36}}\right\rbrack$ . It is normally calculated as a convex combination of pixel-wise difference and single-scale structural dissimilarity (SSIM) [29], via
50
+
51
+ $$
52
+ {L}_{p}^{\left( t\right) } = \frac{1}{\left| {M}_{t}\right| }\mathop{\sum }\limits_{{\mathbf{u} \in {M}_{t}}}\left( {\alpha \frac{1 - \operatorname{SSIM}\left( {{I}_{t}\left( \mathbf{u}\right) ,{I}_{t}^{\prime }\left( \mathbf{u}\right) }\right) }{2} + \left( {1 - \alpha }\right) \left| {{I}_{t}\left( \mathbf{u}\right) - {I}_{t}^{\prime }\left( \mathbf{u}\right) }\right| }\right) . \tag{2}
53
+ $$
54
+
55
+ Most existing unsupervised methods (e.g. [30, 31, 37, 42]) use this as the backbone of their formulation. To ensure a fair comparison with current night-time state-of-the-art methods [45, 48, 47], we base our modifications in this paper on Monodepth2 [37], a commonly used baseline.
56
+
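To make the baseline concrete, the sketch below gives a minimal PyTorch implementation of the reprojection, the bilinear reconstruction of Equation 1 and the photometric loss of Equation 2. The function names, tensor conventions and the default $\alpha = 0.85$ are illustrative assumptions rather than the authors' exact implementation; the validity mask passed to the loss plays the role of $M_t$.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    """Single-scale SSIM map, computed here with 3x3 average pooling."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    return (num / den).clamp(0, 1).mean(1, keepdim=True)

def reproject(depth_t, T_t_tp1, K, K_inv):
    """V_t(u): project every pixel u of frame t into the image plane of frame t+1."""
    B, _, H, W = depth_t.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=depth_t.device),
                            torch.arange(W, device=depth_t.device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().view(1, 3, -1)  # homogeneous u
    cam = (K_inv @ pix) * depth_t.view(B, 1, -1)                                    # D_t(u) K^{-1} u
    cam_h = torch.cat([cam, torch.ones_like(cam[:, :1])], dim=1)                    # homogeneous 3D point
    proj = K @ (T_t_tp1 @ cam_h)[:, :3]                                             # K T_{t,t+1} (.)
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    return uv.view(B, 2, H, W)

def reconstruct(img_tp1, uv):
    """Eq. 1: bilinearly sample I_{t+1} at the (possibly corrected) correspondences."""
    B, _, H, W = img_tp1.shape
    grid = uv.permute(0, 2, 3, 1).clone()
    grid[..., 0] = 2.0 * grid[..., 0] / (W - 1) - 1.0   # normalise x to [-1, 1] for grid_sample
    grid[..., 1] = 2.0 * grid[..., 1] / (H - 1) - 1.0   # normalise y to [-1, 1]
    return F.grid_sample(img_tp1, grid, padding_mode="zeros", align_corners=True)

def photometric_loss(img_t, img_t_rec, mask, alpha=0.85):
    """Eq. 2: convex combination of DSSIM and L1, averaged over the valid pixels M_t."""
    l1 = (img_t - img_t_rec).abs().mean(1, keepdim=True)
    dssim = (1.0 - ssim(img_t, img_t_rec)) / 2.0
    per_pixel = alpha * dssim + (1.0 - alpha) * l1
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)
```
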
57
+ ### 3.2 Lighting Change Compensation
58
+
59
+ The numerous point light sources that are typically turned on after dark (e.g. car headlights, lamp posts, etc.) can cause the illumination of a scene to change significantly from frame ${I}_{t}$ to frame ${I}_{t + 1}$ . A problematic special case occurs when a light source moves with the camera (e.g. car headlights), which can lead to large holes in the estimated depth directly in front of the ego-vehicle [45, 48]. In our approach, we compensate for the illumination changes by estimating a per-pixel transformation that, when applied to ${I}_{t + 1}$ , can mitigate the changes in lighting that have occurred since ${I}_{t}$ . We draw some inspiration from [49, 50, 51], which use a single whole-image transformation based on two scalar values to compensate for the difference in exposure time between a pair of images, based on the observation that such a difference creates approximately uniform intensity changes over the entire image. However, in our case, the intensity changes are far from uniform over the image, owing to both the motions of the ego-vehicle and other objects in the scene, and the distances between the ego-vehicle and static point light sources. For this reason, we propose a per-pixel formulation here.
60
+
61
+ Our approach starts by passing the features produced by the last convolutional layer of the MotionNet through a lighting change decoder to estimate two per-pixel change images, ${C}_{t}$ and ${B}_{t}$ (see Figure 2). These (respectively) aim to capture the per-pixel changes in contrast (scale) and brightness (shift) that have occurred between the two input frames. As shown in Figure 4, the brightness image ${B}_{t}$ broadly captures the extra light added to the image by e.g. vehicle headlights, and the contrast image ${C}_{t}$ broadly captures the changes in ambient light due to the motion of the ego-vehicle towards or away from point light sources such as street lamps. We use these images to transform the reconstructed image ${I}_{t}^{\prime }$ via ${\widetilde{I}}_{t} = {C}_{t} \odot {I}_{t}^{\prime } + {B}_{t}$, in which $\odot$ denotes the Hadamard product.
62
+
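As a rough illustration of how such a decoder could be wired up, the sketch below predicts per-pixel contrast and brightness maps from MotionNet features and applies ${\widetilde{I}}_{t} = {C}_{t} \odot {I}_{t}^{\prime } + {B}_{t}$. The channel counts, activations and output ranges are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightingChangeDecoder(nn.Module):
    """Predicts per-pixel contrast C_t and brightness B_t from MotionNet features.
    Channel counts, activations and output ranges here are illustrative assumptions."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1),
            nn.ELU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1),   # channel 0 -> contrast, channel 1 -> brightness
        )

    def forward(self, motion_features, out_hw):
        x = F.interpolate(self.head(motion_features), size=out_hw,
                          mode="bilinear", align_corners=False)
        contrast = 2.0 * torch.sigmoid(x[:, 0:1])   # multiplicative change, roughly in (0, 2)
        brightness = torch.tanh(x[:, 1:2])          # additive change, in (-1, 1) for images in [0, 1]
        return contrast, brightness

def apply_intensity_transform(img_rec, contrast, brightness):
    """I~_t = C_t * I'_t + B_t (Hadamard product plus per-pixel shift)."""
    return contrast * img_rec + brightness
```

Both maps are supervised purely through the photometric loss, so no extra labels are required.
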
63
+ ### 3.3 Motion Compensation
64
+
65
+ As seen in §3.1, the standard photometric loss makes use of correspondences between consecutive frames that have been established via reprojection, based on the ego-motion and depth estimated by the networks. Assuming that (i) the ego-motion and depth have been estimated well, (ii) the scene is static, and (iii) there is minimal motion blur, the correspondences established in this way will broadly match those that would have been established had we used the ground truth optic flow ${\Phi }_{t}\left( \cdot \right)$ from frame $t$ to frame $t + 1$. However, if objects move with respect to the background scene, or anything visible in the image moves with respect to the ego-camera (which can cause motion blur), then the reprojection correspondences may be incorrect. To correct for these errors, we predict a residual flow map ${R}_{t}$, such that for each pixel $\mathbf{u} \in \Omega \left( {I}_{t}\right)$, ${R}_{t}\left( \mathbf{u}\right) \in {\mathbb{R}}^{2}$ is an estimate of $\left( {\mathbf{u} + {\Phi }_{t}\left( \mathbf{u}\right) }\right) - {V}_{t}\left( \mathbf{u}\right)$, the 2D offset from the reprojection correspondence of $\mathbf{u}$, namely ${V}_{t}\left( \mathbf{u}\right)$, to its ground truth correspondence in frame $t + 1$, namely $\mathbf{u} + {\Phi }_{t}\left( \mathbf{u}\right)$. We can then add ${R}_{t}\left( \mathbf{u}\right)$ to ${V}_{t}\left( \mathbf{u}\right)$ for each pixel $\mathbf{u}$ to obtain a potentially more accurate correspondence for use in reconstructing ${I}_{t}^{\prime }$ via Equation 1.
66
+
67
+ Some methods $\left\lbrack {{34},{33},{44}}\right\rbrack$ already exist that predict residual flow for daytime images. By contrast, we avoid using a separate encoder-decoder network or computationally-intensive image warping-based bilinear interpolation for supervision. Instead, we estimate residual flow using an efficient sparsity-based formulation. This involves introducing a residual flow decoder that takes the features of the final convolutional layer of the MotionNet as input and the features of previous layers in the MotionNet via skip connections, and outputs residual flow maps $\left\{ {{R}_{t, s} : s \in \{ 0,1,2,3\} }\right\}$ at four different scales (each ${R}_{t, s}$ has a width and height that is $1/{2}^{s}$ that of ${I}_{t}$ , and ${R}_{t,0} \equiv {R}_{t}$ ).
68
+
69
+ There is no direct supervision available to learn the residual flow maps. For this reason, we choose instead to encourage sparsity in the residual flow estimates, so that the estimated depth and ego-motion can explain the majority of the scene, and the left-over can be explained by the residual flow maps. To achieve this, we adopt the sparsity loss from [44], i.e.
70
+
71
+ $$
72
+ {L}_{r}^{\left( t\right) } = \mathop{\sum }\limits_{{s = 0}}^{3}\left\langle \left| {R}_{t, s}\right| \right\rangle /{2}^{s}\mathop{\sum }\limits_{{\mathbf{u} \in \Omega \left( {I}_{t, s}\right) }}\sqrt{1 + \left| {{R}_{t, s}\left( \mathbf{u}\right) }\right| /\left\langle \left| {R}_{t, s}\right| \right\rangle }, \tag{3}
73
+ $$
74
+
75
+ in which ${I}_{t, s}$ is a downsampled version of ${I}_{t}$ at scale $s$ , and $\left\langle \left| {R}_{t, s}\right| \right\rangle$ is the spatial average of the absolute residual flow map $\left| {R}_{t, s}\right|$ . By contrast with [44], here we introduce a normalising factor of $1/{2}^{s}$ at each scale, since the original loss was for scene flow, where the flow magnitude is independent of the resolution of the flow maps, which is not the case for the 2D residual flow we consider.
76
+
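A possible implementation of this multi-scale sparsity loss is sketched below. Treating $\left| {R}_{t, s}\left( \mathbf{u}\right) \right|$ as a per-component absolute value and averaging over the batch are assumptions made for illustration.

```python
import torch

def residual_flow_sparsity_loss(residual_flows):
    """Multi-scale sparsity loss of Eq. 3.

    residual_flows: list [R_{t,0}, ..., R_{t,3}], each of shape (B, 2, H/2^s, W/2^s).
    """
    loss = 0.0
    for s, R in enumerate(residual_flows):
        abs_R = R.abs()
        mean_abs = abs_R.mean(dim=(1, 2, 3), keepdim=True).clamp(min=1e-7)   # <|R_{t,s}|>
        per_scale = torch.sqrt(1.0 + abs_R / mean_abs).sum(dim=(1, 2, 3))    # sum over pixels
        loss = loss + (mean_abs.view(-1) / (2 ** s) * per_scale).mean()      # weight by <|R|> / 2^s
    return loss
```
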
77
+ ### 3.4 Image Denoising
78
+
79
+ Image noise is yet another key factor that affects the performance of the photometric loss. In practice, this noise is largely independent of the underlying image content, and is mainly caused by the low SNR in the darker regions of the image. Handling this noise is of crucial importance, as the photometric loss is the only training signal, and supervises all of the modules we have mentioned thus far. To remove the noise from the images, we chose to use Neighbour2Neighbour [17], a state-of-the-art unsupervised denoising model trained on ImageNet with zero-mean Gaussian noise whose standard deviation was varied from 5 to 50 during training. This model can either be used to denoise all images input to the network at both training time and test time, or it can be used solely at training time to denoise the images for the purpose of calculating the loss. In practice, we chose the latter approach, as denoising at test time has two major disadvantages: (i) it can significantly add to the computational burden at runtime, slowing down the depth estimation; and (ii) any errors in the denoising process can lead to downstream errors in the depth maps, even though the depth estimation model itself might have been trained well. By contrast, restricting denoising to training time has the advantage of allowing us to make the depth and motion networks robust to noise by training them on the original images.
80
+
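The sketch below illustrates this design choice: denoising is applied only to the images fed into the loss, never to the images fed into the networks, and never at test time. The `denoiser` callable stands in for a pretrained Neighbour2Neighbour model; its exact interface is an assumption.

```python
import torch

def loss_inputs(denoiser, img_t, img_tp1, training=True):
    """Images fed to the photometric loss (train-time-only denoising).

    The depth and motion networks, and the deployed system at test time,
    always consume the original noisy images; only the loss sees clean ones.
    """
    if not training:
        return img_t, img_tp1
    with torch.no_grad():                       # the denoiser is frozen; no gradients flow through it
        return denoiser(img_t), denoiser(img_tp1)
```
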
81
+ ### 3.5 Full Pipeline
+
+ We can now formulate our full pipeline as follows:
82
+
83
+ $$
84
+ {D}_{t} = \mathcal{D}\left( {I}_{t}\right) ,{f}_{n} = \mathcal{M}{\mathcal{E}}_{1 : n}\left( \left\lbrack {{I}_{t},{I}_{t + 1}}\right\rbrack \right)
85
+ $$
86
+
87
+ $$
88
+ {T}_{t, t + 1} = \mathcal{M}\mathcal{D}\left( {f}_{N}\right) ,{R}_{t} = \mathcal{R}\mathcal{F}\mathcal{D}\left( \left\{ {{f}_{n} : 1 \leq n \leq N}\right\} \right) ,\left( {{C}_{t},{B}_{t}}\right) = \mathcal{L}\mathcal{C}\mathcal{D}\left( {f}_{N}\right)
89
+ $$
90
+
91
+ $$
92
+ {\mathcal{I}}_{t} = \mathcal{D}\mathcal{N}\left( {I}_{t}\right) ,{\mathcal{I}}_{t + 1} = \mathcal{D}\mathcal{N}\left( {I}_{t + 1}\right)
93
+ $$
94
+
95
+ $$
96
+ {\mathcal{I}}_{t}^{\prime } = \operatorname{reconstruct}\left( {{\mathcal{I}}_{t + 1},{V}_{t} + {R}_{t}}\right) \tag{4}
97
+ $$
98
+
99
+ $$
100
+ {\widetilde{\mathcal{I}}}_{t} = {C}_{t} \odot {\mathcal{I}}_{t}^{\prime } + {B}_{t}
101
+ $$
102
+
103
+ $$
104
+ {L}_{p}^{\left( t\right) } = \frac{1}{\left| {M}_{t}\right| }\mathop{\sum }\limits_{{\mathbf{u} \in {M}_{t}}}\left( {\alpha \frac{1 - \operatorname{SSIM}\left( {{\mathcal{I}}_{t}\left( \mathbf{u}\right) ,{\widetilde{\mathcal{I}}}_{t}\left( \mathbf{u}\right) }\right) }{2} + \left( {1 - \alpha }\right) \left| {{\mathcal{I}}_{t}\left( \mathbf{u}\right) - {\widetilde{\mathcal{I}}}_{t}\left( \mathbf{u}\right) }\right| }\right)
105
+ $$
106
+
107
+ The inputs to our system are a consecutive pair of images ${I}_{t}$ and ${I}_{t + 1}$, whilst $\mathcal{D}$ denotes the DepthNet, $\mathcal{M}{\mathcal{E}}_{1 : n}$ denotes the first $n$ layers of the $N$-layer MotionNet encoder, $\mathcal{M}\mathcal{D}$ denotes the MotionNet decoder, $\mathcal{R}\mathcal{F}\mathcal{D}$ denotes the residual flow decoder, $\mathcal{L}\mathcal{C}\mathcal{D}$ denotes the lighting change decoder, and $\mathcal{D}\mathcal{N}$ denotes the denoiser [17]. The reconstruct function reconstructs ${\mathcal{I}}_{t}^{\prime }$ as per Equation 1.
108
+
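Putting the pieces together, one (forward-direction) pass of the training loss could look roughly like the sketch below. It reuses the `reproject`, `reconstruct` and `photometric_loss` helpers from the §3.1 sketch, and the network interfaces (what each decoder returns) are assumptions for illustration.

```python
import torch

def full_pipeline_loss(depth_net, motion_encoder, motion_decoder, residual_flow_decoder,
                       lighting_decoder, denoiser, img_t, img_tp1, K, K_inv):
    """One forward-direction training loss, mirroring Eq. 4 (a sketch, not the exact code)."""
    depth_t = depth_net(img_t)                                        # D_t = D(I_t)
    feats = motion_encoder(torch.cat([img_t, img_tp1], dim=1))        # {f_n}: list of feature maps
    T_t_tp1 = motion_decoder(feats[-1])                               # ego-motion from f_N
    residual_flow = residual_flow_decoder(feats)[0]                   # R_t at the finest scale
    contrast, brightness = lighting_decoder(feats[-1], img_t.shape[-2:])

    with torch.no_grad():                                             # DN(I_t), DN(I_{t+1})
        clean_t, clean_tp1 = denoiser(img_t), denoiser(img_tp1)

    uv = reproject(depth_t, T_t_tp1, K, K_inv) + residual_flow        # V_t + R_t
    rec = contrast * reconstruct(clean_tp1, uv) + brightness          # C_t * I'_t + B_t
    H, W = img_t.shape[-2:]
    mask = ((uv[:, 0:1] >= 0) & (uv[:, 0:1] <= W - 1) &               # M_t: reprojections that land
            (uv[:, 1:2] >= 0) & (uv[:, 1:2] <= H - 1)).float()        # inside the image bounds
    return photometric_loss(clean_t, rec, mask)
```
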
109
+ ### 3.6 Making the Pipeline Bidirectional
110
+
111
+ Monodepth2 [37] calculates its photometric loss not only in the forwards direction, from ${I}_{t}$ to ${I}_{t + 1}$ , but also in the backwards direction, from ${I}_{t}$ to ${I}_{t - 1}$ , before combining the losses. This allows us to use the idea of minimum reprojection error to account for occluded pixels, and so we do the same. We also adopt the auto-masking losses ${L}_{a}^{\left( t\right) }$ from Monodepth2 [37], as even though our method can cope with moving objects, it is very difficult to use parallax to disentangle the motion of objects that are moving in the same direction and at the same speed as the ego-vehicle. We further include the commonly used edge-aware gradient smoothing loss ${L}_{g}^{\left( t\right) }$ to maintain spatial smoothness over the estimated depth maps. Our final loss ${L}^{\left( t\right) }$ then becomes the weighted sum
112
+
113
+ $$
114
+ {L}^{\left( t\right) } = \min \left( {{L}_{p - }^{\left( t\right) },{L}_{p + }^{\left( t\right) },{L}_{a - }^{\left( t\right) },{L}_{a + }^{\left( t\right) }}\right) + {\lambda }_{r}\left( {{L}_{r - }^{\left( t\right) } + {L}_{r + }^{\left( t\right) }}\right) + {\lambda }_{g}{L}_{g}^{\left( t\right) }, \tag{5}
115
+ $$
116
+
117
+ in which $+ / -$ denote the forward/backward versions of the losses, and ${\lambda }_{r},{\lambda }_{g} \in \mathbb{R}$ are the weights.
118
+
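A compact sketch of this combination is given below. Here the photometric and auto-masking terms are treated as per-pixel maps so that the minimum can be taken per pixel, following the minimum-reprojection idea; the values of ${\lambda }_{r}$ and ${\lambda }_{g}$ shown are placeholders rather than the paper's tuned weights.

```python
import torch

def final_loss(Lp_bwd, Lp_fwd, La_bwd, La_fwd, Lr_bwd, Lr_fwd, Lg,
               lambda_r=0.01, lambda_g=0.001):
    """Combine the loss terms as in Eq. 5 (per-pixel minimum over the four maps,
    plus weighted residual-flow sparsity and smoothness terms)."""
    per_pixel_min = torch.stack([Lp_bwd, Lp_fwd, La_bwd, La_fwd], dim=0).min(dim=0).values
    return per_pixel_min.mean() + lambda_r * (Lr_bwd + Lr_fwd) + lambda_g * Lg
```
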
119
+ ## 4 Experiments
120
+
121
+ In §4.1, we compare our depth estimation performance to a number of state-of-the-art approaches in a variety of different daytime and/or night-time contexts. In §4.2, we present a study on the effect of parallax to help explain the importance of our neural intensity transformation module. Finally, in §4.3, we perform an ablation study to analyse the contributions made by the three individual components of our approach. Further experiments can be found in the supplementary material.
122
+
123
+ ### 4.1 Depth Evaluation
124
+
125
+ We compare with 4 state-of-the-art unsupervised monocular methods: Monodepth2 [37], DeFeat-Net [45], ADDS-Depth-Night [48] and RNW [47] (see Figure 3 and Table 1). We tested our model with 3 different data variations: day only (d), night only (n), and a mix of day and night (d & n). Monodepth2 [37] can be trained with all 3 configurations, although it has already been outperformed by DeFeat-Net [45] in the d & n setting. For the d and n settings, we outperform it by a significant margin in both error and accuracy (see Table 1). DeFeat-Net [45] and ADDS-Depth-Night [48] were originally trained with a d & n configuration; we evaluated the pre-trained models they released on our test split. Our method outperforms both methods by a significant margin on the nighttime sequences (see Table 1). Please note that we do not use any additional feature representation-based losses as used in DeFeat-Net [45], or paired day and night images as used in ADDS-Depth-Night [48]. RNW [47], another recent method, is also built on Monodepth2, but targets nighttime data only. As per Figure 3, our depth estimation results are sharp and better able to preserve edges than the competing methods. We also found that using a longer baseline improves depth estimation performance. However, naïvely using a wider baseline without also using our neural intensity transform can lead to a severe decrease in accuracy, particularly for nighttime images.
126
+
127
+ <table><tr><td>Test</td><td>Method</td><td>Train</td><td>Abs. Rel.</td><td>Sq. Rel.</td><td>RMSE</td><td>Log RMSE</td><td>$\delta < 1.25$</td><td>$\delta < 1.25^{2}$</td><td>$\delta < 1.25^{3}$</td></tr><tr><td rowspan="6">Day</td><td>Monodepth2 [37]</td><td>d</td><td>0.219</td><td>4.525</td><td>7.641</td><td>0.285</td><td>0.679</td><td>0.862</td><td>0.930</td></tr><tr><td>Ours</td><td>d</td><td>0.191</td><td>1.710</td><td>6.158</td><td>0.253</td><td>0.713</td><td>0.904</td><td>0.962</td></tr><tr><td>DeFeat-Net [45]</td><td>d & n</td><td>0.247</td><td>2.980</td><td>7.884</td><td>0.305</td><td>0.650</td><td>0.866</td><td>0.943</td></tr><tr><td>RNW [47]</td><td>d & n</td><td>0.297</td><td>2.608</td><td>7.996</td><td>0.359</td><td>0.431</td><td>0.773</td><td>0.930</td></tr><tr><td>ADDS-Depth-Night [48]</td><td>d & n</td><td>0.239</td><td>2.089</td><td>6.743</td><td>0.295</td><td>0.614</td><td>0.870</td><td>0.950</td></tr><tr><td>Ours</td><td>d & n</td><td>0.176</td><td>1.603</td><td>6.036</td><td>0.245</td><td>0.750</td><td>0.912</td><td>0.963</td></tr><tr><td rowspan="7">Night</td><td>Monodepth2 [37]</td><td>n</td><td>0.453</td><td>21.310</td><td>11.420</td><td>0.444</td><td>0.700</td><td>0.873</td><td>0.930</td></tr><tr><td>RNW MCIE + SBM [47]</td><td>n</td><td>0.350</td><td>7.934</td><td>8.994</td><td>0.407</td><td>0.674</td><td>0.861</td><td>0.922</td></tr><tr><td>Ours</td><td>n</td><td>0.186</td><td>1.656</td><td>6.288</td><td>0.248</td><td>0.728</td><td>0.919</td><td>0.969</td></tr><tr><td>DeFeat-Net [45]</td><td>d & n</td><td>0.334</td><td>4.589</td><td>8.606</td><td>0.358</td><td>0.586</td><td>0.827</td><td>0.911</td></tr><tr><td>ADDS-Depth-Night [48]</td><td>d & n</td><td>0.287</td><td>2.569</td><td>7.985</td><td>0.339</td><td>0.490</td><td>0.816</td><td>0.946</td></tr><tr><td>RNW [47]</td><td>d & n</td><td>0.185</td><td>1.710</td><td>6.549</td><td>0.262</td><td>0.733</td><td>0.910</td><td>0.960</td></tr><tr><td>Ours</td><td>d & n</td><td>0.174</td><td>1.637</td><td>6.302</td><td>0.245</td><td>0.754</td><td>0.915</td><td>0.964</td></tr></table>
128
+
129
+ Table 1: A quantitative comparison of our method. The results of Monodepth2 [37] are reported after retraining it. Those of DeFeat-Net [45] and ADDS-Depth-Night [48] are reported using the checkpoints from their public repositories. The evaluation uses a maximum depth of 50 m. Underlined methods use daytime images as main supervision or for regularisation losses.
130
+
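For reference, the sketch below computes the standard monocular depth metrics reported in Table 1 (with median scaling, since monocular methods estimate depth only up to scale, and a depth cap as in the caption). It is a generic implementation of these widely used metrics, not the authors' evaluation code.

```python
import numpy as np

def depth_metrics(gt, pred, max_depth=50.0, min_depth=1e-3):
    """Abs Rel, Sq Rel, RMSE, log RMSE and the delta accuracy scores for one image."""
    valid = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[valid], pred[valid]
    pred = pred * np.median(gt) / np.median(pred)          # median scaling (depth is up to scale)
    pred = np.clip(pred, min_depth, max_depth)

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    log_rmse = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    ratio = np.maximum(gt / pred, pred / gt)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, log_rmse, deltas
```
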
131
+ ![01963fd0-434a-7b15-8734-921c5598ee59_6_365_764_1064_483_0.jpg](images/01963fd0-434a-7b15-8734-921c5598ee59_6_365_764_1064_483_0.jpg)
132
+
133
+ Figure 3: A qualitative comparison of our proposed method with the state of the art.
134
+
135
+ ### 4.2 Effect of Parallax
136
+
137
+ To better understand how depth estimation performance is affected by increasing the average parallax (metric separation) between the images we use to calculate the photometric loss, we constructed a new nighttime training split by increasing the intra-triplet stride (see supplementary material) to 2, which increased the average parallax between the images from ${0.353}\mathrm{\;m}$ to ${0.706}\mathrm{\;m}$ . Without our neural intensity transformation, the depth estimation performance significantly decreased compared to the original nighttime training split (see the difference between the RMSEs of the baseline in the top and bottom parts of Table 2). A key cause of this in night images is likely the headlights of the ego-vehicle, which can cause the pixel intensities to change drastically between frames. However, with our neural intensity transformation, the depth estimation performance was found to instead increase, which we hypothesise to be because by compensating for the lighting changes, we make it possible to exploit the stronger supervision that can be offered by a wider baseline.
138
+
139
+ ### 4.3 Ablation Study
140
+
141
+ Lighting Change Compensation. In Figure 4(a), we show several reference images and their lighting change maps. The intensity changes are non-uniform, so we cannot use the existing correction approaches from $\left\lbrack {{49},{50},{51}}\right\rbrack$ . We also observe that our method is able to clearly disentangle both the changes in ambient light resulting from movement towards/away from point light sources (captured by ${C}_{t}$ ) and the additional light added to the road pixels in the images by the ego-vehicle headlights (captured by ${B}_{t}$ ). Our neural intensity transform significantly reduces the RMSE error compared to the baseline (see Table 2), and is also able to fill in holes in front of the ego-vehicle (see Figure 3).
142
+
143
+ <table><tr><td>Stride</td><td>Method</td><td>Abs. Rel.</td><td>Sq. Rel.</td><td>RMSE</td><td>Log RMSE</td><td>$\delta < {1.25}$</td><td>$\delta < {1.25}^{2}$</td><td>$\delta < {1.25}^{3}$</td></tr><tr><td rowspan="4">1</td><td>Baseline</td><td>0.266</td><td>5.647</td><td>6.305</td><td>0.331</td><td>0.759</td><td>0.9013</td><td>0.947</td></tr><tr><td>w/ NIT</td><td>0.190</td><td>1.824</td><td>4.848</td><td>0.257</td><td>0.763</td><td>0.919</td><td>0.965</td></tr><tr><td>w/ Denoising</td><td>0.163</td><td>1.256</td><td>4.193</td><td>0.224</td><td>0.801</td><td>0.935</td><td>0.973</td></tr><tr><td>Full Model</td><td>0.154</td><td>1.174</td><td>4.120</td><td>0.216</td><td>0.811</td><td>0.939</td><td>0.976</td></tr><tr><td rowspan="3">2</td><td>Baseline</td><td>0.602</td><td>63.914</td><td>14.726</td><td>0.467</td><td>0.785</td><td>0.902</td><td>0.939</td></tr><tr><td>w/ NIT</td><td>0.169</td><td>1.727</td><td>4.693</td><td>0.236</td><td>0.812</td><td>0.929</td><td>0.967</td></tr><tr><td>Full Model</td><td>0.131</td><td>0.926</td><td>3.731</td><td>0.188</td><td>0.852</td><td>0.949</td><td>0.980</td></tr></table>
144
+
145
+ Table 2: Ablation study showing the importance of different modules in our system. The maximum depth was set to 30 m for this study. 'Stride' denotes the intra-triplet stride (see supplementary material).
146
+
147
+ ![01963fd0-434a-7b15-8734-921c5598ee59_7_319_547_1158_284_0.jpg](images/01963fd0-434a-7b15-8734-921c5598ee59_7_319_547_1158_284_0.jpg)
148
+
149
+ Figure 4: Visualisations of (a) estimated light changes; (b) residual flow and estimated depth; (c) the effects of denoising on the training loss over time.
150
+
151
+ Motion Compensation. In Figure 4(b), we show several reference images and their residual flow and depth maps. In the second column, one can clearly see that our method is able to distinguish pixels on moving objects such as cars and pedestrians from static pixels. This effect can be observed for both daytime and night-time images, showcasing the generality of our approach through a single unified training pipeline. In Table 2, it can be seen that correcting the reprojection correspondences using the residual flow map we predict leads to a significant improvement in accuracy.
152
+
153
+ Image Denoising. Denoising the images while calculating the training loss should ideally reduce the ambiguity in establishing pixel correspondences between the images, giving a robust supervision signal for training our system and thereby achieving lower error and higher accuracy. This effect can be clearly seen in the training error plot shown in Figure 4(c), where we compare our baseline+NIT model with and without denoising. The denoising results in much more accurate depth maps, improving both the RMSE and accuracy metrics as shown in Table 2.
154
+
155
+ ## 5 Limitations
156
+
157
+ Our method has a number of limitations. First, as is typical of monocular methods, it is only able to estimate depth up to scale. Second, like most stereo approaches (whether variable-baseline like ours, or fixed-baseline with a rigid stereo rig), it struggles to preserve the detail of distant parts of the scene because of limited parallax. Third, it also struggles to recover structural detail from very dark image regions (e.g. see Figure 1(b)). And finally, in common with most vision-based depth estimation methods, it does not consistently perform well for transparent surfaces like glass.
158
+
159
+ ## 6 Conclusions
160
+
161
+ In this paper, we propose a self-supervised method to learn a single model to estimate depth maps from monocular day and nighttime RGB images. By compensating for the illumination changes that can occur from one frame to the next, we enable accurate nighttime depth estimation in nonuniform lighting conditions. Moreover, by predicting per-pixel residual flow and using it to correct the reprojection correspondences induced by the estimated ego-motion and depth, we improve our method's ability to cope with both moving objects in the scene and motion blur. Finally, by denoising the input images prior to calculating the photometric loss, we improve the loss's ability to provide a strong supervision signal, making the entire system more robust and accurate.
162
+
163
+ ## References
164
+
165
+ [1] E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access, 8:58443-58469, 2020.
166
+
167
+ [2] J. Sanchez, J.-A. Corrales, B.-C. Bouzgarrou, and Y. Mezouar. Robotic manipulation and sensing of deformable objects in domestic and industrial applications: a survey. The International Journal of Robotics Research, 37(7):688-716, 2018.
168
+
169
+ [3] M. A. Livingston, Z. Ai, J. E. Swan, and H. S. Smallman. Indoor vs. outdoor depth perception for mobile augmented reality. In 2009 IEEE Virtual Reality Conference, pages 55-62. IEEE, 2009.
170
+
171
+ [4] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison. DTAM: Dense tracking and mapping in real-time. In Proceedings of the IEEE International Conference on Computer Vision, 2011.
172
+
173
+ [5] K. Wang and S. Shen. MVDepthNet: Real-time Multiview Depth Estimation Neural Network. In 3DV, 2018.
174
+
175
+ [6] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems, pages 2366-2374, 2014.
176
+
177
+ [7] F. Liu, C. Shen, and G. Lin. Deep convolutional neural fields for depth estimation from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5162-5170, 2015.
178
+
179
+ [8] R. Garg, V. K. BG, G. Carneiro, and I. Reid. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In European Conference on Computer Vision, pages 740-756. Springer, 2016.
180
+
181
+ [9] H. Zhan, R. Garg, C. S. Weerasekera, K. Li, H. Agarwal, and I. Reid. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 340-349, 2018.
182
+
183
+ [10] Y. Almalioglu, M. R. U. Saputra, P. P. de Gusmao, A. Markham, and N. Trigoni. Ganvo: Unsupervised deep monocular visual odometry and depth estimation with generative adversarial networks. 2017 IEEE International Conference on Robotics and Automation (ICRA), 2019.
184
+
185
+ [11] C. Kerl, J. Sturm, and D. Cremers. Dense visual slam for rgb-d cameras. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2100-2106. IEEE, 2013.
186
+
187
+ [12] C. Kerl, J. Sturm, and D. Cremers. Robust odometry estimation for rgb-d cameras. In 2013 IEEE International Conference on Robotics and Automation, pages 3748-3754. IEEE, 2013.
188
+
189
+ [13] A. I. Comport, E. Malis, and P. Rives. Accurate quadrifocal tracking for robust 3d visual odometry. In Proceedings 2007 IEEE International Conference on Robotics and Automation, pages 40-45. IEEE, 2007.
190
+
191
+ [14] A. Sharma, L.-F. Cheong, L. Heng, and R. T. Tan. Nighttime stereo depth estimation using joint translation-stereo learning: Light effects and uninformative regions. In 2020 International Conference on 3D Vision (3DV), pages 23-31. IEEE, 2020.
192
+
193
+ [15] T. Portz, L. Zhang, and H. Jiang. Optical flow in the presence of spatially-varying motion blur. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 1752-1759. IEEE, 2012.
194
+
195
+ [16] A. Rav-Acha and S. Peleg. Restoration of multiple images with motion blur in different directions. In Proceedings Fifth IEEE Workshop on Applications of Computer Vision, pages 22-28. IEEE, 2000.
196
+
197
+ [17] T. Huang, S. Li, X. Jia, H. Lu, and J. Liu. Neighbor2neighbor: Self-supervised denoising from single noisy images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14781-14790, 2021.
198
+
199
+ [18] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International journal of computer vision, 47(1-3):7-42, 2002.
200
+
201
+ [19] D. Scharstein and C. Pal. Learning conditional random fields for stereo. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8. IEEE, 2007.
202
+
203
+ [20] L. Zou and Y. Li. A method of stereo vision matching based on opencv. In 2010 International Conference on Audio, Language and Image Processing, pages 185-190. IEEE, 2010.
204
+
205
+ [21] J. L. Schonberger and J.-M. Frahm. Structure-from-motion revisited. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4104-4113, 2016.
206
+
207
+ [22] Y. Dai, H. Li, and M. He. Projective multiview structure and motion from element-wise factorization. IEEE transactions on pattern analysis and machine intelligence, 35(9):2238-2251, 2013.
208
+
209
+ [23] F. Yu and D. Gallup. 3d reconstruction from accidental motion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3986-3993, 2014.
210
+
211
+ [24] L. Ladicky, J. Shi, and M. Pollefeys. Pulling things out of perspective. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 89-96, 2014.
212
+
213
+ [25] N. dos Santos Rosa, V. Guizilini, and V. Grassi. Sparse-to-continuous: Enhancing monocular depth estimation using occupancy maps. In 2019 19th International Conference on Advanced Robotics (ICAR), pages 793-800. IEEE, 2019.
214
+
215
+ [26] H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao. Deep ordinal regression network for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2002-2011, 2018.
216
+
217
+ [27] C. Godard, O. Mac Aodha, and G. J. Brostow. Unsupervised monocular depth estimation with left-right consistency. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6602-6611. IEEE, 2017.
218
+
219
+ [28] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems, pages 2017-2025, 2015.
220
+
221
+ [29] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004.
222
+
223
+ [30] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017.
224
+
225
+ [31] V. M. Babu, K. Das, A. Majumdar, and S. Kumar. Undemon: Unsupervised deep network for depth and ego-motion estimation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1082-1088. IEEE, 2018.
226
+
227
+ [32] R. Li, S. Wang, Z. Long, and D. Gu. Undeepvo: Monocular visual odometry through unsupervised deep learning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7286-7291. IEEE, 2018.
228
+
229
+ [33] Z. Yin and J. Shi. GeoNet: Unsupervised learning of dense depth, optical flow and camera pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2018.
230
+
231
+ [34] C. Luo, Z. Yang, P. Wang, Y. Wang, W. Xu, R. Nevatia, and A. Yuille. Every pixel counts++: Joint learning of geometry and motion with 3d holistic understanding. arXiv preprint arXiv:1810.06125, 2018.
232
+
233
+ [35] F. Aleotti, F. Tosi, M. Poggi, and S. Mattoccia. Generative adversarial networks for unsupervised monocular depth prediction. In 15th European Conference on Computer Vision (ECCV) Workshops, volume 1, page 8, 2018.
234
+
235
+ [36] M. Vankadari, S. Kumar, A. Majumder, and K. Das. Unsupervised learning of monocular depth and ego-motion using conditional patchgans. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5677-5684. International Joint Conferences on Artificial Intelligence Organization, July 2019. doi:10.24963/ijcai.2019/787. URL https://doi.org/10.24963/ijcai.2019/787.
236
+
237
+ [37] C. Godard, O. Mac Aodha, M. Firman, and G. Brostow. Digging into self-supervised monocular depth estimation. arXiv preprint arXiv:1806.01260, 2018.
238
+
239
+ [38] X. Lyu, L. Liu, M. Wang, X. Kong, L. Liu, Y. Liu, X. Chen, and Y. Yuan. Hr-depth: High resolution self-supervised monocular depth estimation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 2294-2301, 2021.
240
+
241
+ [39] A. Petrovai and S. Nedevschi. Exploiting pseudo labels in a self-supervised learning framework for improved monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1578-1588, 2022.
242
+
243
+ [40] J. Yan, H. Zhao, P. Bu, and Y. Jin. Channel-wise attention-based network for self-supervised monocular depth estimation. In 2021 International Conference on 3D Vision (3DV), pages 464-473. IEEE, 2021.
244
+
245
+ [41] T.-W. Hui. Rm-depth: Unsupervised learning of recurrent monocular depth in dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1675-1684, 2022.
246
+
247
+ [42] V. Guizilini, R. Ambrus, S. Pillai, A. Raventos, and A. Gaidon. 3d packing for self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2485-2494, 2020.
248
+
249
+ [43] C. Shu, K. Yu, Z. Duan, and K. Yang. Feature-metric loss for self-supervised learning of depth and egomotion. In European Conference on Computer Vision, pages 572-588. Springer, 2020.
250
+
251
+ [44] H. Li, A. Gordon, H. Zhao, V. Casser, and A. Angelova. Unsupervised monocular depth learning in dynamic scenes. arXiv preprint arXiv:2010.16404, 2020.
252
+
253
+ [45] J. Spencer, R. Bowden, and S. Hadfield. Defeat-net: General monocular depth via simultaneous unsupervised representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14402-14413, 2020.
254
+
255
+ [46] M. Vankadari, S. Garg, A. Majumder, S. Kumar, and A. Behera. Unsupervised monocular depth estimation for night-time images using adversarial domain feature adaptation. In European Conference on Computer Vision, pages 443-459. Springer, 2020.
256
+
257
+ [47] K. Wang, Z. Zhang, Z. Yan, X. Li, B. Xu, J. Li, and J. Yang. Regularizing nighttime weirdness: Efficient self-supervised monocular depth estimation in the dark. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16055-16064, 2021.
258
+
259
+ [48] L. Liu, X. Song, M. Wang, Y. Liu, and L. Zhang. Self-supervised monocular depth estimation for all day images using domain separation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12737-12746, 2021.
260
+
261
+ [49] H. Jin, P. Favaro, and S. Soatto. Real-time feature tracking and outlier rejection with changes in illumination. In Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, volume 1, pages 684-689. IEEE, 2001.
262
+
263
+ [50] S. Baker and I. Matthews. Lucas-kanade 20 years on: A unifying framework. International journal of computer vision, 56(3):221-255, 2004.
264
+
265
+ [51] N. Yang, L. v. Stumberg, R. Wang, and D. Cremers. D3vo: Deep depth, deep pose and deep uncertainty for monocular visual odometry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1281-1292, 2020.
266
+
papers/CoRL/CoRL 2022/CoRL 2022 Conference/7RyzGWLk79H/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,229 @@
1
+ § WHEN THE SUN GOES DOWN: REPAIRING PHOTOMETRIC LOSSES FOR ALL-DAY DEPTH ESTIMATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Self-supervised deep learning methods for joint depth and ego-motion estimation can yield accurate trajectories without needing ground-truth training data. However, as they typically use photometric losses, their performance can degrade significantly when the assumptions these losses make (e.g. temporal illumination consistency, a static scene, and the absence of noise and occlusions) are violated. This limits their use for e.g. nighttime sequences, which tend to contain many point light sources (including on dynamic objects) and low signal-to-noise ratio (SNR) in darker image regions. In this paper, we show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images. First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames. Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth from the networks. And third, we denoise the training images to improve the robustness and accuracy of our approach. These changes allow us to train a single model for both day and nighttime images without needing separate encoders or extra feature networks like existing methods. We perform extensive experiments and ablation studies on the challenging Oxford RobotCar dataset to demonstrate the efficacy of our approach for both day and nighttime sequences.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ An ability to capture 3D scene structure is crucial for many applications, including autonomous driving [1], robotic manipulation [2], and augmented reality [3]. Many methods use LiDAR or fixed-baseline stereo to acquire the depth needed to reconstruct a scene, but researchers have also long been interested in estimating depth from monocular images, driven by the ubiquity, low cost, low power consumption and ease of deployment of monocular cameras. By contrast, LiDAR can be power-hungry, and stereo rigs must be calibrated and time-synchronised to achieve good performance.
16
+
17
+ Multi-view monocular depth estimation approaches have long used variable-baseline stereo over multiple images to recover depth $\left\lbrack {4,5}\right\rbrack$ . Meanwhile, progress in deep learning has opened up the additional possibility of estimating depth from a single monocular image. Deep learning methods for depth estimation can be broadly divided into two types, namely supervised methods [6, 7], and self/unsupervised methods $\left\lbrack {8,9,{10}}\right\rbrack$ . Typically, supervised approaches have achieved very good results for the dataset(s) on which they are trained, but their need for ground-truth information during training has often hindered their deployment in new domains.
18
+
19
+ By contrast, self/unsupervised methods have typically adopted the use of a geometry-based loss function, inspired by the strong physical principles of traditional methods $\left\lbrack {{11},{12}}\right\rbrack$ . This loss function is commonly referred to as the photometric or appearance loss, and is based on the assumptions that (i) the scene is static (i.e. contains no moving objects), (ii) the illumination in the scene is diffusive (i.e. there are no specular reflections) and temporally consistent (i.e. the pixels to which any scene point projects in any two consecutive frames have the same intensity), and (iii) the images are free of noise and occlusions $\left\lbrack {{11},{12},{13},{14}}\right\rbrack$ . In practice, many of these assumptions are at least partly false, which can lead to errors in the estimated depth: scenes are quite likely to contain dynamic objects (e.g. cars, cyclists and pedestrians, in an outdoor driving scenario), surface materials are rarely fully diffusive, and occlusions are common. During the day, it is somewhat reasonable to assume that the illumination is moderately temporally consistent for image sequences captured outdoors, as the sun is by far the dominant light source in that case, and the light it casts changes only slowly over time; however, at night, the numerous point light sources that are typically turned on after dark (e.g. car headlights, lamp posts, etc.) can cause the illumination to change drastically from one frame to the next. At night, also, the motion blur associated with the movement of dynamic objects in the scene (including the ego-vehicle) becomes worse, owing to the longer exposure times typically used when capturing night-time images [15, 16], and the signal-to-noise ratio of the (darker) images becomes much lower than it would be during daytime. Such issues, as illustrated in Figure 1, inhibit the straightforward use of deep networks based on photometric loss for night-time sequences.
20
+
21
+ [Figure 1: panels (a) and (b); see caption below]
22
+
23
+ Figure 1: (a) The challenges posed by night-time images: (1) low visibility and noise (patch enhanced for better readability); (2) moving light sources with saturating image regions; (3) point light sources; (4) extreme motion blur. (b) Despite these adverse conditions, which violate the assumptions made by the photometric loss, our method can successfully estimate accurate depth maps.
24
+
25
+ In this paper, we address this problem by directly targeting violations of the temporal illumination consistency, static scene and noise-free assumptions on which the photometric loss relies. As shown by our day and night results in Table 1, these three together account for much of the discrepancy in performance between daytime and night-time. A lack of temporal illumination consistency caused by point light sources in the scene can cause pixels to be incorrectly matched between consecutive frames. To rectify this, we propose a novel per-pixel neural intensity transformation that learns to compensate for these light sources (see §3.2). Whilst conceptually straightforward, this approach is surprisingly effective, as our results in §4 demonstrate. Interestingly, they also show that it is able to operate well over wide (motion parallax) baselines, allowing us to leverage the better depth estimation performance that wider baselines offer. To correct for dynamic objects in the scene, as well as motion blur, we predict a per-pixel residual flow map (see §3.3) that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth from the networks. This improves depth estimation performance at any time of day (see §4), but has additional theoretical benefits for night-time sequences because of the greater motion blur from which they typically suffer. Lastly, we robustify our approach against noise by incorporating Neighbour2Neighbour [17], a state-of-the-art denoising module, in our photometric loss formulation (see §3.4).
26
+
27
+ § 2 RELATED WORK
28
+
29
+ Estimating depth from images has a long history in computer vision. Several methods use either stereo images $\left\lbrack {{18},{19},{20}}\right\rbrack$ , or two or more images taken from different viewing angles $\left\lbrack {{21},{22},{23}}\right\rbrack$ . We try to solve this problem using a single monocular image, without any constraints on the scene of interest. Various methods have addressed this problem using supervised learning [6,7,24,25,26]. However, it is infeasible to have ground-truth depth maps for training on every scene, which limits the application of these methods and helps motivate unsupervised solutions to this problem.
30
+
31
+ [Figure 2 diagram: DepthNet and MotionNet, whose features drive the ego-motion, residual flow map, neural intensity transform (light changes) and denoising modules used for view reconstruction and the photometric loss]
32
+
33
+ Figure 2: The architecture of our proposed method (see $§3$ for details).
34
+
35
+ Unsupervised Methods: Garg et al. [8] proposed a geometry-based loss function to train a network in a completely unsupervised fashion using a pair of stereo images. Monodepth [27] improved this by using differentiable image warping [28] and structural similarity-based [29] image comparison loss. SfMLearner [30] used only monocular images to jointly learn depth and ego-motion. It was further improved by combining stereo and monocular losses in [31, 32]. Later, GeoNet [33] and EPC [34] learnt per-pixel optical flow maps along with depth and ego-motion to mitigate the effect of moving objects. Some methods use GAN-based learning to train their systems [10, 35, 36]. Recently, Monodepth2 [37] extended Monodepth to the temporal domain, proposing a few architectural changes and robust loss functions to achieve state-of-the-art results. HR-Depth [38] used an effective skip connection and a convolution block to integrate spatial and semantic information. SD-SSMDE [39] introduced a two-stage training strategy to improve scale and inter-frame scale consistency in depth by utilising depth estimation from the first stage as a pseudo-label. Based on channel-wise attention, CADepth-Net [40] proposed structure perception and detail emphasis modules for capturing the context of scenes with the detail for the depth estimation. RM-Depth [41] proposed recurrent modulation units for an effective fusion of deep features with fewer parameters, and a warping-based motion field for moving objects to improve the scene rigidity, leading to enhanced depth estimation. More broadly, recent years have also seen a wide range of other advances in depth estimation, e.g. changes to the network architecture [42], the addition of extra loss functions [43], and better handling of dynamic objects [44]. However, all of these methods have been tested on standard daytime datasets, whereas our method is designed to work at night as well.
36
+
37
+ Nighttime Methods: All the methods above are trained using photometric loss as the main supervision signal, and with an assumption of temporal illumination consistency, which is not valid at night. A few methods such as DeFeat-Net [45], ADFA [46] and [14] have explored how to estimate depth information from nighttime RGB images. DeFeat-Net [45] learns $n$-dimensional deep feature representations (assumed to be illumination-invariant) using a pixel-wise contrastive loss; the feature maps are used alongside the images for photometric loss calculation during training. ADFA [46] mimics a daytime depth estimation model by learning a new encoder that can generate 'day-like' features from nighttime images using a domain adaptation approach. Instead of feature translation as in [46], the authors of [14] propose a joint network for image translation and stereo image-based depth estimation. More recently, [47] again uses photometric losses, together with an image enhancement module and a GAN-based depth regulariser. Liu et al. [48] divided the day and nighttime images into view-invariant and view-variant feature maps using separate encoders, and used the view-invariant information for depth estimation. All these methods either need two separate encoders for day and nighttime images [46, 48, 47], or need to learn an illumination-invariant feature space [45]. By contrast, our proposed method learns in a completely self-supervised fashion, without needing stereo images, ground-truth depth information or any additional feature learning.
38
+
39
+ § 3 METHOD
40
+
41
+ § 3.1 BASELINE METHOD
42
+
43
+ We first recap the core tenets of existing photometric loss methods, which typically use two networks, a depth network (or DepthNet) and a motion network (or MotionNet). The DepthNet takes an individual colour image as input, and is used to predict a depth image ${D}_{t}$ for each colour image ${I}_{t}$ in the input sequence. The MotionNet takes a consecutive pair of images ${I}_{t}$ and ${I}_{t + 1}$ as input, and is used to output the ego-motion ${T}_{t,t + 1}$ of the camera between them. The estimated depth and ego-motion can be used to reproject a pixel $\mathbf{u} = {\left\lbrack u,v\right\rbrack }^{\top }$ in frame ${I}_{t}$ into ${I}_{t + 1}$ , the subsequent frame in the sequence, via ${\dot{V}}_{t}\left( \mathbf{u}\right) = K{T}_{t,t + 1}{D}_{t}\left( \mathbf{u}\right) {K}^{-1}\dot{\mathbf{u}}$ , in which $\dot{\mathbf{u}}$ denotes the homogeneous form of $\mathbf{u},K \in {\mathbb{R}}^{3 \times 3}$ encodes the camera intrinsics, and ${\dot{V}}_{t}\left( \mathbf{u}\right) \in {\mathbb{R}}^{3}$ denotes the homogeneous form of ${V}_{t}\left( \mathbf{u}\right) \in {\mathbb{R}}^{2}$ , a 2D point in the image plane of ${I}_{t + 1}$ (which may or may not lie within the bounds of the actual image). This can be used to reconstruct an image ${I}_{t}^{\prime }$ by sampling from ${I}_{t + 1}$ around the reprojected points, using bilinear interpolation [28] to achieve a smoother result. Formally,
44
+
45
+ $$
46
+ {I}_{t}^{\prime }\left( \mathbf{u}\right) = \left\{ \begin{array}{ll} \operatorname{interpolate}\left( {{I}_{t + 1},{V}_{t}\left( \mathbf{u}\right) }\right) & \text{ if }\mathbf{u} \in {M}_{t} \\ \mathbf{0} & \text{ otherwise, } \end{array}\right. \tag{1}
47
+ $$
48
+
49
+ in which ${M}_{t} = \left\{ {\mathbf{u} : \rho \left( {{V}_{t}\left( \mathbf{u}\right) }\right) \in \Omega \left( {I}_{t + 1}\right) }\right\}$ is the set of pixels whose reprojections into ${I}_{t + 1}$ , when rounded to the nearest pixel using $\rho$ , fall within the image bounds $\Omega \left( {I}_{t + 1}\right)$ . The reconstructed image ${I}_{t}^{\prime }$ can then be compared to the original image ${I}_{t}$ to calculate the loss values needed for training. The loss we target, namely photometric loss, has been used by many recent deep learning-based depth estimation techniques $\left\lbrack {8,{31},{30},{36}}\right\rbrack$ . It is normally calculated as a convex combination of pixel-wise difference and single-scale structural dissimilarity (SSIM) [29], via
50
+
51
+ $$
52
+ {L}_{p}^{\left( t\right) } = \frac{1}{\left| {M}_{t}\right| }\mathop{\sum }\limits_{{\mathbf{u} \in {M}_{t}}}\left( {\alpha \frac{1 - \operatorname{SSIM}\left( {{I}_{t}\left( \mathbf{u}\right) ,{I}_{t}^{\prime }\left( \mathbf{u}\right) }\right) }{2} + \left( {1 - \alpha }\right) \left| {{I}_{t}\left( \mathbf{u}\right) - {I}_{t}^{\prime }\left( \mathbf{u}\right) }\right| }\right) . \tag{2}
53
+ $$
54
+
55
+ Most existing unsupervised methods (e.g. [30, 31, 37, 42]) use this as the backbone of their formulation. To ensure a fair comparison with current night-time state-of-the-art methods [45, 48, 47], we base our modifications in this paper on Monodepth2 [37], a commonly used baseline.
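+
+ To make the baseline concrete, the following is a minimal PyTorch-style sketch of the reprojection, bilinear reconstruction and photometric loss described above (Equations 1 and 2). It assumes the depth map, the ego-motion (expressed here as a $4 \times 4$ homogeneous transform) and the camera intrinsics are already available; the function names, the precomputed SSIM map and the value of $\alpha$ are illustrative placeholders rather than details taken from the paper.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def reproject(depth, T, K):
+     """Reproject every pixel of I_t into I_{t+1}: returns coords V_t (B,H,W,2) and a validity mask."""
+     B, _, H, W = depth.shape
+     device, dtype = depth.device, depth.dtype
+     v, u = torch.meshgrid(torch.arange(H, device=device, dtype=dtype),
+                           torch.arange(W, device=device, dtype=dtype), indexing='ij')
+     pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(1, 3, -1).expand(B, -1, -1)
+     cam = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)           # back-project to 3D
+     cam = torch.cat([cam, torch.ones(B, 1, H * W, device=device, dtype=dtype)], dim=1)
+     proj = K @ (T @ cam)[:, :3]                                         # apply ego-motion, project
+     uv = (proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)).reshape(B, 2, H, W).permute(0, 2, 3, 1)
+     mask = (uv[..., 0] >= 0) & (uv[..., 0] <= W - 1) & (uv[..., 1] >= 0) & (uv[..., 1] <= H - 1)
+     return uv, mask.float()
+
+ def reconstruct(next_img, uv):
+     """Bilinearly sample I_{t+1} at the reprojected coordinates (Equation 1)."""
+     B, _, H, W = next_img.shape
+     grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1, 2 * uv[..., 1] / (H - 1) - 1], dim=-1)
+     return F.grid_sample(next_img, grid, mode='bilinear', align_corners=True)
+
+ def photometric_loss(img, recon, mask, ssim_map=None, alpha=0.85):
+     """Convex combination of per-pixel SSIM and L1 terms over valid pixels (Equation 2)."""
+     l1 = (img - recon).abs().mean(dim=1)                                # (B,H,W)
+     dssim = 0.5 * (1 - ssim_map) if ssim_map is not None else torch.zeros_like(l1)
+     per_pixel = alpha * dssim + (1 - alpha) * l1                        # alpha = 0.85 is a typical choice
+     return (per_pixel * mask).sum() / mask.sum().clamp(min=1)
+ ```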
56
+
57
+ § 3.2 LIGHTING CHANGE COMPENSATION
58
+
59
+ The numerous point light sources that are typically turned on after dark (e.g. car headlights, lamp posts, etc.) can cause the illumination of a scene to change significantly from frame ${I}_{t}$ to frame ${I}_{t + 1}$ . A problematic special case occurs when a light source moves with the camera (e.g. car headlights), which can lead to large holes in the estimated depth directly in front of the ego-vehicle [45, 48]. In our approach, we compensate for the illumination changes by estimating a per-pixel transformation that, when applied to ${I}_{t + 1}$ , can mitigate the changes in lighting that have occurred since ${I}_{t}$ . We draw some inspiration from [49, 50, 51], which use a single whole-image transformation based on two scalar values to compensate for the difference in exposure time between a pair of images, based on the observation that such a difference creates approximately uniform intensity changes over the entire image. However, in our case, the intensity changes are far from uniform over the image, owing to both the motions of the ego-vehicle and other objects in the scene, and the distances between the ego-vehicle and static point light sources. For this reason, we propose a per-pixel formulation here.
60
+
61
+ Our approach starts by passing the features produced by the last convolutional layer of the MotionNet through a lighting change decoder to estimate two per-pixel change images, ${C}_{t}$ and ${B}_{t}$ (see Figure 2). These (respectively) aim to capture the per-pixel changes in contrast (scale) and brightness (shift) that have occurred between the two input frames. As shown in Figure 4, the brightness image ${B}_{t}$ broadly captures the extra light added to the image by e.g. vehicle headlights, and the contrast image ${C}_{t}$ broadly captures the changes in ambient light due to the motion of the ego-vehicle towards or away from point light sources such as street lamps. We use these images to transform the reconstructed image ${I}_{t}^{\prime }$ via ${\widetilde{I}}_{t} = {C}_{t} \odot {I}_{t}^{\prime } + {B}_{t}$, in which $\odot$ denotes the Hadamard product.
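+
+ As a minimal illustration (with placeholder names, and an output parameterisation that we have assumed purely for this sketch), the lighting correction itself is just an element-wise affine transform of the reconstructed image:
+
+ ```python
+ import torch
+
+ def compensate_lighting(recon, contrast, brightness):
+     """Apply the estimated per-pixel change: I_tilde = C * I' + B (all tensors broadcastable to (B,C,H,W))."""
+     return contrast * recon + brightness
+
+ # One plausible way for the lighting change decoder to produce well-behaved maps is to squash
+ # its raw outputs so that the contrast stays near 1 and the brightness near 0 when no lighting
+ # change has occurred, e.g. contrast = 2 * torch.sigmoid(raw_c) and brightness = torch.tanh(raw_b).
+ # This parameterisation is an assumption made for illustration, not the authors' stated choice.
+ ```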
62
+
63
+ § 3.3 MOTION COMPENSATION
64
+
65
+ As seen in $§{3.1}$, the standard photometric loss makes use of correspondences between consecutive frames that have been established via reprojection, based on the ego-motion and depth estimated by the networks. Assuming that (i) the ego-motion and depth have been estimated well, (ii) the scene is static, and (iii) there is minimal motion blur, the correspondences established in this way will broadly match those that would have been established had we used the ground truth optic flow ${\Phi }_{t}\left( \cdot \right)$ from frame $t$ to frame $t + 1$. However, if objects move with respect to the background scene, or anything visible in the image moves with respect to the ego-camera (which can cause motion blur), then the reprojection correspondences may be incorrect. To correct for these errors, we predict a residual flow map ${R}_{t}$, such that for each pixel $\mathbf{u} \in \Omega \left( {I}_{t}\right)$, ${R}_{t}\left( \mathbf{u}\right) \in {\mathbb{R}}^{2}$ is an estimate of $\left( {\mathbf{u} + {\Phi }_{t}\left( \mathbf{u}\right) }\right) - {V}_{t}\left( \mathbf{u}\right)$, the 2D offset from the reprojection correspondence of $\mathbf{u}$, namely ${V}_{t}\left( \mathbf{u}\right)$, to its ground truth correspondence in frame $t + 1$, namely $\mathbf{u} + {\Phi }_{t}\left( \mathbf{u}\right)$. We can then add ${R}_{t}\left( \mathbf{u}\right)$ to ${V}_{t}\left( \mathbf{u}\right)$ for each pixel $\mathbf{u}$ to obtain a potentially more accurate correspondence for use in reconstructing ${I}_{t}^{\prime }$ via Equation 1.
66
+
67
+ Some methods $\left\lbrack {{34},{33},{44}}\right\rbrack$ already exist that predict residual flow for daytime images. By contrast, we avoid using a separate encoder-decoder network or computationally-intensive image warping-based bilinear interpolation for supervision. Instead, we estimate residual flow using an efficient sparsity-based formulation. This involves introducing a residual flow decoder that takes the features of the final convolutional layer of the MotionNet as input and the features of previous layers in the MotionNet via skip connections, and outputs residual flow maps $\left\{ {{R}_{t,s} : s \in \{ 0,1,2,3\} }\right\}$ at four different scales (each ${R}_{t,s}$ has a width and height that is $1/{2}^{s}$ that of ${I}_{t}$ , and ${R}_{t,0} \equiv {R}_{t}$ ).
68
+
69
+ There is no direct supervision available to learn the residual flow maps. For this reason, we choose instead to encourage sparsity in the residual flow estimates, so that the estimated depth and ego-motion can explain the majority of the scene, and the left-over can be explained by the residual flow maps. To achieve this, we adopt the sparsity loss from [44], i.e.
70
+
71
+ $$
72
+ {L}_{r}^{\left( t\right) } = \mathop{\sum }\limits_{{s = 0}}^{3}\left\langle \left| {R}_{t,s}\right| \right\rangle /{2}^{s}\mathop{\sum }\limits_{{\mathbf{u} \in \Omega \left( {I}_{t,s}\right) }}\sqrt{1 + \left| {{R}_{t,s}\left( \mathbf{u}\right) }\right| /\left\langle \left| {R}_{t,s}\right| \right\rangle }, \tag{3}
73
+ $$
74
+
75
+ in which ${I}_{t,s}$ is a downsampled version of ${I}_{t}$ at scale $s$ , and $\left\langle \left| {R}_{t,s}\right| \right\rangle$ is the spatial average of the absolute residual flow map $\left| {R}_{t,s}\right|$ . By contrast with [44], here we introduce a normalising factor of $1/{2}^{s}$ at each scale, since the original loss was for scene flow, where the flow magnitude is independent of the resolution of the flow maps, which is not the case for the 2D residual flow we consider.
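+
+ A hedged sketch of this loss is shown below; whether the per-pixel magnitude $\left| {R}_{t,s}\left( \mathbf{u}\right) \right|$ is taken component-wise or as a 2D norm is an implementation choice, and we take it component-wise here purely for illustration.
+
+ ```python
+ import torch
+
+ def residual_flow_sparsity_loss(residual_flows):
+     """residual_flows: list of residual flow maps R_{t,s}, each of shape (B, 2, H_s, W_s), for s = 0..3."""
+     loss = 0.0
+     for s, R in enumerate(residual_flows):
+         abs_R = R.abs()
+         mean_abs = abs_R.mean(dim=(2, 3), keepdim=True).clamp(min=1e-7)   # spatial average <|R_{t,s}|>
+         per_pixel = (mean_abs / 2 ** s) * torch.sqrt(1.0 + abs_R / mean_abs)
+         loss = loss + per_pixel.sum(dim=(1, 2, 3)).mean()                 # sum over pixels, mean over batch
+     return loss
+ ```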
76
+
77
+ § 3.4 IMAGE DENOISING
78
+
79
+ Image noise is yet another key factor that affects the performance of the photometric loss. In practice, the noise is independent from one image to the next, and is mainly caused by a low SNR in the darker regions of the image. Handling this noise is of crucial importance, as the photometric loss is the only training signal, and supervises all of the modules we have mentioned thus far. To remove the noise from the images, we chose to use Neighbour2Neighbour [17], a state-of-the-art unsupervised denoising model trained on ImageNet with zero-mean Gaussian noise, whose standard deviation was varied from 5 to 50 during training. This model can either be used to denoise all images input to the network at both training time and test time, or it can be used solely at training time to denoise the images for the purpose of calculating the loss. In practice, we chose the latter approach, as denoising at test time has two major disadvantages: (i) it can significantly add to the computational burden at runtime, slowing down the depth estimation; and (ii) any errors in the denoising process can lead to downstream errors in the depth maps, even though the depth estimation model itself might have been trained well. By contrast, restricting denoising to training time has the advantage of allowing us to make the depth and motion networks robust to noise by training them on the original images.
+
+ § 3.5 FULL PIPELINE
80
+
81
+ We can now formulate our full pipeline as follows:
82
+
83
+ $$
84
+ {D}_{t} = \mathcal{D}\left( {I}_{t}\right) ,{f}_{n} = \mathcal{M}{\mathcal{E}}_{1 : n}\left( \left\lbrack {{I}_{t},{I}_{t + 1}}\right\rbrack \right)
85
+ $$
86
+
87
+ $$
88
+ {T}_{t,t + 1} = \mathcal{M}\mathcal{D}\left( {f}_{N}\right) ,{R}_{t} = \mathcal{R}\mathcal{F}\mathcal{D}\left( \left\{ {{f}_{n} : 1 \leq n \leq N}\right\} \right) ,\left( {{C}_{t},{B}_{t}}\right) = \mathcal{L}\mathcal{C}\mathcal{D}\left( {f}_{N}\right)
89
+ $$
90
+
91
+ $$
92
+ {\mathcal{I}}_{t} = \mathcal{D}\mathcal{N}\left( {I}_{t}\right) ,{\mathcal{I}}_{t + 1} = \mathcal{D}\mathcal{N}\left( {I}_{t + 1}\right)
93
+ $$
94
+
95
+ $$
96
+ {\mathcal{I}}_{t}^{\prime } = \operatorname{reconstruct}\left( {{\mathcal{I}}_{t + 1},{V}_{t} + {R}_{t}}\right) \tag{4}
97
+ $$
98
+
99
+ $$
100
+ {\widetilde{\mathcal{I}}}_{t} = {C}_{t} \odot {\mathcal{I}}_{t}^{\prime } + {B}_{t}
101
+ $$
102
+
103
+ $$
104
+ {L}_{p}^{\left( t\right) } = \frac{1}{\left| {M}_{t}\right| }\mathop{\sum }\limits_{{\mathbf{u} \in {M}_{t}}}\left( {\alpha \frac{1 - \operatorname{SSIM}\left( {{\mathcal{I}}_{t}\left( \mathbf{u}\right) ,{\widetilde{\mathcal{I}}}_{t}\left( \mathbf{u}\right) }\right) }{2} + \left( {1 - \alpha }\right) \left| {{\mathcal{I}}_{t}\left( \mathbf{u}\right) - {\widetilde{\mathcal{I}}}_{t}\left( \mathbf{u}\right) }\right| }\right)
105
+ $$
106
+
107
+ The inputs to our system are a consecutive pair of images ${I}_{t}$ and ${I}_{t + 1}$, whilst $\mathcal{D}$ denotes the DepthNet, $\mathcal{M}{\mathcal{E}}_{1 : n}$ denotes the first $n$ layers of the $N$ -layer MotionNet encoder, $\mathcal{M}\mathcal{D}$ denotes the MotionNet decoder, $\mathcal{R}\mathcal{F}\mathcal{D}$ denotes the residual flow decoder, $\mathcal{L}\mathcal{C}\mathcal{D}$ denotes the lighting change decoder, and $\mathcal{D}\mathcal{N}$ denotes the denoiser [17]. The reconstruct function reconstructs ${\mathcal{I}}_{t}^{\prime }$ as per Equation 1.
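+
+ Reusing the helper functions from the sketches above, the forward pass of Equation 4 can be written schematically as follows; the module objects mirror the notation in the text but are placeholders rather than the released implementation.
+
+ ```python
+ import torch
+
+ def forward_pipeline(I_t, I_tp1, depth_net, motion_encoder, motion_decoder,
+                      residual_flow_decoder, lighting_decoder, denoiser, K):
+     D_t = depth_net(I_t)                                        # per-pixel depth
+     feats = motion_encoder(torch.cat([I_t, I_tp1], dim=1))      # list of feature maps f_1..f_N
+     T = motion_decoder(feats[-1])                                # ego-motion (4x4 transform here)
+     R_t = residual_flow_decoder(feats)                           # full-resolution residual flow (B,2,H,W)
+     C_t, B_t = lighting_decoder(feats[-1])                       # per-pixel contrast / brightness maps
+     J_t, J_tp1 = denoiser(I_t), denoiser(I_tp1)                  # denoised copies, used at training time only
+     V_t, mask = reproject(D_t, T, K)                             # rigid reprojection (Section 3.1)
+     recon = reconstruct(J_tp1, V_t + R_t.permute(0, 2, 3, 1))    # motion-corrected correspondences (Section 3.3)
+     recon = compensate_lighting(recon, C_t, B_t)                 # lighting compensation (Section 3.2)
+     return photometric_loss(J_t, recon, mask), D_t               # SSIM term omitted here for brevity
+ ```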
108
+
109
+ § 3.6 MAKING THE PIPELINE BIDIRECTIONAL
110
+
111
+ Monodepth2 [37] calculates its photometric loss not only in the forwards direction, from ${I}_{t}$ to ${I}_{t + 1}$ , but also in the backwards direction, from ${I}_{t}$ to ${I}_{t - 1}$ , before combining the losses. This allows us to use the idea of minimum reprojection error to account for occluded pixels, and so we do the same. We also adopt the auto-masking losses ${L}_{a}^{\left( t\right) }$ from Monodepth2 [37], as even though our method can cope with moving objects, it is very difficult to use parallax to disentangle the motion of objects that are moving in the same direction and at the same speed as the ego-vehicle. We further include the commonly used edge-aware gradient smoothing loss ${L}_{g}^{\left( t\right) }$ to maintain spatial smoothness over the estimated depth maps. Our final loss ${L}^{\left( t\right) }$ then becomes the weighted sum
112
+
113
+ $$
114
+ {L}^{\left( t\right) } = \min \left( {{L}_{p - }^{\left( t\right) },{L}_{p + }^{\left( t\right) },{L}_{a - }^{\left( t\right) },{L}_{a + }^{\left( t\right) }}\right) + {\lambda }_{r}\left( {{L}_{r - }^{\left( t\right) } + {L}_{r + }^{\left( t\right) }}\right) + {\lambda }_{g}{L}_{g}^{\left( t\right) }, \tag{5}
115
+ $$
116
+
117
+ in which $+ / -$ denote the forward/backward versions of the losses, and ${\lambda }_{r},{\lambda }_{g} \in \mathbb{R}$ are the weights.
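+
+ Assuming, as in Monodepth2 [37], that the photometric and auto-masking terms are per-pixel maps and that the minimum is taken per pixel before averaging, the combined loss of Equation 5 can be sketched as follows (${\lambda }_{r}$ and ${\lambda }_{g}$ are left as arguments rather than fixed to particular values):
+
+ ```python
+ import torch
+
+ def total_loss(Lp_bwd, Lp_fwd, La_bwd, La_fwd, Lr_bwd, Lr_fwd, Lg, lambda_r, lambda_g):
+     """Lp_*/La_* are per-pixel loss maps; Lr_* and Lg are scalar losses (Equation 5)."""
+     reproj = torch.minimum(torch.minimum(Lp_bwd, Lp_fwd),
+                            torch.minimum(La_bwd, La_fwd)).mean()
+     return reproj + lambda_r * (Lr_bwd + Lr_fwd) + lambda_g * Lg
+ ```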
118
+
119
+ § 4 EXPERIMENTS
120
+
121
+ In $§{4.1}$ , we compare our depth estimation performance to a number of state-of-the-art approaches in a variety of different daytime and/or night-time contexts. In $§{4.2}$ , we present a study on the effect of parallax to help explain the importance of our neural intensity transformation module. Finally, in §4.3, we perform an ablation study to analyse the contributions made by the three individual components of our approach. Further experiments can be found in the supplementary material.
122
+
123
+ § 4.1 DEPTH EVALUATION
124
+
125
+ We compare with 4 state-of-the-art unsupervised monocular methods: Monodepth2 [37], DeFeat-Net [45], ADDS-Depth-Night [48] and RNW [47] (see Figure 3 and Table 1). We tested our model with 3 different data variations: day only (d), night only (n), and a mix of day and night (d & n). Monodepth2 [37] can be trained with all 3 configurations, although it has already been outperformed by DeFeat-Net [45] in the d & n setting. For the d and n settings, we outperform it by a significant margin in both error and accuracy (see Table 1). DeFeat-Net [45] and ADDS-Depth-Night [48] were originally trained with a d & n configuration. We evaluated the pre-trained models they released on our test split. Our method outperforms both methods by a significant margin on the nighttime sequences (see Table 1). Please note that we do not use any additional feature representation-based losses as used in DeFeat-Net [45], or paired day and night images as used in ADDS-Depth-Night [48]. RNW [47], another recent method, is also built on Monodepth2, but targets nighttime data only. As per Figure 3, our depth estimation results are sharp and better able to preserve edges than the competing methods. We also found that using a longer baseline improves depth estimation performance. However, naïvely using a wider baseline without also using our neural intensity transform can lead to a severe decrease in accuracy, particularly for nighttime images.
126
+
127
+ <table><tr><td>Test</td><td>Method</td><td>Train</td><td>Abs. Rel.</td><td>Sq. Rel.</td><td>RMSE</td><td>Log RMSE</td><td>$\delta < 1.25$</td><td>$\delta < 1.25^2$</td><td>$\delta < 1.25^3$</td></tr><tr><td>Day</td><td>Monodepth2 [37]</td><td>d</td><td>0.219</td><td>4.525</td><td>7.641</td><td>0.285</td><td>0.679</td><td>0.862</td><td>0.930</td></tr><tr><td>Day</td><td>Ours</td><td>d</td><td>0.191</td><td>1.710</td><td>6.158</td><td>0.253</td><td>0.713</td><td>0.904</td><td>0.962</td></tr><tr><td>Day</td><td>DeFeat-Net [45]</td><td>d & n</td><td>0.247</td><td>2.980</td><td>7.884</td><td>0.305</td><td>0.650</td><td>0.866</td><td>0.943</td></tr><tr><td>Day</td><td>RNW [47]</td><td>d & n</td><td>0.297</td><td>2.608</td><td>7.996</td><td>0.359</td><td>0.431</td><td>0.773</td><td>0.930</td></tr><tr><td>Day</td><td>ADDS-Depth-Night [48]</td><td>d & n</td><td>0.239</td><td>2.089</td><td>6.743</td><td>0.295</td><td>0.614</td><td>0.870</td><td>0.950</td></tr><tr><td>Day</td><td>Ours</td><td>d & n</td><td>0.176</td><td>1.603</td><td>6.036</td><td>0.245</td><td>0.750</td><td>0.912</td><td>0.963</td></tr><tr><td>Night</td><td>Monodepth2 [37]</td><td>n</td><td>0.453</td><td>21.310</td><td>11.420</td><td>0.444</td><td>0.700</td><td>0.873</td><td>0.930</td></tr><tr><td>Night</td><td>RNW MCIE + SBM [47]</td><td>n</td><td>0.350</td><td>7.934</td><td>8.994</td><td>0.407</td><td>0.674</td><td>0.861</td><td>0.922</td></tr><tr><td>Night</td><td>Ours</td><td>n</td><td>0.186</td><td>1.656</td><td>6.288</td><td>0.248</td><td>0.728</td><td>0.919</td><td>0.969</td></tr><tr><td>Night</td><td>DeFeat-Net [45]</td><td>d & n</td><td>0.334</td><td>4.589</td><td>8.606</td><td>0.358</td><td>0.586</td><td>0.827</td><td>0.911</td></tr><tr><td>Night</td><td>ADDS-Depth-Night [48]</td><td>d & n</td><td>0.287</td><td>2.569</td><td>7.985</td><td>0.339</td><td>0.490</td><td>0.816</td><td>0.946</td></tr><tr><td>Night</td><td>RNW [47]</td><td>d & n</td><td>0.185</td><td>1.710</td><td>6.549</td><td>0.262</td><td>0.733</td><td>0.910</td><td>0.960</td></tr><tr><td>Night</td><td>Ours</td><td>d & n</td><td>0.174</td><td>1.637</td><td>6.302</td><td>0.245</td><td>0.754</td><td>0.915</td><td>0.964</td></tr></table>
171
+
172
+ Table 1: A quantitative comparison of our method. The results of Monodepth2 [37] are reported after retraining it. Those of DeFeat-Net [45] and ADDS-Depth-Night [48] are reported using the checkpoints from their public repositories. The evaluation uses a maximum depth of ${50}\mathrm{\;m}$ . Underlined methods use daytime images as main supervision or for regularisation losses.
173
+
174
+ [Figure 3 image columns, left to right: Input RGB Image, LiDAR Ground Truth, Monodepth2, DeFeat-Net, ADDS-DepthNet, Ours]
175
+
176
+ Figure 3: A qualitative comparison of our proposed method with the state of the art.
177
+
178
+ § 4.2 EFFECT OF PARALLAX
179
+
180
+ To better understand how depth estimation performance is affected by increasing the average parallax (metric separation) between the images we use to calculate the photometric loss, we constructed a new nighttime training split by increasing the intra-triplet stride (see supplementary material) to 2, which increased the average parallax between the images from ${0.353}\mathrm{\;m}$ to ${0.706}\mathrm{\;m}$ . Without our neural intensity transformation, the depth estimation performance significantly decreased compared to the original nighttime training split (see the difference between the RMSEs of the baseline in the top and bottom parts of Table 2). A key cause of this in night images is likely the headlights of the ego-vehicle, which can cause the pixel intensities to change drastically between frames. However, with our neural intensity transformation, the depth estimation performance was found to instead increase, which we hypothesise to be because by compensating for the lighting changes, we make it possible to exploit the stronger supervision that can be offered by a wider baseline.
181
+
182
+ § 4.3 ABLATION STUDY
183
+
184
+ Lighting Change Compensation. In Figure 4(a), we show several reference images and their lighting change maps. The intensity changes are non-uniform, so we cannot use the existing correction approaches from $\left\lbrack {{49},{50},{51}}\right\rbrack$ . We also observe that our method is able to clearly disentangle both the changes in ambient light resulting from movement towards/away from point light sources (captured by ${C}_{t}$ ) and the additional light added to the road pixels in the images by the ego-vehicle headlights (captured by ${B}_{t}$ ). Our neural intensity transform significantly reduces the RMSE error compared to the baseline (see Table 2), and is also able to fill in holes in front of the ego-vehicle (see Figure 3).
185
+
186
+ <table><tr><td>Stride</td><td>Method</td><td>Abs. Rel.</td><td>Sq. Rel.</td><td>RMSE</td><td>Log RMSE</td><td>$\delta < 1.25$</td><td>$\delta < 1.25^2$</td><td>$\delta < 1.25^3$</td></tr><tr><td>1</td><td>Baseline</td><td>0.266</td><td>5.647</td><td>6.305</td><td>0.331</td><td>0.759</td><td>0.9013</td><td>0.947</td></tr><tr><td>1</td><td>w/ NIT</td><td>0.190</td><td>1.824</td><td>4.848</td><td>0.257</td><td>0.763</td><td>0.919</td><td>0.965</td></tr><tr><td>1</td><td>w/ Denoising</td><td>0.163</td><td>1.256</td><td>4.193</td><td>0.224</td><td>0.801</td><td>0.935</td><td>0.973</td></tr><tr><td>1</td><td>Full Model</td><td>0.154</td><td>1.174</td><td>4.120</td><td>0.216</td><td>0.811</td><td>0.939</td><td>0.976</td></tr><tr><td>2</td><td>Baseline</td><td>0.602</td><td>63.914</td><td>14.726</td><td>0.467</td><td>0.785</td><td>0.902</td><td>0.939</td></tr><tr><td>2</td><td>w/ NIT</td><td>0.169</td><td>1.727</td><td>4.693</td><td>0.236</td><td>0.812</td><td>0.929</td><td>0.967</td></tr><tr><td>2</td><td>Full Model</td><td>0.131</td><td>0.926</td><td>3.731</td><td>0.188</td><td>0.852</td><td>0.949</td><td>0.980</td></tr></table>
212
+
213
+ Table 2: Ablation study showing the importance of different modules in our system. The maximum depth was set to ${30}\mathrm{\;m}$ for this study. ’Stride’ denotes intra-triplet stride (see supplementary material).
214
+
215
+ [Figure 4 panels: (a) ${I}_{t}$, ${C}_{t}$, ${B}_{t}$; (b) ${I}_{t}$, ${R}_{t}$, ${D}_{t}$; (c) training loss vs. training step for 'w/ NIT only' and 'w/ NIT + Denoising']
216
+
217
+ Figure 4: Visualisations of (a) estimated light changes; (b) residual flow and estimated depth; (c) the effects of denoising on the training loss over time.
218
+
219
+ Motion Compensation. In Figure 4(b), we show several reference images and their residual flow and depth maps. In the second column, one can clearly see that our method is able to distinguish pixels on moving objects such as cars and pedestrians from static pixels. This effect can be observed for both daytime and night-time images, showcasing the generality of our approach through a single unified training pipeline. In Table 2, it can be seen that correcting the reprojection correspondences using the residual flow map we predict leads to a significant improvement in accuracy.
220
+
221
+ Image Denoising. Denoising the images while calculating the training loss should ideally reduce the ambiguity in establishing pixel correspondences between the images, giving a robust supervision signal for training our system and thereby achieving lower error and higher accuracy. This effect can be clearly seen in the training error plot shown in Figure 4(c), where we compare our baseline+NIT model with and without denoising. The denoising results in much more accurate depth maps, improving both the RMSE and accuracy metrics as shown in Table 2.
222
+
223
+ § 5 LIMITATIONS
224
+
225
+ Our method has a number of limitations. First, as is typical of monocular methods, it is only able to estimate depth up to scale. Second, like most stereo approaches (whether variable-baseline like ours, or fixed-baseline with a rigid stereo rig), it struggles to preserve the detail of distant parts of the scene because of limited parallax. Third, it also struggles to recover structural detail from very dark image regions (e.g. see Figure 1(b)). And finally, in common with most vision-based depth estimation methods, it does not consistently perform well for transparent surfaces like glass.
226
+
227
+ § 6 CONCLUSIONS
228
+
229
+ In this paper, we propose a self-supervised method to learn a single model to estimate depth maps from monocular day and nighttime RGB images. By compensating for the illumination changes that can occur from one frame to the next, we enable accurate nighttime depth estimation in nonuniform lighting conditions. Moreover, by predicting per-pixel residual flow and using it to correct the reprojection correspondences induced by the estimated ego-motion and depth, we improve our method's ability to cope with both moving objects in the scene and motion blur. Finally, by denoising the input images prior to calculating the photometric loss, we improve the loss's ability to provide a strong supervision signal, making the entire system more robust and accurate.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/7ZcePvChS7u/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,285 @@
1
+ # Human-Robot Commensality: Bite Timing Prediction for Robot-Assisted Feeding in Groups
2
+
3
+ Anonymous authors
4
+
5
+ Abstract: We develop data-driven models to predict when a robot should feed during social dining scenarios. Being able to eat independently with friends and family is considered one of the most memorable and important activities for people with mobility limitations. Robots can potentially help with this activity but robot-assisted feeding is a multi-faceted problem with challenges in bite acquisition, bite timing, and bite transfer. Bite timing in particular becomes uniquely challenging in social dining scenarios due to the possibility of interrupting a social human-robot group interaction during commensality. Our key insight is that bite timing strategies that take into account the delicate balance of social cues can lead to seamless interactions during robot-assisted feeding in a social dining scenario. We approach this problem by collecting a multimodal Human-Human Commensality Dataset (HHCD) containing 30 groups of three people eating together. We use this dataset to analyze human-human commensality behaviors and develop bite timing prediction models in social dining scenarios. We also transfer these models to human-robot commensality scenarios. Our user studies show that prediction improves when our algorithm uses multimodal social signaling cues between diners to model bite timing. The HHCD dataset, videos of user studies, and code will be publicly released after acceptance.
6
+
7
+ Keywords: Multimodal Learning, HRI, Assistive Robotics, Group Dynamics
8
+
9
+ ## 1 Introduction
10
+
11
+ Nearly 27% of people living in the United States have a disability, and close to 24 million people aged 18 years or older need assistance with activities of daily living (ADL) [1]. Key among these activities is feeding, which is both time-consuming for the caregiver, and challenging for the care recipient (patient) to accept socially [2]. Indeed, needing help with one or more ADLs is the most cited reason for moving to assisted or institutionalized living $\left\lbrack {3,4}\right\rbrack$ . Although there are several automated feeding systems on the market [5-13], they have lacked widespread acceptance. One of the key reasons is that all of them require manual triggering of bite timing by the user, which is challenging for users with cognitive disabilities and inconvenient in social settings. A key challenge for the realization of autonomous robotic feeding systems is therefore to infer proper bite timing [14].
12
+
13
+ While existing systems focus on solitary dining (e.g. [15-32]), commensality, the act of eating together, is often the practice of choice. People like to share meals with others. The social experience of a shared meal is an important part of the overall eating experience and current robot feeding systems are not designed with that experience in mind. Transferring the challenge of inferring appropriate bite timing to a social dining setting requires not only attuning to the user's eating behavior but also to the complex social dynamics of the group. For example, a robot should not attempt to feed a user who is actively engaged in conversation. Here we ask the seemingly simple question: How should an assistive feeding robot decide the right timing for feeding a user in ever-changing and dynamic social dining scenarios?
14
+
15
+ We developed an intelligent autonomous robot-assisted feeding system that uses multimodal sensing to feed people in dynamic social dining scenarios. We collected a novel audio-visual Human-Human Commensality Dataset (HHCD) capturing human social eating behaviors. Using this data we then trained multimodal machine learning models to predict bite timing in human-human commensality. We explored how our models trained on human-human commensality scenarios performed in a
16
+
17
+ ![01963f9c-b9e9-722e-9239-5453168ed8a0_1_341_204_1115_619_0.jpg](images/01963f9c-b9e9-722e-9239-5453168ed8a0_1_341_204_1115_619_0.jpg)
18
+
19
+ Figure 1: Our bite timing prediction workflow: (Left) Human-Human Commensality Dataset collection: We record audio and video of participants eating food in triads. (Middle) Our Social Nibbling NETwork (SoNNET) learns to predict whether a user intends to take a bite based on a 6-second window of social signals. (Right) We conduct a social robot-assisted feeding user study by deploying a variation of SoNNET on a robot. We refer to the User also as a Target user.
20
+
21
+ human-robot commensality setting and evaluated them in a user study. The overall workflow is shown in Fig. 1. Our results indicate that bite timing prediction improves when our model accounts for social signaling among diners, and such a model is preferred over a manual trigger and a fixed-interval trigger. Our main contributions include:
22
+
23
+ - A SOcial Nibbling NETwork (SoNNET) which captures the subtle inter-personal social dynamics in human-human and human-robot groups for predicting bite timing in social-dining scenarios.
24
+
25
+ - Methods that can successfully transfer bite timing strategies learned from human-human commensality cues to human-robot commensality situations, which we evaluate in a user study with a robot in 10 triadic human groups.
26
+
27
+ - A socially-aware robot-assisted feeding system that extends our capacity to feed people in solitary settings to groups of people sharing a meal.
28
+
29
+ - An analysis of various social and functional factors that affect human feeding behaviors during human-human commensality.
30
+
31
+ - A novel Human-Human Commensality Dataset (HHCD) containing multi-view RGBD video and directional audio recordings capturing 30 groups of three people sharing a meal.
32
+
33
+ ## 2 Human-Robot Commensality
34
+
35
+ Eating is a complex process that requires the sensitive coordination of a number of motor and sensory functions. Anyone who has fed another knows that feeding, particularly social feeding where a person is eating or being fed in a social setting, is a delicate dance of multimodal signaling (via gaze, facial expressions, gestures, and speech, to name a few). Research on commensality, the practice of eating together, has highlighted the importance of the social nature of eating for social communion, order, health, and well-being [33]. As a consequence, digital commensality has focused on understanding the role of technology in facilitating or inhibiting the more pleasurable social aspects of dining [34].
36
+
37
+ When a person relies on assisted feeding, meals require that patient and caregiver coordinate their behavior [35]. To achieve this subtle cooperation, the people involved must be able to initiate, perceive, and interpret each other's verbal and non-verbal behavior. The main responsibility for this cooperation lies with caregivers, whose experiences, educational background, and personal beliefs may influence the course of the mealtime [36]. Our goal in this work is to understand the rhythm and timing of this dance to enable an automated feeding assistant to be thoughtful of when it should feed the user in social dining settings. We introduce the concept of Human-Robot Commensality at the intersection of commensality and robot-assisted feeding in social group settings.
38
+
39
+ Our research is fueled by the key insight that bite timing strategies that take into account ever-changing social signals and group dynamics can lead to a seamless human-robot collaboration in social dining scenarios. Fueled by this insight, we believe a feeding device that takes the initiative and offers bites proactively during the meal at times when a bite is likely to be desired will create a more seamless dining experience than a device that requires the user to initiate bites. Herlant [37] designed an HMM to predict bite timing in dyadic robot-assisted feeding. However, her model only considered the social cues of the user. Bhattacharjee et al. [38] found users preferred less intrusive interfaces in a social dining scenario, specifically a web interface over a voice interface. Our work aims to build non-intrusive bite timing strategies by focusing on learning when to feed a user in triadic scenarios while using implicit social features from all diners.
40
+
41
+ Particularly, bite timing is important because presenting a bite to the diner earlier than expected is poorly tolerated: it can interrupt the conversation, or interrupt the diner before they have finished chewing the prior bite. Presenting a bite later than desired can instead cause frustration towards the robot and disrupt the natural flow of conversation during the meal. Parallels can be drawn to interruptibility research on finding the most appropriate time to probe a user. Researchers have found that people performed best on a task if interruptions were mediated rather than timed immediately or on scheduled intervals [39, 40], often based on modelling contextual and social factors [41-44].
42
+
43
+ A socially-aware robot-assisted feeding system should be designed such that if needed, the user should be able to communicate these intentions via multiple different modalities such as body language, gaze, or speech. These various modalities have been found to be effective in modelling social interactions [45-48]. Capturing these natural social interactions in computational models are likely crucial to provide accurate bite timing without distracting users from the social ambiance.
44
+
45
+ ## 3 Problem Formulation
46
+
47
+ The objective of the bite timing prediction problem in robot-assisted feeding with a single diner is to predict the timing of when this user will take a bite of food by capturing their signals $\mathbf{U}$ such as voice, body gestures, head movements or speaking status. We define the proper timing for when a robot should feed as when the user intends to take a bite of food. It takes input signals $\mathbf{U}\left( {{t}_{0} : t}\right)$ from time ${t}_{0}$ to time $t$ and learns a function $\mathcal{F}\left( \mathbf{U}\right)$ to predict a Boolean $y\left( {t + h}\right) = \mathcal{F}\left( {\mathbf{U}\left( {{t}_{0} : t}\right) }\right)$ , which indicates whether the user intends to take a bite in the time horizon $h$ and trigger a bite transfer at time $t + 1$ . When a person lifts their fork off the plate to eat, they intend to take a bite of food, where this time horizon $h$ is the time it takes to transfer the food to their mouth from their plate.
48
+
49
+ In this paper, we consider a social variant of the bite timing prediction problem where a user is interacting with two co-diners. Our goal is to predict the timing of a user to take a bite of food based on the social cues within the interaction. From an initial time ${t}_{0}$ to time $t$ , the user receives social signals $\mathbf{L}\left( {{t}_{0} : t}\right)$ and $\mathbf{R}\left( {{t}_{0} : t}\right)$ from their left and right conversational co-diners, respectively. Given these external social signals and the target user’s own history of signals $\mathbf{U}\left( {{t}_{0} : t}\right)$ , we aim to predict $y$ . We note that it may not always be possible to track the same set of features for a user and their co-diners. Therefore, for some time range $k = t - {t}_{0}$ and feature dimensions $n, m$ for the user and co-diners respectively, $\mathbf{U} \in {\mathbb{R}}^{k \times n}$ while $\mathbf{L},\mathbf{R} \in {\mathbb{R}}^{k \times m}$ , where $n$ does not necessarily equal $m$ . The function to learn is:
50
+
51
+ $$
52
+ y\left( {t + 1}\right) = \mathcal{F}\left( {\mathbf{U}\left( {{t}_{0} : t}\right) ,\mathbf{L}\left( {{t}_{0} : t}\right) ,\mathbf{R}\left( {{t}_{0} : t}\right) }\right)
53
+ $$
54
+
55
+ ## 4 Model: SOcial Nibbling NETwork (SoNNET)
56
+
57
+ We present the SOcial Nibbling NETwork (SoNNET) that predicts when a user has the intention to eat based on various social signals. We selected features to represent both human eating and social behavior: bite features, which include the number of bites taken so far and the time since the last bite of food $b \in {\mathbb{R}}^{2}$ , a diner’s gaze and head pose direction $d \in {\mathbb{R}}^{4}$ , binary speaking status $s \in \{ 0,1\}$ , and face and body keypoints $o \in {\mathbb{R}}^{168}$ from OpenPose [49]. We note that, in our case, the bite
58
+
59
+ ![01963f9c-b9e9-722e-9239-5453168ed8a0_3_340_209_1109_516_0.jpg](images/01963f9c-b9e9-722e-9239-5453168ed8a0_3_340_209_1109_516_0.jpg)
60
+
61
+ Figure 2: Triplet-SoNNET contains three interacting channels for features of a target user and two co-diners. Each channel concatenates the input of time, gaze, speaking and skeleton features from each single diner. Couplet-SoNNET eliminates all features from the target user by dropping the last channel; however, it continues to use the user's bite features. Batch normalization layers are not shown in the figure.
62
+
63
+ features $b$ are computed only for the user and not the co-diners, since we do not estimate in real-time whether a co-diner is taking a bite of food. Thus, for a time interval $k = t - {t}_{0}$ , these features are temporally stacked to construct the input signals $\mathbf{U} \in {\mathbb{R}}^{k \times {175}},\mathbf{L} \in {\mathbb{R}}^{k \times {173}},\mathbf{R} \in {\mathbb{R}}^{k \times {173}}$ for the user, left co-diner, and right co-diner, respectively.
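+
+ As a concrete illustration of the shapes involved (a sketch assuming a 15 fps feature stream, so a 6-second window gives $k = 90$ frames; the helper name is ours, not the paper's):
+
+ ```python
+ import numpy as np
+
+ def stack_window(gaze_head, speaking, keypoints, bite=None):
+     """Stack one diner's per-frame features over a k-frame window.
+     gaze_head: (k, 4), speaking: (k, 1), keypoints: (k, 168), bite: (k, 2) for the target user only."""
+     per_frame = [gaze_head, speaking, keypoints] if bite is None else [bite, gaze_head, speaking, keypoints]
+     return np.concatenate(per_frame, axis=1)
+
+ # For a 6 s window at 15 fps (k = 90 frames):
+ #   U = stack_window(gh_u, sp_u, kp_u, bite=b_u)   # (90, 175)
+ #   L = stack_window(gh_l, sp_l, kp_l)             # (90, 173)
+ #   R = stack_window(gh_r, sp_r, kp_r)             # (90, 173)
+ ```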
64
+
65
+ Recently, convolutional neural networks (CNNs) have demonstrated significant success for multichannel time series classification from various kinds of signals [50-52]. Wu et al. [44] proposed PazNet: a multi-channel deep convolutional neural network which is able to handle inputs of different dimensions. PazNet is designed to predict the interruptibility of individual drivers. However, information is not shared between its channels, and it lacks the ability to capture social interactions among multiple people.
66
+
67
+ We design the Social Nibbling NETwork (SoNNET), a new model architecture which follows a multi-channel pattern allowing multiple interconnected branches to interleave and fuse at different stages. We create input processing channels for each diner, then add interleaving tunnels between each convolutional module and adjacent branches. The information capturing visually-observable behaviors between the diners is allowed to flow between the frames and channels. We conjecture that our model will learn a socially-coherent structure, allowing the model to implicitly represent the diners in an embedding space. Therefore, each channel has the same structure but does not share the same weight parameters. To help capture informative features, we performed dimension-reduction after the interleaving components using max pooling layers and $1 \times 1$ convolutional layers. These per-diner channels are concatenated and then followed by 2 dense layers for classification, which decides whether the user intends to feed or not. For SoNNET, the range between $t$ and ${t}_{0}$ is six seconds. The social signals in this range are used to predict a user's bite intentions.
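+
+ The sketch below illustrates the interleaving idea on a single convolutional stage; the layer sizes, fusion scheme and pooling are simplified placeholders chosen for clarity, not the actual SoNNET configuration (see App. 8.2 for implementation details).
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class InterleavedBranches(nn.Module):
+     def __init__(self, dims=(175, 173, 173), hidden=64, window=90):
+         super().__init__()
+         # one branch per diner; the branches share a structure but not their weights
+         self.proj = nn.ModuleList([nn.Conv1d(d, hidden, kernel_size=3, padding=1) for d in dims])
+         self.block = nn.ModuleList([nn.Conv1d(hidden, hidden, kernel_size=3, padding=1) for _ in dims])
+         # interleaving "tunnels": fuse each branch with its two neighbours via 1x1 convolutions
+         self.fuse = nn.ModuleList([nn.Conv1d(3 * hidden, hidden, kernel_size=1) for _ in dims])
+         self.pool = nn.MaxPool1d(2)
+         self.head = nn.Sequential(nn.Flatten(),
+                                   nn.Linear(3 * hidden * (window // 2), 64), nn.ReLU(),
+                                   nn.Linear(64, 1))                     # feed / don't feed logit
+
+     def forward(self, inputs):                                          # inputs: [U, L, R], each (B, k, dim)
+         feats = [torch.relu(p(x.transpose(1, 2))) for p, x in zip(self.proj, inputs)]
+         feats = [torch.relu(b(f)) for b, f in zip(self.block, feats)]
+         n = len(feats)
+         feats = [fuse(torch.cat([feats[(i - 1) % n], feats[i], feats[(i + 1) % n]], dim=1))
+                  for i, fuse in enumerate(self.fuse)]                   # information flows between channels
+         feats = [self.pool(f) for f in feats]
+         return self.head(torch.cat(feats, dim=1))
+ ```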
68
+
69
+ Triplet-SoNNET. For modelling the bite-timing prediction of three users with no mobility limitations, we propose Triplet-SoNNET which uses social signals from the left and right co-diners $\mathbf{L},\mathbf{R}$ and signals from the user $\mathbf{U}$ . Depicted in Fig. 2, Triplet-SoNNET ensures that the features from other co-diners $\mathbf{L},\mathbf{R}$ interleave into the target user’s features $\mathbf{U}$ .
70
+
71
+ Couplet-SoNNET. Running Triplet-SoNNET in a robot-assisted feeding setting would introduce a distribution shift in the kinds of signals the target user produces. Our goal is to feed people with mobility limitations while they are engaged in social conversations. The features produced by someone self-feeding are inherently different from those of someone using a robot-assisted feeding system. In the case of body pose, a target user with mobility limitations would be largely still, which differs from the training data. Our Human-Human Commensality Dataset consists of healthy adult diners, so applying a trained Triplet-SoNNET model to robot-assisted feeding of a user with disabilities would be out-of-distribution. We therefore create Couplet-SoNNET, in which we ignore most social signals from the target user by removing the last channel of Triplet-SoNNET. The intention to feed becomes $y\left( {t + 1}\right) = \mathcal{F}\left( {{\mathbf{U}}_{b}\left( {{t}_{0} : t}\right) ,\mathbf{L}\left( {{t}_{0} : t}\right) ,\mathbf{R}\left( {{t}_{0} : t}\right) }\right)$, where ${\mathbf{U}}_{b} \in {\mathbb{R}}^{k \times 2}$ are the user's bite features for $k = t - {t}_{0}$. The user's bite features, namely the time since the last bite and the number of bites since the onset of the feeding activity, are the only signals from the target user.
72
+
73
+ ## 5 Human-Human Commensality Dataset (HHCD)
74
+
75
+ We introduce a novel Human-Human Commensality Dataset (HHCD) of three healthy adult participants eating in a social scenario. We used this dataset to develop models that predict a diner's intention to take a bite of food while taking into account subtle social cues. We deployed the trained models in a social robot-assisted feeding setting where one diner is fed by a robot.
76
+
77
+ Data Collection Setup. We recruited 90 people among our Institution-affiliated fully-vaccinated students, faculty, and staff to eat a meal in a triadic dining scenario. Each participant was 18+ years old and took part in the study only once. The study setup is illustrated in Fig. 1 (left). There are three cameras (mutually at ${120}^{ \circ }$ ) in the middle of the table, each capturing one participant, and a fourth camera capturing the whole scene. All four cameras are Intel RealSense Depth Cameras D455 [53]. The scene audio was captured by a microphone array ReSpeaker Mic Array v2.0 [54] placed in the middle of the table. The ReSpeaker microphone array has four microphones arranged at the corners of a square and estimates the direction of sound arrival.
78
+
79
+ Participants were free to bring any kind of food and any utensil with them. They could also bring a drink (some drank from a cup, others from a bottle or both, with or without a straw) and were provided with napkins. Before the study, each participant was asked to fill in a pre-study questionnaire about their demographic background, relationship to other participants, and social dining habits. The experimenter then asked them to eat their meals and have natural conversations. At this point, the experimenter started the recording and left the room. When all three participants had finished eating, or after 60 minutes had passed, whichever was earlier, the experimenter stopped the recording and asked participants to fill in a post-study questionnaire about their dining experience. The specific questions asked in both pre/post-study questionnaires can be found in App. 8.1.2. The study was approved by our Institution's IRB.
80
+
81
+ Data Annotation. We annotated each participant's video based on their interactions with food, drink, and napkins. In particular, we annotated food_entered, food_lifted, food_to_mouth, drink_entered, drink_lifted, drink_to_mouth, napkin_entered, napkin_lifted, napkin_to_mouth, and mouth_open events. We chose these events as they are key transition points during feeding. We spent 151 hours annotating and used the ELAN annotation tool [55]. We assigned the annotation value $\in \{$ fork, knife, spoon, chopsticks, hand $\}$ based on the utensil performing the food-to-mouth handover. While annotating, we also noted down per-participant food types and observations of interesting behaviors. All annotation types with detailed rules are provided in App. 8.1.3.
82
+
83
+ Data Statistics. There were 56 female and 34 male participants, and their ages ranged from 18 to 38 ($\mu = 22$, $\sigma = 3$) years. Session durations ranged from 21 to 55 ($\mu = 37$, $\sigma = 9$) minutes; 1 session took place at breakfast, 10 at lunch, and 19 at dinner time. For additional dataset statistics, see App. 8.1.4.
84
+
85
+ For a summary of all available data in the dataset and its detailed analysis, see App. 8.1. For the purposes of this work, we only consider bite features, speaking status, gaze and head pose, and body and face keypoints.
86
+
87
+ ## 6 Model Evaluation on Human-Human Commensality Dataset
88
+
89
+ In this section, we evaluate Triplet- and Couplet-SoNNET against other models on the HHCD. In particular, we compare against a regularized linear SVM trained with SGD to evaluate performance of linear classifiers. We also consider a Temporal Convolution Network (TCN) [56, 57], which uses causal convolutions and dilations to represent temporal data. TCNs have been found to perform better than LSTMs and GRUs on temporal anomaly detection [58] and robot food manipulation tasks [20], therefore they would provide a strong baseline to compare our models to. We also perform an ablation study to investigate the importance of various modalities. Implementation details about baseline models, SoNNET, and training procedure can be found in App. 8.2.
90
+
91
+ For training, we use 6811 food lifted annotations as positive training labels since they precede an actual bite of food and indicate an intention to eat. We use a time interval of $k = t - {t}_{0} = 6$ seconds because it takes roughly 6 seconds for the robot to move from its wait position to feeding the user. Since bite actions are sparsely distributed over time, we select 2486 6-second clips as negative samples that are in the middle of two food_lifted annotations. All reported models are trained with leave-one-session-out (LOSO) cross-validation to evaluate generalizability to new groups of people. Due to an issue with recording, we train over 29 sessions if speaking status features are used.
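+
+ Schematically, the positive and negative windows can be cut from the annotations with a rule like the one below (an illustrative sketch of the sampling described above, not the released preprocessing code):
+
+ ```python
+ def make_clips(food_lifted_times, window_s=6.0):
+     """food_lifted_times: sorted list of food_lifted timestamps (seconds) for one diner."""
+     positives, negatives = [], []
+     for i, t in enumerate(food_lifted_times):
+         positives.append((t - window_s, t))                         # window preceding the bite intention
+         if i + 1 < len(food_lifted_times):
+             mid = 0.5 * (t + food_lifted_times[i + 1])              # midpoint between two bites
+             negatives.append((mid - window_s / 2, mid + window_s / 2))
+     return positives, negatives
+ ```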
92
+
93
+ The user’s bite features $b \in {\mathbb{R}}^{2}$ (time since last bite and the number of bites eaten since the start) are indicators of hunger. To ensure this feature is not dominated by higher dimensional features, we scale the size of the input by $\gamma$ . This hyperparameter $\gamma$ scales $b \in {\mathbb{R}}^{2} \rightarrow b \in {\mathbb{R}}^{2\gamma }$ . We selected $\gamma = {100}$ after a grid search over the training set on the TCN and SoNNET models.
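+
+ Concretely, this amounts to repeating the two bite features $\gamma$ times along the feature dimension, for example:
+
+ ```python
+ import numpy as np
+
+ def scale_bite_features(b, gamma=100):
+     """b: (k, 2) bite features -> (k, 2 * gamma) by repetition."""
+     return np.tile(b, (1, gamma))
+ ```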
94
+
95
+ Evaluation Metrics. A high recall indicates that our model can reliably feed when it should. In contrast, a high precision indicates that a model tends to be stricter in deciding when to feed. Due to our dataset imbalance, the average accuracy across 29 sessions for a model that predicts it should always feed is 71.56%. This classifier achieves perfect recall, and relatively high precision, causing the model to have a high F1 score. It is clear that given this class imbalance, a high F1 score poorly represents the capabilities of this model. To evaluate our model effectively, we consider the normalized Matthews Correlation Coefficient (nMCC) in addition to F1 score, precision, recall, and accuracy. Unlike F1 score, nMCC considers the size of the majority and minority classes, and can only produce high scores if a classifier is able to make correct predictions for a majority of both the negative and positive classes [59]. A value of 0.5 indicates random prediction, while 0 is inverse prediction and 1 is perfect prediction.
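+
+ One normalisation consistent with the ranges quoted above maps the MCC from $[-1, 1]$ to $[0, 1]$, e.g.:
+
+ ```python
+ from sklearn.metrics import matthews_corrcoef
+
+ def nmcc(y_true, y_pred):
+     """Normalised MCC: 0 = fully inverted, 0.5 = chance level, 1 = perfect prediction."""
+     return (matthews_corrcoef(y_true, y_pred) + 1.0) / 2.0
+ ```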
96
+
97
+ Effects of Modality. We are interested in investigating features that are the most informative for designing a good bite timing predictor in social dining. We perform a feature ablation study on the Triplet-SoNNET model, as shown in Table 1. We selectively remove feature streams, such as body and face data from OpenPose, gaze and head features from RT-GENE, speaking status signals, and the user's bite features. We find that users' bite features such as the time since last bite and the number of bites are important, as accuracy drops drastically without them. Intuitively, we believe this feature is important because a user's bite features are a proxy for their level of hunger. We also see that without body and face features, F1 and recall slightly increase. This could be due to the fact these data streams are noisy; however, as indicated by the lower accuracy and nMCC when removing OpenPose features, these features are useful.
98
+
99
+ Effects of Model Type. Table 2 shows the outcomes of various model comparisons when trained using LOSO. We compare performance of Triplet-SoNNET against a linear SVM and TCN trained on all three diners. We call this TCN a Triplet-TCN. Triplet-TCN has all the diners' features concatenated per-timestep, and we compare this result to Triplet-SoNNET. We find that Triplet-SoNNet achieves higher accuracy and nMCC compared to all other models; however, it performs worse on recall and F1 score compared to Triplet-TCN. In our scenario, we want to ensure that the robot feeds when it should and does not feed when it should not. A bite prediction model that overfeeds or underfeeds is not ideal. A high nMCC balances the roles of recall and precision and better represents whether a classifier should or should not feed. Therefore, for our scenarios, Triplet-SoNNET is a more effective predictor of bite timing than other models trained on all three diners.
100
+
101
+ Effects of Social Scenario. We are interested in comparing the ability of models to learn social behaviors using only two co-diners' features rather than having full observability. We compare Couplet-SoNNET to a similarly-named Couplet-TCN trained on two co-diners' features and a user's bite features. As expected, Couplet-TCN and Couplet-SoNNET perform worse than their Triplet-counterparts, with Couplet-TCN being close to random prediction with an nMCC of 0.5539 while Couplet-SoNNET has an nMCC of 0.6648. We find that Couplet-SoNNET performs better than Couplet-TCN. This result reveals Couplet-SoNNET is able to understand social signals better than a predictor that always feeds. This implies that it is possible to predict the behavior of a user using only their co-diner information, which indicates that there is social coordination in human-human commensality. These findings also suggest that social signals were captured by the interleaving structure of the SoNNET models.
102
+
103
+ Table 1: Ablation study on different modalities from various data sources. We use average over LOSO cross-validation.
104
+
105
+ <table><tr><td>Method</td><td>Acc.</td><td>Prec.</td><td>Rec.</td><td>F1</td><td>nMCC</td></tr><tr><td>Triplet-SoNNET</td><td>0.820</td><td>0.861</td><td>0.871</td><td>0.862</td><td>0.772</td></tr><tr><td>- Speaking Status</td><td>0.816</td><td>0.864</td><td>0.863</td><td>0.856</td><td>0.771</td></tr><tr><td>- Gaze & Head Pose</td><td>0.815</td><td>0.863</td><td>0.863</td><td>0.856</td><td>0.769</td></tr><tr><td>- Bite Features</td><td>0.781</td><td>0.832</td><td>0.855</td><td>0.834</td><td>0.727</td></tr><tr><td>- Body & Face</td><td>0.820</td><td>0.854</td><td>0.886</td><td>0.865</td><td>0.771</td></tr></table>
106
+
107
+ Table 2: Model comparison on LOSO cross-validation over 29 sessions.
108
+
109
+ <table><tr><td>Method</td><td>Acc.</td><td>Prec.</td><td>Rec.</td><td>F1</td><td>nMCC</td></tr><tr><td>Always Feed</td><td>0.72</td><td>0.72</td><td>1</td><td>0.83</td><td>0.5</td></tr><tr><td>Linear SVM (SGD)</td><td>0.68</td><td>0.82</td><td>0.77</td><td>0.74</td><td>0.64</td></tr><tr><td>Triplet-TCN</td><td>0.82</td><td>0.82</td><td>0.96</td><td>0.88</td><td>0.72</td></tr><tr><td>Triplet-SoNNET</td><td>0.82</td><td>0.86</td><td>0.87</td><td>0.86</td><td>0.77</td></tr><tr><td>Couplet-TCN</td><td>0.73</td><td>0.73</td><td>0.98</td><td>0.83</td><td>0.55</td></tr><tr><td>Couplet-SoNNET</td><td>0.76</td><td>0.78</td><td>0.96</td><td>0.85</td><td>0.66</td></tr></table>
110
+
111
+ ## 7 Transferring from Human-Human to Human-Robot Commensality
112
+
113
+ Our objective is to develop a bite timing strategy for a robot that feeds a user in a social dining setting. We design a study where users evaluate the effect of different bite timing strategies on their overall social dining experience. To simulate robotic caregiving scenarios for people with upper-extremity mobility limitations, we instructed users to not move their upper body. This study was approved by our Institution's IRB.
114
+
115
+ Experimental Setup. We evaluate three bite timing strategies for triggering the robot to feed a user, also depicted in Fig. 3 (middle):
116
+
117
+ 1. Learned Timing. This social, fully autonomous bite timing strategy feeds based on our Couplet-SoNNET model's output. We sample this model every three seconds with the last six seconds of preprocessed features at a rate of 15 frames per second. This approach takes into account the social context. Since we want to evaluate the generalization performance, we train Couplet-SoNNET on ${80}\%$ of the HHCD data and use the remaining ${20}\%$ of HHCD data to select early-stopping criteria.
118
+
119
+ 2. Fixed-Interval Timing. This fully autonomous bite timing strategy feeds every 44.5 seconds, which is a scaled average time a robot should take to feed a user after it has picked up a food item. To derive this value, we first find the appropriate scaling factor between human motion from the HHCD and robot motion. We note the average time for a human from the food_entered transition to food-lifted transition is 9.9 seconds. The robot end-effector motion is not designed to match the human speed but rather to be perceived as safe and comfortable to a user being fed. We find the equivalent key transitions for the robot to be ${5x}$ slower than a human. Since we define the intention to take a bite as when the food is lifted, the robot should take 49.5 seconds to feed a user after picking up a food item. Given the robot takes roughly 5 seconds to move to its wait position after picking the food, the robot waits 44.5 seconds.
120
+
121
+ 3. Mouth-Open Timing. This partially autonomous bite timing strategy feeds only when the user prompts the robot by opening their mouth. The target user is prompted each time by the robot saying "When ready, look at me and open your mouth". This approach gives the user explicit control of when the robot should feed [38].
122
+
123
+ The robot user is seated on a wheelchair mounted with a Kinova Gen3 6-DoF arm [60], which is used to feed the participant (Fig. 3, left). For implementation details of the robot study, see App. 8.3.
124
+
125
+ Experimental Procedure. In this study, participants are seated in a similar setup as that used for HHCD data collection in Sec. 5. All participants were asked to bring their own food, and each group chose who would be fed by the robot. We recruited 30 participants over 10 sessions. There were 16 female and 14 male participants, and their ages ranged from 19-70 $\left( {\mu = {27},\sigma = 9}\right)$ years.
126
+
127
+ A single trial consists of bite acquisition, followed by one of the three bite timing strategies, then bite transfer. For bite acquisition, the robot alternates feeding the user cantaloupes and strawberries. We chose these fruits due to their high acquisition success rates [24]. We used the bite acquisition strategies and bite transfer strategies from [25, 26]. All participants take a survey after each trial, which administers a forced-choice question on the participants' preferences between the previous and current conditions. Each pair of comparisons between any two conditions occurs three times, leading to ten trials. The condition orderings are counterbalanced over ten trials. Additionally,
128
+
129
+ ![01963f9c-b9e9-722e-9239-5453168ed8a0_7_327_207_1124_447_0.jpg](images/01963f9c-b9e9-722e-9239-5453168ed8a0_7_327_207_1124_447_0.jpg)
130
+
131
+ Figure 3: Left: We use a 6-DoF Kinova Gen3 robotic arm [60] on a Rovi wheelchair [61]. Middle: User study conditions/bite timing strategies: Learned, Fixed-Interval, and Mouth-Open Timings. Top right: Preferences for bite timing strategies rated by users, two co-diners, and all three diners. Bottom right: Level of distraction by the robot perceived by users, two co-diners, and all three diners on a Likert scale 1-5 (agreement with "I felt distracted by the robot"), for each bite timing strategy. *, ** and *** denote statistically significant differences with $p < 0.05$, $p < 0.005$ and $p < 0.0005$ respectively.
132
+
133
+ we ask participants whether they felt the robot fed them too early, on-time, or too late. To avoid interruptions in social conversations due to the presence of a robot in human groups, we provide the participants with a list of questions (see App. 8.3.1), which they could optionally use to help get the conversation started at each trial, similarly to previous work [37]. The post-study questionnaire included questions after each trial or condition about bite timing appropriateness, distractions due to the robot, ability to have natural conversations, ability to feel comfortable around the robot, as well as system reliability and trust in the robot [62].
134
+
135
+ Results and Discussion. As shown in Fig. 3 (top right), users and co-diners preferred the Learned strategy for bite timing as compared to Fixed-Interval or Mouth-Open Timing. This confirms that our insight to incorporate social signals in model structure (SoNNET) improves bite timing prediction. These results using Couplet-SoNNET also imply that it is possible to predict the behavior of a user using only their co-diner information, which indicates that there is social coordination in human groups even in the presence of a robot. In Fig. 3 (bottom right) we further compared the level of distraction by the robot as perceived by participants. We performed Kruskal-Wallis H-tests and Tukey HSD post-hoc tests and found that Mouth-Open Timing distracts dining participants significantly more than Learned or Fixed-Interval Timing. We believe this is because the Mouth-Open strategy prompts the user using a voice interface, which can disrupt the rhythm of conversation. Even though the participants had a clear preference for the Learned strategy when given a forced-choice, when asked to individually rate the conditions using a 5-point Likert scale, interestingly we could not find any statistically significant differences between Mouth-Open Timing and Learned Timing. This is probably because the Mouth-Open Timing strategy provides full control of bite timing to the users themselves. Note, regardless of the conditions, the users found the system comfortable, reliable, and trustworthy. Detailed analysis is given in App. 8.3.
136
+
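+ For readers who want to reproduce this style of analysis, the short sketch below shows one way to run the omnibus Kruskal-Wallis H-test and Tukey HSD post-hoc comparisons on per-condition Likert ratings using SciPy and statsmodels; the rating values are placeholders, not the study's data, and the original analysis may have used different software.
+
+ ```python
+ # Minimal sketch of the omnibus + post-hoc analysis; the ratings below are placeholders.
+ import numpy as np
+ from scipy.stats import kruskal
+ from statsmodels.stats.multicomp import pairwise_tukeyhsd
+
+ # Hypothetical 1-5 Likert ratings of "I felt distracted by the robot" per condition.
+ ratings = {
+     "Learned":        [2, 1, 2, 3, 1, 2],
+     "Fixed-Interval": [2, 2, 1, 3, 2, 1],
+     "Mouth-Open":     [4, 3, 5, 4, 3, 4],
+ }
+
+ # Omnibus test: do the three conditions come from the same distribution?
+ h_stat, p_value = kruskal(*ratings.values())
+ print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_value:.4f}")
+
+ # Post-hoc pairwise comparisons (Tukey HSD) to locate which pairs differ.
+ scores = np.concatenate(list(ratings.values()))
+ groups = np.concatenate([[name] * len(vals) for name, vals in ratings.items()])
+ print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
+ ```
+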
137
+ Limitations. There is a risk that our results from human-robot user studies with healthy adults may not generalize to people with mobility limitations. People with mobility limitations may have different preferences and cognitive workloads associated with a robotic intervention. It remains to be investigated how our models perform with our target population. We also made multiple assumptions when transferring our results from human-human to human-robot commensality scenarios. During human-human commensality, the user was self-feeding, whereas in human-robot commensality the user was being fed. We also assumed that the addition of a robot into a human-human commensality scenario does not significantly change the social dynamics of the diners. Given these assumptions, it would be interesting to explore how our models perform when trained on similar human-robot commensality scenarios. Finally, it is an open question how these models would perform with groups of different cultures. This motivates further investigation into human-robot commensality, both from technical and societal perspectives.
138
+
139
+ References
140
+
141
+ [1] U. S. C. Bureau. Americans with disabilities: 2014. 2014.
142
+
143
+ [2] L. Perry. Assisted feeding. Journal of advanced nursing, 62(5):511-511, 2008.
144
+
145
+ [3] M. E. Mlinac and M. C. Feng. Assessment of activities of daily living, self-care, and independence. Archives of Clinical Neuropsychology, 31(6):506-516, 2016.
146
+
147
+ [4] Agis living. http://www.agis.com/Document/484/assisted-living-care-with-an-independent-flavor.
148
+
149
+
150
+
151
+ [5] Obi, 2018. https://meetobi.com/,[Online; Retrieved on 25th January, 2018].
152
+
153
+ [6] My spoon, 2018. https://www.secom.co.jp/english/myspoon/food.html,[Online; Retrieved on 25th January, 2018].
154
+
155
+ [7] Meal-mate, 2018. https://www.made2aid.co.uk/productprofile?productId=8& company=RBF%20Healthcare&product=Meal-Mate,[Online; Retrieved on 25th January, 2018].
156
+
157
+ [8] Meal buddy, 2018. https://www.performancehealth.com/meal-buddy-system,[Online; Retrieved on 25th January, 2018].
158
+
159
+
160
+
161
+ [9] Winsford feeder, 2018. https://www.youtube.com/watch?v=KZRFj1UZ1-c,[Online; Retrieved on 15th February, 2018].
162
+
163
+ [10] Bestic, 2019. https://www.camanio.com/us/products/bestic/,[Online; Retrieved on 18th April, 2019].
164
+
165
+ [11] The mealtime partner dining system, 2018. http://mealtimepartners.com/,[Online; Retrieved on 15th February, 2018].
166
+
167
+ [12] Neater eater robot, 2019. http://www.neater.co.uk/neater-eater-2-2/,[Online; Retrieved on 18th April, 2019].
168
+
169
+ [13] Beeson automaddak feeder, 2019. https://abledata.acl.gov/product/beeson-automaddak-feeder-model-h74501,[Online; Retrieved on 18th April, 2019].
170
+
171
+
172
+
173
+ [14] T. Bhattacharjee, G. Lee, H. Song, and S. S. Srinivasa. Towards robotic feeding: Role of haptics in fork-based food manipulation. IEEE Robotics and Automation Letters, 2019.
174
+
175
+ [15] A. Candeias, T. Rhodes, M. Marques, M. Veloso, et al. Vision augmented robot feeding. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018.
176
+
177
+ [16] E. K. Gordon, S. Roychowdhury, T. Bhattacharjee, K. Jamieson, and S. S. Srinivasa. Leveraging post hoc context for faster learning in bandit settings with applications in robot-assisted feeding. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 10528-10535. IEEE, 2021.
178
+
179
+ [17] H. Higa, K. Kurisu, and H. Uehara. A vision-based assistive robotic arm for people with severe disabilities. Transactions on Machine Learning and Artificial Intelligence, 2(4):12-23, 2014.
180
+
181
+ [18] T. Bhattacharjee, M. E. Cabrera, A. Caspi, M. Cakmak, and S. S. Srinivasa. A community-centered design framework for robot-assisted feeding systems. In The 21st international ACM SIGACCESS conference on computers and accessibility, pages 482-494, 2019.
182
+
183
+ [19] A. Jardón, C. A. Monje, and C. Balaguer. Functional evaluation of asibot: A new approach on portable robotic system for disabled people. Applied Bionics and Biomechanics, 9(1):85-97, 2012.
184
+
185
+ [20] T. Bhattacharjee, G. Lee, H. Song, and S. S. Srinivasa. Towards robotic feeding: Role of haptics in fork-based food manipulation. IEEE Robotics and Automation Letters, 4(2): 1485-1492, 2019.
186
+
187
+ [21] R. M. Alqasemi, E. J. McCaffrey, K. D. Edwards, and R. V. Dubey. Wheelchair-mounted robotic arms: Analysis, evaluation and development. In Proceedings, 2005 IEEE/ASME International Conference on Advanced Intelligent Mechatronics., pages 1164-1169. IEEE, 2005.
188
+
189
+ [22] Z. Bien, M.-J. Chung, P.-H. Chang, D.-S. Kwon, D.-J. Kim, J.-S. Han, J.-H. Kim, D.-H. Kim, H.-S. Park, S.-H. Kang, et al. Integration of a rehabilitation robotic system (kares ii) with human-friendly man-machine interaction units. Autonomous robots, 16(2):165-191, 2004.
190
+
191
+ [23] D. Park, Y. K. Kim, Z. M. Erickson, and C. C. Kemp. Towards assistive feeding with a general-purpose mobile manipulator. arXiv preprint arXiv:1605.07996, 2016.
192
+
193
+ [24] R. Feng, Y. Kim, G. Lee, E. K. Gordon, M. Schmittle, S. Kumar, T. Bhattacharjee, and S. S. Srinivasa. Robot-assisted feeding: Generalizing skewering strategies across food items on a plate. In The International Symposium of Robotics Research, pages 427-442. Springer, 2019.
194
+
195
+ [25] D. Gallenberger, T. Bhattacharjee, Y. Kim, and S. S. Srinivasa. Transfer depends on acquisition: Analyzing manipulation strategies for robotic feeding. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 267-276. IEEE, 2019.
196
+
197
+ [26] S. Belkhale, E. K. Gordon, Y. Chen, S. Srinivasa, T. Bhattacharjee, and D. Sadigh. Balancing efficiency and comfort in robot-assisted bite transfer. arXiv preprint arXiv:2111.11401, 2021.
198
+
199
+ [27] G. Canal, G. Alenyà, and C. Torras. Personalization framework for adaptive robotic feeding assistance. In International conference on social robotics, pages 22-31. Springer, 2016.
200
+
201
+ [28] T. Rhodes and M. Veloso. Robot-driven trajectory improvement for feeding tasks. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2991-2996. IEEE, 2018.
202
+
203
+ [29] I. Naotunna, C. J. Perera, C. Sandaruwan, R. Gopura, and T. D. Lalitharatne. Meal assistance robots: A review on current status, challenges and future directions. In 2015 IEEE/SICE International Symposium on System Integration (SII), pages 211-216. IEEE, 2015.
204
+
205
+ [30] E. K. Gordon, X. Meng, M. Barnes, T. Bhattacharjee, and S. S. Srinivasa. Learning from failures in robot-assisted feeding: Using online learning to develop manipulation strategies for bite acquisition. 2019.
206
+
207
+ [31] D. Park, Y. Hoshi, and C. C. Kemp. A multimodal anomaly detector for robot-assisted feeding using an lstm-based variational autoencoder. IEEE Robotics and Automation Letters, 3(3):1544-1551, 2018.
208
+
209
+ [32] D. Park, H. Kim, Y. Hoshi, Z. Erickson, A. Kapusta, and C. C. Kemp. A multimodal execution monitor with anomaly classification for robot-assisted feeding. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5406-5413. IEEE, 2017.
210
+
211
+ [33] H. Jönsson, M. Michaud, and N. Neuman. What is commensality? a critical discussion of an expanding research field. International Journal of Environmental Research and Public Health, 18(12):6235, 2021.
212
+
213
+ [34] C. Spence, M. Mancini, and G. Huisman. Digital commensality: Eating and drinking in the company of technology. Frontiers in psychology, 10:2252, 2019.
214
+
215
+
216
+
217
+ [35] E. Athlin, A. Norberg, and K. Asplund. Caregivers' perceptions and interpretations of severely demented patients during feeding in a task assignment system. Scandinavian Journal of Caring Sciences, 4(4):147-156, 1990.
218
+
219
+ [36] E. Athlin and A. Norberg. Interaction between the severely demented patient and his caregiver during feeding. Scandinavian Journal of Caring Sciences, 1(3-4):117-123, 1987.
220
+
221
+ [37] L. V. Herlant. Algorithms, implementation, and studies on eating with a shared control robot arm. PhD Dissertation, 2018. URL http://www.cs.cmu.edu/afs/cs/user/lcv/www/herlant-thesis.pdf.
222
+
223
+
224
+
225
+ [38] T. Bhattacharjee, E. K. Gordon, R. Scalise, M. E. Cabrera, A. Caspi, M. Cakmak, and S. S. Srinivasa. Is more autonomy always better? exploring preferences of users with mobility impairments in robot-assisted feeding. In 2020 15th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 181-190. IEEE, 2020.
226
+
227
+ [39] D. C. McFarlane. Comparison of four primary methods for coordinating the interruption of people in human-computer interaction. Human-Computer Interaction, 17(1):63-139, 2002.
228
+
229
+ [40] M. Czerwinski, E. Cutrell, and E. Horvitz. Instant messaging: Effects of relevance and timing. In People and computers XIV: Proceedings of HCI, volume 2, pages 71-76, 2000.
230
+
231
+ [41] M. Pielot, B. Cardoso, K. Katevas, J. Serrà, A. Matic, and N. Oliver. Beyond interruptibility: Predicting opportune moments to engage mobile phone users. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 1(3), Sept. 2017. doi:10.1145/3130956. URL https://doi.org/10.1145/3130956.
232
+
233
+ [42] Y. Takemae, T. Ohno, I. Yoda, and S. Ozawa. Estimating interruptibility in the home for remote communication based on audio-visual tracking. IPSJ Digital Courier, 3:125-133, 2007.
234
+
235
+ [43] S. Banerjee, A. Silva, K. Feigh, and S. Chernova. Effects of interruptibility-aware robot behavior. arXiv preprint arXiv:1804.06383, 2018.
236
+
237
+ [44] T. Wu, N. Martelaro, S. Stent, J. Ortiz, and W. Ju. Learning when agents can talk to drivers using the inagt dataset and multisensor fusion. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(3):1-28, 2021.
238
+
239
+ [45] J. Wang, H. Xu, M. Narasimhan, and X. Wang. Multi-person 3d motion prediction with multi-range transformers. Advances in Neural Information Processing Systems, 34, 2021.
240
+
241
+ [46] P. Müller, M. X. Huang, X. Zhang, and A. Bulling. Robust eye contact detection in natural multi-person interactions using gaze and speaking behaviour. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, pages 1-10, 2018.
242
+
243
+ [47] H. Park, E. Jain, and Y. Sheikh. 3d social saliency from head-mounted cameras. Advances in Neural Information Processing Systems, 25, 2012.
244
+
245
+ [48] H. Joo, T. Simon, M. Cikara, and Y. Sheikh. Towards social artificial intelligence: Nonverbal social signal prediction in a triadic interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10873-10883, 2019.
246
+
247
+ [49] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh. Openpose: realtime multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1812.08008, 2018.
248
+
249
+ [50] B. Zhao, H. Lu, S. Chen, J. Liu, and D. Wu. Convolutional neural networks for time series classification. Journal of Systems Engineering and Electronics, 28(1):162-169, 2017.
250
+
251
+ [51] J. Yang, M. N. Nguyen, P. P. San, X. Li, and S. Krishnaswamy. Deep convolutional neural networks on multichannel time series for human activity recognition. In Ijcai, volume 15, pages 3995-4001. Buenos Aires, Argentina, 2015.
252
+
253
+ [52] C.-L. Liu, W.-H. Hsaio, and Y.-C. Tu. Time series classification with multivariate convolutional neural network. IEEE Transactions on Industrial Electronics, 66(6): 4788-4797, 2018.
254
+
255
+ [53] RealSense. Introducing the Intel® RealSense™ depth camera D455, 2020. URL https://www.intelrealsense.com/depth-camera-d455/.
256
+
257
+ [54] B. Zuo. ReSpeaker Mic Array v2.0, 2018. URL https://wiki.seeedstudio.com/ReSpeaker_Mic_Array_v2.0/.
258
+
259
+
260
+
261
+ [55] P. Wittenburg, H. Brugman, A. Russel, A. Klassmann, and H. Sloetjes. Elan: A professional framework for multimodality research. In 5th International Conference on Language Resources and Evaluation (LREC 2006), pages 1556-1559, 2006.
262
+
263
+ [56] C. Lea, R. Vidal, A. Reiter, and G. D. Hager. Temporal convolutional networks: A unified approach to action segmentation. In European conference on computer vision, pages 47-54. Springer, 2016.
264
+
265
+ [57] C. Lea, M. D. Flynn, R. Vidal, A. Reiter, and G. D. Hager. Temporal convolutional networks for action segmentation and detection. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 156-165, 2017.
266
+
267
+ [58] Y. He and J. Zhao. Temporal convolutional networks for anomaly detection in time series. In Journal of Physics: Conference Series, volume 1213, page 042050. IOP Publishing, 2019.
268
+
269
+ [59] D. Chicco and G. Jurman. The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation. BMC genomics, 21(1):1-13, 2020.
270
+
271
+ [60] Kinova gen3. https://www.kinovarobotics.com/product/gen3-robots, 2022. Accessed: 2022-06-14.
272
+
273
+ [61] Rovi wheelchair. https://www.rovimobility.com/, 2022. Accessed: 2022-06-14.
274
+
275
+ [62] J.-Y. Jian, A. M. Bisantz, and C. G. Drury. Foundations for an empirically determined scale of trust in automated systems. International journal of cognitive ergonomics, 4(1):53-71, 2000.
276
+
277
+ [63] T. Fischer, H. J. Chang, and Y. Demiris. Rt-gene: Real-time eye gaze estimation in natural environments. In Proceedings of the European Conference on Computer Vision (ECCV), pages 334-352, 2018.
278
+
279
+ [64] J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017.
280
+
281
+ [65] T. Fischer and Y. Demiris. Markerless perspective taking for humanoid robots in unconstrained environments. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 3309-3316. IEEE, 2016.
282
+
283
+ [66] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-9, 2015.
284
+
285
+ [67] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/7ZcePvChS7u/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,178 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § HUMAN-ROBOT COMMENSALITY: BITE TIMING PREDICTION FOR ROBOT-ASSISTED FEEDING IN GROUPS
2
+
3
+ Anonymous authors
4
+
5
+ Abstract: We develop data-driven models to predict when a robot should feed during social dining scenarios. Being able to eat independently with friends and family is considered one of the most memorable and important activities for people with mobility limitations. Robots can potentially help with this activity but robot-assisted feeding is a multi-faceted problem with challenges in bite acquisition, bite timing, and bite transfer. Bite timing in particular becomes uniquely challenging in social dining scenarios due to the possibility of interrupting a social human-robot group interaction during commensality. Our key insight is that bite timing strategies that take into account the delicate balance of social cues can lead to seamless interactions during robot-assisted feeding in a social dining scenario. We approach this problem by collecting a multimodal Human-Human Commensality Dataset (HHCD) containing 30 groups of three people eating together. We use this dataset to analyze human-human commensality behaviors and develop bite timing prediction models in social dining scenarios. We also transfer these models to human-robot commensality scenarios. Our user studies show that prediction improves when our algorithm uses multimodal social signaling cues between diners to model bite timing. The HHCD dataset, videos of user studies, and code will be publicly released after acceptance.
6
+
7
+ Keywords: Multimodal Learning, HRI, Assistive Robotics, Group Dynamics
8
+
9
+ § 1 INTRODUCTION
10
+
11
+ Nearly 27% of people living in the United States have a disability, and close to 24 million people aged 18 years or older need assistance with activities of daily living (ADL) [1]. Key among these activities is feeding, which is both time-consuming for the caregiver, and challenging for the care recipient (patient) to accept socially [2]. Indeed, needing help with one or more ADLs is the most cited reason for moving to assisted or institutionalized living $\left\lbrack {3,4}\right\rbrack$ . Although there are several automated feeding systems on the market [5-13], they have lacked widespread acceptance. One of the key reasons is that all of them require manual triggering of bite timing by the user, which is challenging for users with cognitive disabilities and inconvenient in social settings. A key challenge for the realization of autonomous robotic feeding systems is therefore to infer proper bite timing [14].
12
+
13
+ While existing systems focus on solitary dining (e.g. [15-32]), commensality, the act of eating together, is often the practice of choice. People like to share meals with others. The social experience of a shared meal is an important part of the overall eating experience and current robot feeding systems are not designed with that experience in mind. Transferring the challenge of inferring appropriate bite timing to a social dining setting requires not only attuning to the user's eating behavior but also to the complex social dynamics of the group. For example, a robot should not attempt to feed a user who is actively engaged in conversation. Here we ask the seemingly simple question: How should an assistive feeding robot decide the right timing for feeding a user in ever-changing and dynamic social dining scenarios?
14
+
15
+ We developed an intelligent autonomous robot-assisted feeding system that uses multimodal sensing to feed people in dynamic social dining scenarios. We collected a novel audio-visual Human-Human Commensality Dataset (HHCD) capturing human social eating behaviors. Using this data we then trained multimodal machine learning models to predict bite timing in human-human commensality. We explored how our models trained on human-human commensality scenarios performed in a
16
+
17
+ < g r a p h i c s >
18
+
19
+ Figure 1: Our bite timing prediction workflow: (Left) Human-Human Commensality Dataset collection: We record audio and video of participants eating food in triads. (Middle) Our Social Nibbling NETwork (SoNNET) learns to predict whether a user intends to take a bite based on a 6-second window of social signals. (Right) We conduct a social robot-assisted feeding user study by deploying a variation of SoNNET on a robot. We refer to the User also as a Target user.
20
+
21
+ human-robot commensality setting and evaluated them in a user study. The overall workflow is shown in Fig. 1. Our results indicate that bite timing prediction improves when our model accounts for social signaling among diners, and such a model is preferred over a manual trigger and a fixed-interval trigger. Our main contributions include:
22
+
23
+ * A SOcial Nibbling NETwork (SoNNET) which captures the subtle inter-personal social dynamics in human-human and human-robot groups for predicting bite timing in social-dining scenarios.
24
+
25
+ * Methods that can successfully transfer bite timing strategies learned from human-human commensality cues to human-robot commensality situations, which we evaluate in a user study with a robot in 10 triadic human groups.
26
+
27
+ * A socially-aware robot-assisted feeding system that extends our capacity to feed people in solitary settings to groups of people sharing a meal.
28
+
29
+ * An analysis of various social and functional factors that affect human feeding behaviors during human-human commensality.
30
+
31
+ * A novel Human-Human Commensality Dataset (HHCD) containing multi-view RGBD video and directional audio recordings capturing 30 groups of three people sharing a meal.
32
+
33
+ § 2 HUMAN-ROBOT COMMENSALITY
34
+
35
+ Eating is a complex process that requires the sensitive coordination of a number of motor and sensory functions. Anyone who has fed another knows that feeding, particularly social feeding where a person is eating or being fed in a social setting, is a delicate dance of multimodal signaling (via gaze, facial expressions, gestures, and speech, to name a few). Research on commensality, the practice of eating together, has highlighted the importance of the social nature of eating for social communion, order, health, and well-being [33]. As a consequence, digital commensality has focused on understanding the role of technology in facilitating or inhibiting the more pleasurable social aspects of dining [34].
36
+
37
+ When a person relies on assisted feeding, meals require that patient and caregiver coordinate their behavior [35]. To achieve this subtle cooperation, the people involved must be able to initiate, perceive, and interpret each other's verbal and non-verbal behavior. The main responsibility for this cooperation lies with caregivers, whose experiences, educational background, and personal beliefs may influence the course of the mealtime [36]. Our goal in this work is to understand the rhythm and timing of this dance to enable an automated feeding assistant to be thoughtful of when it should feed the user in social dining settings. We introduce the concept of Human-Robot Commensality at the intersection of commensality and robot-assisted feeding in social group settings.
38
+
39
+ Our research is fueled by the key insight that bite timing strategies that take into account ever-changing social signals and group dynamics can lead to seamless human-robot collaboration in social dining scenarios. Based on this insight, we believe a feeding device that takes the initiative and offers bites proactively during the meal, at times when a bite is likely to be desired, will create a more seamless dining experience than a device that requires the user to initiate bites. Herlant [37] designed an HMM to predict bite timing in dyadic robot-assisted feeding. However, her model only considered the social cues of the user. Bhattacharjee et al. [38] found users preferred less intrusive interfaces in a social dining scenario, specifically a web interface over a voice interface. Our work aims to build non-intrusive bite timing strategies by focusing on learning when to feed a user in triadic scenarios while using implicit social features from all diners.
40
+
41
+ Particularly, bite timing is important because the consequences of presenting a bite to the diner earlier than expected is poorly tolerated. This can include an interruption to conversation or to finishing chewing the prior bite. The consequences of presenting a bite later than desired can include frustration towards the robot and disruption of the natural flow of conversation during the meal. Parallels can be drawn to interruptibility research on finding the most appropriate timing to probe a user. Researchers have found that people performed best on a task if interruptions were mediated rather than timed immediately or on scheduled intervals $\left\lbrack {{39},{40}}\right\rbrack$ , often mediated based on modelling contextual and social factors [41-44].
42
+
43
+ A socially-aware robot-assisted feeding system should be designed such that if needed, the user should be able to communicate these intentions via multiple different modalities such as body language, gaze, or speech. These various modalities have been found to be effective in modelling social interactions [45-48]. Capturing these natural social interactions in computational models are likely crucial to provide accurate bite timing without distracting users from the social ambiance.
44
+
45
+ § 3 PROBLEM FORMULATION
46
+
47
+ The objective of the bite timing prediction problem in robot-assisted feeding with a single diner is to predict the timing of when this user will take a bite of food by capturing their signals $\mathbf{U}$ such as voice, body gestures, head movements or speaking status. We define the proper timing for when a robot should feed as when the user intends to take a bite of food. It takes input signals $\mathbf{U}\left( {{t}_{0} : t}\right)$ from time ${t}_{0}$ to time $t$ and learns a function $\mathcal{F}\left( \mathbf{U}\right)$ to predict a Boolean $y\left( {t + h}\right) = \mathcal{F}\left( {\mathbf{U}\left( {{t}_{0} : t}\right) }\right)$ , which indicates whether the user intends to take a bite in the time horizon $h$ and trigger a bite transfer at time $t + 1$ . When a person lifts their fork off the plate to eat, they intend to take a bite of food, where this time horizon $h$ is the time it takes to transfer the food to their mouth from their plate.
48
+
49
+ In this paper, we consider a social variant of the bite timing prediction problem where a user is interacting with two co-diners. Our goal is to predict the timing of a user to take a bite of food based on the social cues within the interaction. From an initial time ${t}_{0}$ to time $t$ , the user receives social signals $\mathbf{L}\left( {{t}_{0} : t}\right)$ and $\mathbf{R}\left( {{t}_{0} : t}\right)$ from their left and right conversational co-diners, respectively. Given these external social signals and the target user’s own history of signals $\mathbf{U}\left( {{t}_{0} : t}\right)$ , we aim to predict $y$ . We note that it may not always be possible to track the same set of features for a user and their co-diners. Therefore, for some time range $k = t - {t}_{0}$ and feature dimensions $n,m$ for the user and co-diners respectively, $\mathbf{U} \in {\mathbb{R}}^{k \times n}$ while $\mathbf{L},\mathbf{R} \in {\mathbb{R}}^{k \times m}$ , where $n$ does not necessarily equal $m$ . The function to learn is:
50
+
51
+ $$
52
+ y\left( {t + 1}\right) = \mathcal{F}\left( {\mathbf{U}\left( {{t}_{0} : t}\right) ,\mathbf{L}\left( {{t}_{0} : t}\right) ,\mathbf{R}\left( {{t}_{0} : t}\right) }\right)
53
+ $$
54
+
55
+ § 4 MODEL: SOCIAL NIBBLING NETWORK (SONNET)
56
+
57
+ We present the SOcial Nibbling NETwork (SoNNET) that predicts when a user has the intention to eat based on various social signals. We selected features to represent both human eating and social behavior: bite features, which include the number of bites taken so far and the time since the last bite of food $b \in {\mathbb{R}}^{2}$ , a diner’s gaze and head pose direction $d \in {\mathbb{R}}^{4}$ , binary speaking status $s \in \{ 0,1\}$ , and face and body keypoints $o \in {\mathbb{R}}^{168}$ from OpenPose [49]. We note that, in our case, the bite
58
+
59
+ < g r a p h i c s >
60
+
61
+ Figure 2: Triplet-SoNNET contains three interacting channels for features of a target user and two co-diners. Each channel concatenates the input of time, gaze, speaking and skeleton features from each single diner. Couplet-SoNNET eliminates all features from the target user by dropping the last channel; however, it continues to use the user's bite features. Batch normalization layers are not shown in the figure.
62
+
63
+ features $b$ are computed only for the user and not the co-diners, since we do not estimate in real-time whether a co-diner is taking a bite of food. Thus, for a time interval $k = t - {t}_{0}$ , these features are temporally stacked to construct the input signals $\mathbf{U} \in {\mathbb{R}}^{k \times {175}},\mathbf{L} \in {\mathbb{R}}^{k \times {173}},\mathbf{R} \in {\mathbb{R}}^{k \times {173}}$ for the user, left co-diner, and right co-diner, respectively.
64
+
65
+ Recently, convolutional neural networks (CNNs) have demonstrated significant success for multichannel time series classification from various kinds of signals [50-52]. Wu et al. [44] proposed PazNet: a multi-channel deep convolutional neural network which is able to handle inputs of different dimensions. PazNet is designed to predict the interruptibility of individual drivers. However, the information of different channels is not shared, and it lacks the ability to capture social interactions among multiple people.
66
+
67
+ We design the Social Nibbling NETwork (SoNNET), a new model architecture which follows a multi-channel pattern allowing multiple interconnected branches to interleave and fuse at different stages. We create input processing channels for each diner, then add interleaving tunnels between each convolutional module and adjacent branches. The information capturing visually-observable behaviors between the diners is allowed to flow between the frames and channels. We conjecture that our model will learn a socially-coherent structure, allowing the model to implicitly represent the diners in an embedding space. Therefore, each channel has the same structure but does not share the same weight parameters. To help capture informative features, we performed dimension-reduction after the interleaving components using max pooling layers and $1 \times 1$ convolutional layers. These per-diner channels are concatenated and then followed by 2 dense layers for classification, which decides whether the user intends to feed or not. For SoNNET, the range between $t$ and ${t}_{0}$ is six seconds. The social signals in this range are used to predict a user's bite intentions.
68
+
69
+ Triplet-SoNNET. For modelling the bite-timing prediction of three users with no mobility limitations, we propose Triplet-SoNNET which uses social signals from the left and right co-diners $\mathbf{L},\mathbf{R}$ and signals from the user $\mathbf{U}$ . Depicted in Fig. 2, Triplet-SoNNET ensures that the features from other co-diners $\mathbf{L},\mathbf{R}$ interleave into the target user’s features $\mathbf{U}$ .
70
+
71
+ Couplet-SoNNET. If we ran Triplet-SoNNET in a robot-assisted feeding setting, there would be a distribution shift in the kinds of signals a target user outputs. Our goal is to feed people with mobility limitations while they are engaged in social conversations. The features from someone self-feeding are inherently different from those of someone using a robot-assisted feeding system. In the case of body pose, a target user with mobility limitations would be largely still, which is different from the training data. Our Human-Human Commensality Dataset consists of healthy adult diners, thus applying a trained Triplet-SoNNET model to robot-assisted feeding of a user with disabilities would be out-of-distribution. We create Couplet-SoNNET, where we ignore most social signals from the target user by removing the last channel in Triplet-SoNNET. Therefore, the intention to feed is $y\left( {t + 1}\right) = \mathcal{F}\left( {{\mathbf{U}}_{b}\left( {{t}_{0} : t}\right) ,\mathbf{L}\left( {{t}_{0} : t}\right) ,\mathbf{R}\left( {{t}_{0} : t}\right) }\right)$, where ${\mathbf{U}}_{b} \in {\mathbb{R}}^{k \times 2}$ are the user's bite features for $k = t - {t}_{0}$. The user's bite features, such as the time since the last bite and the number of bites since the onset of the feeding activity, are the only social signals from the target user.
72
+
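+ To make the multi-channel and interleaving design described above concrete, the following is a minimal PyTorch sketch of a network in this spirit, assuming a window of 90 frames (6 s at 15 fps), 175-d user features, and 173-d co-diner features; the layer sizes, the mean-based mixing used as the "interleaving" step, and the final head are illustrative assumptions rather than the exact published architecture.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SoNNETSketch(nn.Module):
+     """Illustrative multi-channel CNN: one Conv1d branch per diner plus a simple
+     cross-branch mixing step; a Couplet-style variant would drop the user branch
+     and keep only the user's bite features."""
+     def __init__(self, feat_dims=(175, 173, 173), hidden=64):
+         super().__init__()
+         self.branches = nn.ModuleList(
+             nn.Sequential(nn.Conv1d(d, hidden, kernel_size=3, padding=1),
+                           nn.BatchNorm1d(hidden), nn.ReLU()) for d in feat_dims)
+         self.reducers = nn.ModuleList(
+             nn.Sequential(nn.Conv1d(hidden, hidden, kernel_size=1),
+                           nn.ReLU(), nn.AdaptiveMaxPool1d(1)) for _ in feat_dims)
+         self.head = nn.Sequential(nn.Linear(hidden * len(feat_dims), 64),
+                                   nn.ReLU(), nn.Linear(64, 1))
+
+     def forward(self, diners):
+         # diners: list of (batch, frames, feat_dim) tensors for user, left, right.
+         feats = [b(x.transpose(1, 2)) for b, x in zip(self.branches, diners)]
+         # Assumed "interleaving": each branch is mixed with the mean of the other branches.
+         mixed = [f + sum(o for j, o in enumerate(feats) if j != i) / (len(feats) - 1)
+                  for i, f in enumerate(feats)]
+         pooled = [r(m).squeeze(-1) for r, m in zip(self.reducers, mixed)]
+         return self.head(torch.cat(pooled, dim=1))  # logit: intends to take a bite or not
+
+ user, left, right = torch.randn(8, 90, 175), torch.randn(8, 90, 173), torch.randn(8, 90, 173)
+ logits = SoNNETSketch()([user, left, right])  # shape (8, 1)
+ ```
+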
73
+ § 5 HUMAN-HUMAN COMMENSALITY DATASET (HHCD)
74
+
75
+ We introduce a novel Human-Human Commensality Dataset (HHCD) of three healthy adult participants eating in a social scenario. We used this dataset to develop models that predict a diner's intention to take a bite of food while taking into account subtle social cues. We deployed the trained models in a social robot-assisted feeding setting where one diner is fed by a robot.
76
+
77
+ Data Collection Setup. We recruited 90 people among our Institution-affiliated fully-vaccinated students, faculty, and staff to eat a meal in a triadic dining scenario. Each participant was 18+ years old and took part in the study only once. The study setup is illustrated in Fig. 1 (left). There are three cameras (mutually at ${120}^{ \circ }$ ) in the middle of the table, each capturing one participant, and a fourth camera capturing the whole scene. All four cameras are Intel RealSense Depth Cameras D455 [53]. The scene audio was captured by a microphone array ReSpeaker Mic Array v2.0 [54] placed in the middle of the table. The ReSpeaker microphone array has four microphones arranged at the corners of a square and estimates the direction of sound arrival.
78
+
79
+ Participants were free to bring any kind of food and any utensil with them. They could also bring a drink (some drank from a cup, others from a bottle or both, with or without a straw) and were provided with napkins. Before the study, each participant was asked to fill in a pre-study questionnaire about their demographic background, relationship to other participants, and social dining habits. The experimenter then asked them to eat their meals and have natural conversations. At this point, the experimenter started the recording and left the room. When all three participants finished eating or after 60 minutes have passed, whichever was earlier, the experimenter stopped the recording and asked participants to fill in a post-study questionnaire about their dining experience. The specific questions asked in both pre/post-study questionnaires can be found in App. 8.1.2. The study was approved by our Institution's IRB.
80
+
81
+ Data Annotation. We annotated each participant's video based on their interactions with food, drink, and napkins. In particular, we annotated food_entered, food_lifted, food_to_mouth, drink_entered, drink_lifted, drink_to_mouth, napkin_entered, napkin_lifted, napkin_to_mouth, and mouth_open events. We chose these events as they are key transition points during feeding. We spent 151 hours annotating and used the ELAN annotation tool [55]. We assigned the annotation value $\in \{$ fork, knife, spoon, chopsticks, hand $\}$ based on the utensil performing the food-to-mouth handover. While annotating, we also noted down per-participant food types and observations of interesting behaviors. All annotation types with detailed rules are provided in App. 8.1.3.
82
+
83
+ Data Statistics. There were 56 female and 34 male participants, and their ages ranged 18-38 ($\mu = 22$, $\sigma = 3$) years. Session durations ranged 21-55 ($\mu = 37$, $\sigma = 9$) minutes, and 1 session was at breakfast, 10 at lunch, and 19 at dinner time. For additional dataset statistics, see App. 8.1.4.
84
+
85
+ For a summary of all available data in the dataset and its detailed analysis, see App. 8.1. For the purposes of this work, we only consider bite features, speaking status, gaze and head pose, and body and face keypoints.
86
+
87
+ § 6 MODEL EVALUATION ON HUMAN-HUMAN COMMENSALITY DATASET
88
+
89
+ In this section, we evaluate Triplet- and Couplet-SoNNET against other models on the HHCD. In particular, we compare against a regularized linear SVM trained with SGD to evaluate performance of linear classifiers. We also consider a Temporal Convolution Network (TCN) [56, 57], which uses causal convolutions and dilations to represent temporal data. TCNs have been found to perform better than LSTMs and GRUs on temporal anomaly detection [58] and robot food manipulation tasks [20], therefore they would provide a strong baseline to compare our models to. We also perform an ablation study to investigate the importance of various modalities. Implementation details about baseline models, SoNNET, and training procedure can be found in App. 8.2.
90
+
91
+ For training, we use 6811 food lifted annotations as positive training labels since they precede an actual bite of food and indicate an intention to eat. We use a time interval of $k = t - {t}_{0} = 6$ seconds because it takes roughly 6 seconds for the robot to move from its wait position to feeding the user. Since bite actions are sparsely distributed over time, we select 2486 6-second clips as negative samples that are in the middle of two food_lifted annotations. All reported models are trained with leave-one-session-out (LOSO) cross-validation to evaluate generalizability to new groups of people. Due to an issue with recording, we train over 29 sessions if speaking status features are used.
92
+
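+ As a rough illustration of the leave-one-session-out protocol (with the linear baseline described in this section standing in for the learned models), session IDs act as the groups in scikit-learn's LeaveOneGroupOut splitter; the window arrays, labels, and session assignments below are random placeholders rather than HHCD data.
+
+ ```python
+ import numpy as np
+ from sklearn.model_selection import LeaveOneGroupOut
+ from sklearn.linear_model import SGDClassifier
+
+ # Placeholder data: flattened 6-second windows, binary bite-intention labels,
+ # and the session each window came from (the LOSO grouping variable).
+ rng = np.random.default_rng(0)
+ X = rng.normal(size=(300, 90 * 175)).astype(np.float32)
+ y = rng.integers(0, 2, size=300)
+ sessions = rng.integers(0, 29, size=300)
+
+ accuracies = []
+ for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sessions):
+     clf = SGDClassifier(loss="hinge")  # regularized linear SVM trained with SGD
+     clf.fit(X[train_idx], y[train_idx])
+     accuracies.append(clf.score(X[test_idx], y[test_idx]))
+ print(f"LOSO mean accuracy over {len(accuracies)} held-out sessions: {np.mean(accuracies):.3f}")
+ ```
+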
93
+ The user’s bite features $b \in {\mathbb{R}}^{2}$ (time since last bite and the number of bites eaten since the start) are indicators of hunger. To ensure this feature is not dominated by higher dimensional features, we scale the size of the input by $\gamma$ . This hyperparameter $\gamma$ scales $b \in {\mathbb{R}}^{2} \rightarrow b \in {\mathbb{R}}^{2\gamma }$ . We selected $\gamma = {100}$ after a grid search over the training set on the TCN and SoNNET models.
94
+
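+ A tiny sketch of one plausible reading of this up-scaling: the two bite features are simply repeated $\gamma$ times per frame so that they are not drowned out by the 168-d keypoint stream; the exact replication scheme is an assumption on our part.
+
+ ```python
+ import numpy as np
+
+ gamma = 100
+ bite_feats = np.array([[12.5, 7.0]])                  # (frames, 2): seconds since last bite, bites so far (placeholder values)
+ bite_feats_scaled = np.tile(bite_feats, (1, gamma))   # (frames, 2 * gamma)
+ print(bite_feats_scaled.shape)                        # (1, 200)
+ ```
+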
95
+ Evaluation Metrics. A high recall indicates that our model can reliably feed when it should. In contrast, a high precision indicates that a model tends to be stricter in deciding when to feed. Due to our dataset imbalance, the average accuracy across 29 sessions for a model that predicts it should always feed is 71.56%. This classifier achieves perfect recall, and relatively high precision, causing the model to have a high F1 score. It is clear that given this class imbalance, a high F1 score poorly represents the capabilities of this model. To evaluate our model effectively, we consider the normalized Matthews Correlation Coefficient (nMCC) in addition to F1 score, precision, recall, and accuracy. Unlike F1 score, nMCC considers the size of the majority and minority classes, and can only produce high scores if a classifier is able to make correct predictions for a majority of both the negative and positive classes [59]. A value of 0.5 indicates random prediction, while 0 is inverse prediction and 1 is perfect prediction.
96
+
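+ For concreteness, a small sketch of the normalized MCC, assuming the usual normalization nMCC = (MCC + 1) / 2, which matches the stated range (0.5 random, 1 perfect, 0 inverse); the example labels are placeholders.
+
+ ```python
+ from sklearn.metrics import matthews_corrcoef
+
+ def normalized_mcc(y_true, y_pred):
+     """Map MCC from [-1, 1] to [0, 1]: 0.5 = chance-level, 1 = perfect, 0 = inverse."""
+     return (matthews_corrcoef(y_true, y_pred) + 1.0) / 2.0
+
+ # An always-feed classifier on an imbalanced set: perfect recall, but nMCC stays at 0.5.
+ y_true = [1, 1, 1, 0, 1, 0, 1, 1]
+ print(normalized_mcc(y_true, [1] * len(y_true)))  # 0.5
+ ```
+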
97
+ Effects of Modality. We are interested in investigating features that are the most informative for designing a good bite timing predictor in social dining. We perform a feature ablation study on the Triplet-SoNNET model, as shown in Table 1. We selectively remove feature streams, such as body and face data from OpenPose, gaze and head features from RT-GENE, speaking status signals, and the user's bite features. We find that users' bite features such as the time since last bite and the number of bites are important, as accuracy drops drastically without them. Intuitively, we believe this feature is important because a user's bite features are a proxy for their level of hunger. We also see that without body and face features, F1 and recall slightly increase. This could be due to the fact these data streams are noisy; however, as indicated by the lower accuracy and nMCC when removing OpenPose features, these features are useful.
98
+
99
+ Effects of Model Type. Table 2 shows the outcomes of various model comparisons when trained using LOSO. We compare performance of Triplet-SoNNET against a linear SVM and TCN trained on all three diners. We call this TCN a Triplet-TCN. Triplet-TCN has all the diners' features concatenated per-timestep, and we compare this result to Triplet-SoNNET. We find that Triplet-SoNNet achieves higher accuracy and nMCC compared to all other models; however, it performs worse on recall and F1 score compared to Triplet-TCN. In our scenario, we want to ensure that the robot feeds when it should and does not feed when it should not. A bite prediction model that overfeeds or underfeeds is not ideal. A high nMCC balances the roles of recall and precision and better represents whether a classifier should or should not feed. Therefore, for our scenarios, Triplet-SoNNET is a more effective predictor of bite timing than other models trained on all three diners.
100
+
101
+ Effects of Social Scenario. We are interested in comparing the ability of models to learn social behaviors using only two co-diners' features rather than having full observability. We compare Couplet-SoNNET to a similarly-named Couplet-TCN trained on two co-diners' features and a user's bite features. As expected, Couplet-TCN and Couplet-SoNNET perform worse than their Triplet-counterparts, with Couplet-TCN being close to random prediction with an nMCC of 0.5539 while Couplet-SoNNET has an nMCC of 0.6648 . We find that Couplet-SoNNET performs better than Couplet-TCN. This result reveals Couplet-SoNNET is able to understand social signals better than a predictor that always feeds. This implies that it is possible to predict the behavior of a user using only their co-diner information, which indicates that there is social coordination in human-human commensality. These findings also suggest that social signals were captured by the interleaving structure of the SoNNET models.
102
+
103
+ Table 1: Ablation study on different modalities from various data sources. We report averages over LOSO cross-validation.
104
+
105
+ Method              Acc.    Prec.   Rec.    F1      nMCC
+ Triplet-SoNNET      0.820   0.861   0.871   0.862   0.772
+ - Speaking Status   0.816   0.864   0.863   0.856   0.771
+ - Gaze & Head Pose  0.815   0.863   0.863   0.856   0.769
+ - Bite Features     0.781   0.832   0.855   0.834   0.727
+ - Body & Face       0.820   0.854   0.886   0.865   0.771
125
+
126
+ Table 2: Model comparison on LOSO cross-validation over 29 sessions.
127
+
128
+ Method              Acc.    Prec.   Rec.    F1      nMCC
+ Always Feed         0.72    0.72    1       0.83    0.5
+ Linear SVM (SGD)    0.68    0.82    0.77    0.74    0.64
+ Triplet-TCN         0.82    0.82    0.96    0.88    0.72
+ Triplet-SoNNET      0.82    0.86    0.87    0.86    0.77
+ Couplet-TCN         0.73    0.73    0.98    0.83    0.55
+ Couplet-SoNNET      0.76    0.78    0.96    0.85    0.66
151
+
152
+ § 7 TRANSFERRING FROM HUMAN-HUMAN TO HUMAN-ROBOT COMMENSALITY
153
+
154
+ Our objective is to develop a bite timing strategy for a robot that feeds a user in a social dining setting. We design a study where users evaluate the effect of different bite timing strategies on their overall social dining experience. To simulate robotic caregiving scenarios for people with upper-extremity mobility limitations, we instructed users to not move their upper body. This study was approved by our Institution's IRB.
155
+
156
+ Experimental Setup. We evaluate three bite timing strategies for triggering the robot to feed a user, also depicted in Fig. 3 (middle):
157
+
158
+ 1. Learned Timing. This social, fully autonomous bite timing strategy feeds based on our Couplet-SoNNET model's output. We sample this model every three seconds with the last six seconds of preprocessed features at a rate of 15 frames per second (a rolling-buffer sketch of this polling loop is given after this list). This approach takes into account the social context. Since we want to evaluate the generalization performance, we train Couplet-SoNNET on 80% of the HHCD data and use the remaining 20% of HHCD data to select early-stopping criteria.
159
+
160
+ 2. Fixed-Interval Timing. This fully autonomous bite timing strategy feeds every 44.5 seconds, which is a scaled average time a robot should take to feed a user after it has picked up a food item. To derive this value, we first find the appropriate scaling factor between human motion from the HHCD and robot motion. We note the average time for a human from the food_entered transition to food-lifted transition is 9.9 seconds. The robot end-effector motion is not designed to match the human speed but rather to be perceived as safe and comfortable to a user being fed. We find the equivalent key transitions for the robot to be ${5x}$ slower than a human. Since we define the intention to take a bite as when the food is lifted, the robot should take 49.5 seconds to feed a user after picking up a food item. Given the robot takes roughly 5 seconds to move to its wait position after picking the food, the robot waits 44.5 seconds.
161
+
162
+ 3. Mouth-Open Timing. This partially autonomous bite timing strategy feeds only when the user prompts the robot by opening their mouth. The target user is prompted each time by the robot saying "When ready, look at me and open your mouth". This approach gives the user explicit control of when the robot should feed [38].
163
+
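+ The rolling-buffer sketch referenced in condition 1 above is given here: a deque holds the last six seconds of per-frame features (assumed 90 frames at 15 fps) and a placeholder model is polled every three seconds; extract_features, model_predicts_bite, and trigger_feeding are illustrative stand-ins for the real perception pipeline, the Couplet-SoNNET wrapper, and the robot interface.
+
+ ```python
+ import random
+ import time
+ from collections import deque
+
+ FPS, WINDOW_S, POLL_S = 15, 6.0, 3.0
+
+ def extract_features():            # placeholder: per-frame multimodal features
+     return [random.random() for _ in range(173 + 173 + 2)]
+
+ def model_predicts_bite(window):   # placeholder stand-in for querying the learned model
+     return random.random() > 0.9
+
+ def trigger_feeding():             # placeholder stand-in for starting the bite transfer
+     print("feeding triggered")
+
+ buffer = deque(maxlen=int(FPS * WINDOW_S))   # rolling 6-second window (90 frames)
+ last_poll = time.monotonic()
+ for _ in range(30 * FPS):                    # placeholder for "while the meal is ongoing"
+     buffer.append(extract_features())
+     now = time.monotonic()
+     if len(buffer) == buffer.maxlen and now - last_poll >= POLL_S:
+         last_poll = now
+         if model_predicts_bite(list(buffer)):
+             trigger_feeding()
+     time.sleep(1.0 / FPS)
+ ```
+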
164
+ The robot user is seated on a wheelchair mounted with a Kinova Gen3 6-DoF arm [60], which is used to feed the participant (Fig. 3, left). For implementation details of the robot study, see App. 8.3.
165
+
166
+ Experimental Procedure. In this study, participants are seated in a similar setup as that used for HHCD data collection in Sec. 5. All participants were asked to bring their own food, and each group chose who would be fed by the robot. We recruited 30 participants over 10 sessions. There were 16 female and 14 male participants, and their ages ranged from 19-70 $\left( {\mu = {27},\sigma = 9}\right)$ years.
167
+
168
+ A single trial consists of bite acquisition, followed by one of the three bite timing strategies, then bite transfer. For bite acquisition, the robot alternates feeding the user cantaloupes and strawberries. We chose these fruits due to their high acquisition success rates [24]. We used the bite acquisition strategies and bite transfer strategies from [25, 26]. All participants take a survey after each trial, which administers a forced-choice question on the participants' preferences between the previous and current conditions. Each pair of comparisons between any two conditions occurs three times, leading to ten trials. The condition orderings are counterbalanced over ten trials. Additionally,
169
+
170
+ < g r a p h i c s >
171
+
172
+ Figure 3: Left: We use a 6-DoF Kinova Gen3 robotic arm [60] on a Rovi wheelchair [61]. Middle: User study conditions/bite timing strategies: Learned, Fixed-Interval, and Mouth-Open Timings. Top right: Preferences for bite timing strategies rated by users, two co-diners, and all three diners. Bottom right: Level of distraction by the robot perceived by users, two co-diners, and all three diners on a Likert scale 1-5 (agreement with "I felt distracted by the robot"), for each bite timing strategy. $*$, $**$, $***$ denote statistically significant differences with $p < 0.05$, $p < 0.005$, $p < 0.0005$, respectively.
173
+
174
+ we ask participants whether they felt the robot fed them too early, on-time, or too late. To avoid interruptions in social conversations due to the presence of a robot in human groups, we provide the participants with a list of questions (see App. 8.3.1), which they could optionally use to help get the conversation started at each trial, similarly to previous work [37]. The post-study questionnaire included questions after each trial or condition about bite timing appropriateness, distractions due to the robot, ability to have natural conversations, ability to feel comfortable around the robot, as well as system reliability and trust in the robot [62].
175
+
176
+ Results and Discussion. As shown in Fig. 3 (top right), users and co-diners preferred the Learned strategy for bite timing as compared to Fixed-Interval or Mouth-Open Timing. This confirms our insight that incorporating social signals into the model structure (SoNNET) improves bite timing prediction. These results using Couplet-SoNNET also imply that it is possible to predict the behavior of a user using only their co-diner information, which indicates that there is social coordination in human groups even in the presence of a robot. In Fig. 3 (bottom right) we further compared the level of distraction by the robot as perceived by participants. We performed Kruskal-Wallis H-tests and Tukey HSD post-hoc tests and found that Mouth-Open Timing distracts dining participants significantly more than Learned or Fixed-Interval Timing. We believe this is because the Mouth-Open strategy prompts the user using a voice interface, which can disrupt the rhythm of conversation. Even though the participants had a clear preference for the Learned strategy when given a forced choice, when asked to individually rate the conditions using a 5-point Likert scale, we interestingly could not find any statistically significant differences between Mouth-Open Timing and Learned Timing. This is probably because the Mouth-Open Timing strategy provides full control of bite timing to the users themselves. Note that, regardless of the conditions, the users found the system comfortable, reliable, and trustworthy. Detailed analysis is given in App. 8.3.
177
+
178
+ Limitations. There is a risk that our results from human-robot user studies with healthy adults may not generalize to people with mobility limitations. People with mobility limitations may have different preferences and cognitive workloads associated with a robotic intervention. It remains to be investigated how our models perform with our target population. We also made multiple assumptions when transferring our results from human-human to human-robot commensality scenarios. During human-human commensality, the user was self-feeding, whereas in human-robot commensality the user was being fed. We also assumed that the addition of a robot into a human-human commensality scenario does not significantly change the social dynamics of the diners. Given these assumptions, it would be interesting to explore how our models perform when trained on similar human-robot commensality scenarios. Finally, it is an open question how these models would perform with groups of different cultures. This motivates further investigation into human-robot commensality, both from technical and societal perspectives.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/8-8e18idYLD/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,289 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Online Dynamics Learning for Predictive Control with an Application to Aerial Robots
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: In this work, we consider the task of improving the accuracy of dynamic models for model predictive control (MPC) in an online setting. Even though prediction models can be learned and applied to model-based controllers, these models are often learned offline. In this offline setting, training data is first collected and a prediction model is learned through an elaborated training procedure. After the model is trained to a desired accuracy, it is then deployed in a model predictive controller. However, since the model is learned offline, it does not adapt to disturbances or model errors observed during deployment. To improve the adaptiveness of the model and the controller, we propose an online dynamics learning framework that continually improves the accuracy of the dynamic model during deployment. We adopt knowledge-based neural ordinary differential equations (KNODE) as the dynamic models, and use techniques inspired by transfer learning to continually improve the model accuracy. We demonstrate the efficacy of our framework with a quadrotor robot, and verify the framework in both simulations and physical experiments. Results show that the proposed approach is able to account for disturbances that are possibly time-varying, while maintaining good trajectory tracking performance.
12
+
13
+ Keywords: Online Learning, Model Learning, Model Predictive Control, Aerial Robotics
14
+
15
+ ## 1 Introduction
16
+
17
+ In recent years, model predictive control (MPC) has shown significant potential in robotic applications. As an optimization-based approach that uses prediction models, the MPC framework leverages readily available physics models or accurate data-driven models to allow robotic systems to achieve good closed-loop performance for a variety of tasks. However, the reliance on accurate dynamic models makes it a challenging task for the controller to adapt to system changes or environmental uncertainties. In the case where the robot dynamics change during deployment or in the presence of disturbances, it becomes essential for the controller to update and adapt its dynamic model in order to maintain good performance. Recent advancements in deep learning techniques have demonstrated promising results in using neural networks to model dynamical systems. One advantage of neural networks over physics models is that they alleviate the need for bottom-up construction of dynamics, which often requires expert knowledge or physical intuition. Neural networks are also becoming faster to optimize, owing to the development of optimization algorithms. This makes the application of neural networks to model predictive controllers more amenable, allowing the controllers to adapt to disturbances or system changes by refining their dynamic models. In this work, we propose a novel online dynamics learning algorithm using a deep learning method, knowledge-based neural ordinary differential equations. Through both simulations and physical experiments, we show that our framework, which comprises a suite of algorithms, can improve the adaptiveness of MPC by continually refining its dynamic model of a quadrotor system, and in turn maintain desirable closed-loop performance during deployment.
18
+
19
+ Related Work. Traditional supervised machine learning paradigms consist of two phases - training and inference. The training phase requires data to be collected in advance and then takes place in an offline fashion. After the model is trained, it is then deployed for inference. Online learning, on the other hand, performs learning with data arriving in a sequential order. Notably, online learning has been used to transfer existing knowledge to assist training [1, 2], predict general time series data [3, 4], and learn inverse dynamics in a derivative-free manner [5].
20
+
21
+ Learning dynamic models has gathered increasing attention because of developments in scientific machine learning. A spectrum of techniques has emerged, ranging from blackbox approaches [6, 7] to methods that require some known structure [8, 9]. In the middle of this spectrum, techniques have been developed to combine physics knowledge with machine learning [10, 11, 12, 13], and physical priors have also been incorporated into neural networks [14]. A library based on continuous-time deep learning techniques has been developed for scientific machine learning [15].
22
+
23
+ There are a number of works that use models learned from data within an MPC framework. In [16, 17], Gaussian processes are used to characterize dynamics models for MPC, with applications to autonomous vehicles and quadrotors. Another framework that uses data-driven models for control is model-based reinforcement learning (MBRL). In [18], the authors use feed-forward networks to learn a dynamic model, which is then integrated into a stochastic predictive control framework. The authors in [19] use MBRL to design low-level controllers using sensors on-board a robot. In [20], the authors use a hybrid model that combines a first-principles model with a neural network. Our work is largely inspired by [20], but instead of learning the model offline, we devise a framework that allows the dynamic model to be continually improved during deployment. The motivation for online dynamics learning is that the model should be able to account for uncertainty and disturbances that are not captured by the training data. To the authors' best knowledge, online learning techniques have yet to be developed specifically for learning dynamical systems and applied within model predictive control schemes.
24
+
25
+ ## 2 Problem Formulation
26
+
27
+ Given a robot that uses a predictive model-based controller, we aim to use data to continually improve its closed-loop performance in an online setting. Specifically, we seek to continually refine the dynamic model of the robot during deployment. Let the true model of a robot be given by
28
+
29
+ $$
30
+ \dot{\mathbf{x}} = f\left( {\mathbf{x},\mathbf{u}}\right) , \tag{1}
31
+ $$
32
+
33
+ where the function $f$ is the dynamics of the robot, the vector $\mathbf{x}$ is the state of the robot, and the vector $\mathbf{u}$ is the control input. The output of the dynamics is $\dot{\mathbf{x}}$ , the derivative of the state with respect to time. Let the time stamps of a deployment of the robot for some tasks be $T = \left\lbrack {{t}_{0},{t}_{1},\cdots ,{t}_{N}}\right\rbrack$ . The training samples generated by the true dynamics in (1) during this deployment are $\mathcal{S} = \left\lbrack {\left( {\mathbf{x}\left( {t}_{0}\right) ,\mathbf{u}\left( {t}_{0}\right) }\right) ,\left( {\mathbf{x}\left( {t}_{1}\right) ,\mathbf{u}\left( {t}_{1}\right) }\right) ,\cdots ,\left( {\mathbf{x}\left( {t}_{N}\right) ,\mathbf{u}\left( {t}_{N}\right) }\right) }\right\rbrack$ . We seek to estimate the robot dynamics given by
34
+
35
+ $$
36
+ \dot{\mathbf{x}} = \widehat{f}\left( {\mathbf{x},\mathbf{u}}\right) . \tag{2}
37
+ $$
38
+
39
+ Given some performance metric on $\mathcal{S}$ , we seek to improve it by refining $\widehat{f}$ using the streams of data collected during deployment.
40
+
41
+ ## 3 Online Dynamics Learning
42
+
43
+ ### 3.1 Knowledge-based Neural Ordinary Differential Equations
44
+
45
+ Neural ordinary differential equations (NODE) were first proposed to approximate continuous-depth residual networks [6]. They have since been used in scientific machine learning to model a wide variety of dynamical systems [11]. Knowledge-based neural ordinary differential equations (KNODE) extend NODE by coupling first-principles knowledge with neural networks, and have been shown to learn continuous-time models with improved generalizability. Three aspects of KNODE make it a suitable candidate for our online learning task. First, it requires less data for training, which means each data collection period during robot deployment can be short, thereby improving the adaptiveness of the MPC. Second, KNODE is a continuous-time dynamic model, which is compatible with many existing MPC frameworks. Third, many robotic systems have readily available physics models that can be used as knowledge. The original KNODE was applied only to uncontrolled dynamical systems, but variants have been developed to incorporate control inputs into the dynamic models [20]. In this work, we take an approach similar to [20], where we concatenate the state and control as $\mathbf{z} = {\left\lbrack \mathbf{x},\mathbf{u}\right\rbrack }^{T}$ . During training, the control components are simply ignored when computing the loss.
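
To make the coupling concrete, the following sketch shows one way the hybrid dynamics $\widehat{f}(\mathbf{z}, t)$ could be assembled from a nominal model and a small fully connected network. The use of PyTorch, the per-state coupling weights, and the layer sizes are our own illustrative assumptions rather than details specified in the paper.

```python
import torch
import torch.nn as nn

class KNODE(nn.Module):
    """Hybrid dynamics f_hat(z, t): a sketch coupling physics knowledge with a neural network.

    `nominal` is a callable implementing the knowledge term f_tilde(z, t); the
    simple per-state coupling below stands in for the selection matrix M_psi.
    """
    def __init__(self, nominal, state_dim, input_dim, hidden=64):
        super().__init__()
        self.nominal = nominal
        self.state_dim = state_dim
        self.net = nn.Sequential(                           # f_theta(z, t)
            nn.Linear(state_dim + input_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )
        self.psi = nn.Parameter(torch.ones(state_dim, 2))   # coupling weights (illustrative)

    def forward(self, t, z):
        # z is a 1-D tensor concatenating the state x and the control u.
        x_dot_phys = self.nominal(z, t)                     # knowledge term f_tilde
        x_dot_nn = self.net(z)                              # learned term f_theta
        x_dot = self.psi[:, 0] * x_dot_phys + self.psi[:, 1] * x_dot_nn
        # The control is held constant over an integration step, so its time
        # derivative is set to zero and ignored when computing the loss.
        return torch.cat([x_dot, torch.zeros_like(z[self.state_dim:])])
```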
46
+
47
+ ![01963f75-9ec0-7c3c-abab-e0406e04c8da_2_371_205_1051_414_0.jpg](images/01963f75-9ec0-7c3c-abab-e0406e04c8da_2_371_205_1051_414_0.jpg)
48
+
49
+ Figure 1: Framework overview. (a) The training process runs in parallel with the robot carrying out its tasks. The robot states and control inputs are collected for training, and the trained model is used to update the dynamic model in the controller. (b) The overall model consists of a nominal model and repeatedly added neural networks. Each neural network and the nominal model run in parallel, with their outputs coupled to give the state derivative.
50
+
51
+ Using the KNODE model, the dynamics in (2) are expressed as $\widehat{f}\left( {\mathbf{z}, t}\right) = {M}_{\psi }\left( {\widetilde{f}\left( {\mathbf{z}, t}\right) ,{f}_{\theta }\left( {\mathbf{z}, t}\right) }\right)$ , where $\widetilde{f}$ is the physics knowledge, ${f}_{\theta }$ is a neural network parametrized by $\theta$ , and ${M}_{\psi }$ is a selection matrix that couples the neural network with the knowledge. A loss function is then given by
52
+
53
+ $$
54
+ \mathcal{L}\left( {\theta ,\psi }\right) = \frac{1}{m - 1}\mathop{\sum }\limits_{{i = 1}}^{{m - 1}}{\int }_{{t}_{i}}^{{t}_{i + 1}}\delta \left( {{t}_{s} - \tau }\right) \parallel \widehat{\mathbf{x}}\left( \tau \right) - \mathbf{x}\left( \tau \right) {\parallel }^{2}{d\tau } + \mathcal{R}\left( {\theta ,\psi }\right) , \tag{3}
55
+ $$
56
+
57
+ where $\delta$ is the Dirac delta function, ${t}_{s} \in T$ is any sampling time in $T$ , and $\mathcal{R}$ is the regularization on the neural network and coupling matrix parameters. The estimated state $\widehat{\mathbf{x}}\left( \tau \right)$ in (3) comes from $\widehat{\mathbf{z}}\left( \tau \right)$ , which is given by
58
+
59
+ $$
60
+ \widehat{\mathbf{z}}\left( \tau \right) = \mathbf{z}\left( {t}_{i}\right) + {\int }_{{t}_{i}}^{\tau }\widehat{f}\left( {\mathbf{z}\left( \omega \right) ,\omega }\right) {d\omega }. \tag{4}
61
+ $$
62
+
63
+ Intuitively, $\widehat{\mathbf{z}}\left( \tau \right)$ is the state at $t = \tau$ generated using $\widehat{f}$ with the initial condition $\mathbf{z}\left( {t}_{i}\right)$ , and the loss function (3) computes the mean squared error between the one-step-ahead estimates of the states and the ground truth data. The integration in (4) is computed using numerical solvers in practice.
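
As a concrete illustration of (3)-(4), the sketch below computes the one-step-ahead loss by numerically integrating the model from each sample to the next sampling time. It assumes the `torchdiffeq` package (the reference implementation accompanying [6]) and the `KNODE` module sketched above; the variable names and the simple L2 regularizer are our own choices.

```python
import torch
from torchdiffeq import odeint_adjoint as odeint  # adjoint sensitivity method, as in [6]

def one_step_loss(model, z_data, t_data, reg_weight=1e-4):
    """Mean squared one-step-ahead prediction error, cf. (3)-(4).

    z_data: tensor of shape (m, state_dim + input_dim), rows [x(t_i), u(t_i)]
    t_data: tensor of shape (m,) with the sampling times
    """
    m = z_data.shape[0]
    loss = 0.0
    for i in range(m - 1):
        # z_hat(t_{i+1}) obtained by integrating f_hat from the initial condition z(t_i), cf. (4)
        z_hat = odeint(model, z_data[i], t_data[i:i + 2])[-1]
        x_hat = z_hat[:model.state_dim]                 # only the state part enters the loss
        x_true = z_data[i + 1, :model.state_dim]
        loss = loss + torch.sum((x_hat - x_true) ** 2)
    reg = reg_weight * sum(p.pow(2).sum() for p in model.parameters())  # R(theta, psi)
    return loss / (m - 1) + reg
```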
64
+
65
+ The optimization of the neural network parameters in KNODE can be done with either backpropagation or the adjoint sensitivity method, which is a memory-efficient alternative to backpropagation. In this work, the adjoint sensitivity method is used, similar to [6]. In the following sections, the dynamic models are all based on KNODE.
66
+
67
+ Algorithm 1 Data collection and model updates
68
+
69
+ ---
70
+
71
+ 1: Initialize the current time, total duration, and the collection interval as ${t}_{i},{t}_{N}$ , and ${t}_{col}$
72
+
73
+ ${t}_{i} \leftarrow 0$
74
+
75
+ OnlineData $\leftarrow \left\lbrack \right\rbrack$
76
+
77
+ while ${t}_{i} < {t}_{N}$ do
78
+
79
+ if New model is available then
80
+
81
+ Controller updates new model
82
+
83
+ end if
84
+
85
+ if ${t}_{i}$ is not 0 and ${t}_{i}$ modulo ${t}_{col}$ is 0 then
86
+
87
+ Save OnlineData
88
+
89
+ OnlineData $\leftarrow \left\lbrack \right\rbrack$
90
+
91
+ end if
92
+
93
+ Robot updates state using control input
94
+
95
+ Append new robot state and control input to OnlineData
96
+
97
+ ${t}_{i} \leftarrow {t}_{i} + {dt}$
98
+
99
+ end while
100
+
101
+ ---
102
+
103
+ Algorithm 2 Online dynamics learning
104
+
105
+ ---
106
+
107
+ Initialize the current time and total duration as ${t}_{i}$ and ${t}_{N}$
108
+
109
+ ${t}_{i} \leftarrow 0$
110
+
111
+ while ${t}_{i} < {t}_{N}$ do
112
+
113
+ while No new data available do
114
+
115
+ Wait
116
+
117
+ end while
118
+
119
+ Train a new model with the newest data $\vartriangleright$ new neural network added each iteration
120
+
121
+ Save the trained model
122
+
123
+ end while
124
+
125
+ ---
126
+
127
+ ### 3.2 Online Data Collection and Learning
128
+
129
+ The basic features of the proposed online dynamics learning algorithm include the dynamic model update logic and a trainer that runs in parallel with the robot; these are described in Alg. 1 and Alg. 2, respectively.
130
+
131
+ For data collection, the robot state is repeatedly appended to a data array during deployment. This data array is saved and reset (Lines 10 and 11 of Alg. 1) at regular time intervals (Line 9 of Alg. 1). A key hyperparameter of data collection is the collection interval ${t}_{col}$ , which dictates how much data (measured in seconds) is collected in each batch for training. With a longer collection interval, the adaptiveness of the algorithm decreases, because it not only takes longer to collect data but also takes more time to train a new model. This leads to less frequent model updates for the controller, and therefore less adaptiveness to system and environment changes. With a shorter collection interval, the adaptiveness improves, but a model is more likely to overfit during training due to the smaller training data size.
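
The collection and update logic of Alg. 1 can be summarized in code as follows. The `robot`, `controller`, and `trainer` interfaces are placeholders we introduce for illustration; resetting the partially collected batch whenever the model is swapped in reflects the requirement, noted later in this section, that each training batch come from a single controller.

```python
def collect_and_update(robot, controller, trainer, t_N, t_col, dt):
    """Sketch of the data collection and model update loop in Alg. 1.

    The robot, controller, and trainer objects are illustrative placeholders,
    not interfaces defined in the paper.
    """
    steps_per_batch = int(round(t_col / dt))             # collection interval in steps
    online_data = []
    for k in range(int(round(t_N / dt))):
        if trainer.new_model_available():                 # Alg. 1, model update logic
            controller.set_model(trainer.latest_model())
            online_data = []                              # drop data mixing old and new controllers
        elif k > 0 and k % steps_per_batch == 0:          # Alg. 1, save and reset the batch
            trainer.submit(online_data)                   # hand the batch to the trainer (Alg. 2)
            online_data = []
        u = controller.compute_input(robot.state)         # robot updates state using control input
        x = robot.step(u)
        online_data.append((x, u))
```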
132
+
133
+ When new data becomes available, a model is trained as described in Alg. 2. A challenge for online dynamic model learning is how to continually improve the model with newly collected data. When the model gets updated, the closed-loop trajectory of the controller changes accordingly, and any new data collected will reflect the updated controller. As a result, the previous dynamic model needs to be preserved, and continuing to train the neural network already onboard does not serve this purpose, as it alters the previous controller. To tackle this problem, we repeatedly add new neural networks in parallel with the existing model during robot deployment, as illustrated in Fig. 1 (b). Mathematically, given a nominal model $\widetilde{f}$ as the first dynamic model estimate, it is recursively augmented by
134
+
135
+ $$
136
+ {\widehat{f}}^{\left( i + 1\right) } = {M}_{{\psi }_{\left( i + 1\right) }}\left( {{\widehat{f}}^{\left( i\right) },{f}_{{\theta }_{\left( i + 1\right) }}}\right) , \tag{5}
137
+ $$
138
+
139
+ $$
140
+ {\widehat{f}}^{\left( 0\right) } = \widetilde{f},
141
+ $$
142
+
143
+ where the index $\left( {i + 1}\right)$ denotes the $\left( {i + 1}\right)$ th update to the dynamic model. As mentioned earlier, the previous controller needs to be preserved, and therefore only the parameters ${\theta }_{\left( i + 1\right) }$ and ${\psi }_{\left( i + 1\right) }$ are trained for the $\left( {i + 1}\right)$ th update, while all previously added neural networks are frozen.
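
One way to realize the recursion in (5) while keeping earlier parameters fixed is sketched below. The additive coupling is a simplification of the learned coupling ${M}_{\psi }$ , and the PyTorch idioms are our own assumptions.

```python
import torch.nn as nn

class AugmentedKNODE(nn.Module):
    """f_hat^{(i+1)} built from f_hat^{(i)} and a newly added network, cf. (5)."""
    def __init__(self, previous_model: nn.Module, new_net: nn.Module):
        super().__init__()
        self.previous = previous_model                 # f_hat^{(i)}, with f_hat^{(0)} = f_tilde
        self.new_net = new_net                         # f_{theta_{(i+1)}}
        for p in self.previous.parameters():           # freeze all previously trained networks
            p.requires_grad_(False)

    def forward(self, t, z):
        # Only the newly added network is trainable; `new_net` must return a
        # correction with the same shape as the output of the previous model.
        return self.previous(t, z) + self.new_net(z)

# After each batch of online data, the model is wrapped once more:
#     model = AugmentedKNODE(model, make_new_network())
```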
144
+
145
+ Also note that after each controller update, the next set of data for training must be produced entirely by the updated controller. If a set of data is collected using a mix of old and new controllers, it must be discarded so that the next set of training data represents the most up-to-date controller. This requirement is reflected in Line 4 of Alg. 2.
146
+
147
+ ### 3.3 Applying Learned Models in MPC
148
+
149
+ Inspired by the framework in [20], we apply the learned dynamic model in an MPC framework. Specifically, we solve the following constrained optimization problem in a receding horizon manner,
150
+
151
+ $$
152
+ \mathop{\operatorname{minimize}}\limits_{\substack{{{x}_{0},\ldots ,{x}_{N},} \\ {{u}_{0},\ldots ,{u}_{N - 1}} }}\mathop{\sum }\limits_{{i = 1}}^{{N - 1}}\left( {{x}_{i}^{T}Q{x}_{i} + {u}_{i}^{T}R{u}_{i}}\right) + {x}_{N}^{T}P{x}_{N} \tag{6a}
153
+ $$
154
+
155
+ $$
156
+ \text{subject to}\;{x}_{i + 1} = f\left( {{x}_{i},{u}_{i}}\right) ,\;\forall i = 0,\ldots , N - 1\text{,} \tag{6b}
157
+ $$
158
+
159
+ $$
160
+ {x}_{0} = x\left( t\right) , \tag{6c}
161
+ $$
162
+
163
+ $$
164
+ {x}_{i} \in \mathcal{X},\;{u}_{i} \in \mathcal{U},\;\forall i = 0,\ldots , N - 1, \tag{6d}
165
+ $$
166
+
167
+ $$
168
+ {x}_{N} \in {\mathcal{X}}_{f}, \tag{6e}
169
+ $$
170
+
171
+ where ${x}_{i},{u}_{i}$ are the predicted states and control inputs, $N$ is the horizon and $\mathcal{X},\mathcal{U},{\mathcal{X}}_{f}$ are the state, control input and terminal state constraint sets. $f\left( {\cdot , \cdot }\right)$ in (6b) is a discretized version of the learned KNODE model, as described in Section 3.1. This model is used to predict the future states within the horizon; a more precise model gives more accurate predictions of the states, which in turn lead to more effective control actions. The weighting matrices $Q$ and $R$ penalize the states and control inputs within the cost function, and $P$ is the terminal state cost matrix. $x\left( t\right)$ is the state obtained at time step $t$ , which acts as an input to the optimization problem (6). Upon solving (6), the first element ${u}_{0}^{ * }$ of the optimal control sequence is applied to the robot as the control action. The robot moves according to this control action and generates new state measurements $x\left( {t + 1}\right)$ , which are used to solve (6) at the next time step.
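
As an illustration, the receding-horizon problem (6) could be set up with CasADi's Opti interface (the solver stack used later in Section 4.2). The discretized model `f_disc`, the weighting matrices, and the box bounds standing in for $\mathcal{X}$ and $\mathcal{U}$ are placeholders, and the terminal set constraint (6e) is omitted in this sketch.

```python
import casadi as ca

def make_mpc(f_disc, nx, nu, N, Q, R, P, x_lb, x_ub, u_lb, u_ub):
    """Receding-horizon controller for (6); f_disc(x, u) is assumed to be a
    CasADi-compatible discretization of the learned KNODE model."""
    opti = ca.Opti()
    X = opti.variable(nx, N + 1)
    U = opti.variable(nu, N)
    x0 = opti.parameter(nx)                                          # measured state x(t), cf. (6c)

    cost = 0
    for i in range(N):
        cost += ca.mtimes([X[:, i].T, Q, X[:, i]]) + ca.mtimes([U[:, i].T, R, U[:, i]])
        opti.subject_to(X[:, i + 1] == f_disc(X[:, i], U[:, i]))     # dynamics constraint (6b)
        opti.subject_to(opti.bounded(x_lb, X[:, i], x_ub))           # state constraints (6d)
        opti.subject_to(opti.bounded(u_lb, U[:, i], u_ub))           # input constraints (6d)
    cost += ca.mtimes([X[:, N].T, P, X[:, N]])                       # terminal cost
    opti.subject_to(X[:, 0] == x0)                                   # initial condition (6c)
    opti.minimize(cost)
    opti.solver("ipopt")

    def solve(x_meas):
        opti.set_value(x0, x_meas)
        sol = opti.solve()
        return sol.value(U[:, 0])                 # apply only the first optimal input u_0*
    return solve
```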
172
+
173
+ ## 4 Simulations
174
+
175
+ To demonstrate the efficacy of the framework in real-world robotic applications, we apply it to a quadrotor system and conduct tests in both simulations and physical experiments.
176
+
177
+ ### 4.1 Dynamics of a Quadrotor System
178
+
179
+ To apply the KNODE-MPC-Online framework, we first construct a KNODE model by combining a nominal model derived from physics with a neural network. For the quadrotor, the nominal model can be derived from its equations of motion,
180
+
181
+ $$
182
+ m\ddot{\mathbf{r}} = m\mathbf{g} + \mathbf{R}\eta , \tag{7a}
183
+ $$
184
+
185
+ $$
186
+ \mathbf{I}\dot{\mathbf{\omega }} = \mathbf{\tau } - \mathbf{\omega } \times \mathbf{I}\mathbf{\omega }, \tag{7b}
187
+ $$
188
+
189
+ where $\mathbf{r}$ and $\mathbf{\omega }$ are the position and angular rates of the quadrotor, and $\eta ,\tau$ are the thrust and moments generated by the motors of the quadrotor. $\mathbf{g}$ is the gravity vector and $\mathbf{R}$ is the transformation matrix that maps $\eta$ to the accelerations. $m$ and $\mathbf{I}$ are the mass and inertia matrix of the quadrotor. Furthermore, by defining the state as $\mathbf{x} \mathrel{\text{:=}} {\left\lbrack \begin{array}{llll} {\mathbf{r}}^{\top } & {\dot{\mathbf{r}}}^{\top } & {\mathbf{q}}^{\top } & {\mathbf{\omega }}^{\top } \end{array}\right\rbrack }^{\top }$ and the control input as $\mathbf{u} \mathrel{\text{:=}} {\left\lbrack \begin{array}{ll} \eta & {\mathbf{\tau }}^{\top } \end{array}\right\rbrack }^{\top }$ , where $\mathbf{q}$ denotes the quaternion representing its orientation, the KNODE model for the quadrotor system can be expressed as
190
+
191
+ $$
192
+ \dot{\mathbf{x}} = \widehat{f}\left( {\mathbf{x},\mathbf{u}}\right) \mathrel{\text{:=}} \widetilde{f}\left( {\mathbf{x},\mathbf{u}}\right) + {f}_{\mathbf{\theta }}\left( {\mathbf{x},\mathbf{u}}\right) , \tag{8}
193
+ $$
194
+
195
+ where $\widetilde{f}\left( {\cdot , \cdot }\right)$ is the nominal model obtained from (7) and ${f}_{\theta }\left( {\cdot , \cdot }\right)$ is the neural network.
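
For reference, a direct transcription of the nominal model (7) into code might look as follows. The scalar-first quaternion convention, the assumption that the thrust $\eta$ acts along the body $z$ axis, and the Crazyflie-like default parameter values are our own assumptions.

```python
import numpy as np

def nominal_quadrotor(x, u, m=0.032, I=np.diag([1.4e-5, 1.4e-5, 2.2e-5]), g=9.81):
    """Nominal dynamics f_tilde(x, u) from (7); x = [r, r_dot, q, omega], u = [eta, tau]."""
    r_dot, q, omega = x[3:6], x[6:10], x[10:13]
    eta, tau = u[0], u[1:4]
    w, i, j, k = q                                    # scalar-first unit quaternion
    R = np.array([                                    # body-to-world rotation matrix
        [1 - 2 * (j**2 + k**2), 2 * (i*j - k*w),       2 * (i*k + j*w)],
        [2 * (i*j + k*w),       1 - 2 * (i**2 + k**2), 2 * (j*k - i*w)],
        [2 * (i*k - j*w),       2 * (j*k + i*w),       1 - 2 * (i**2 + j**2)],
    ])
    r_ddot = np.array([0.0, 0.0, -g]) + R @ np.array([0.0, 0.0, eta]) / m        # (7a)
    q_dot = 0.5 * np.array([                          # quaternion kinematics, q_dot = 0.5 * q x [0, omega]
        -i * omega[0] - j * omega[1] - k * omega[2],
         w * omega[0] - k * omega[1] + j * omega[2],
         k * omega[0] + w * omega[1] - i * omega[2],
        -j * omega[0] + i * omega[1] + w * omega[2],
    ])
    omega_dot = np.linalg.solve(I, tau - np.cross(omega, I @ omega))             # (7b)
    return np.concatenate([r_dot, r_ddot, q_dot, omega_dot])
```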
196
+
197
+ ### 4.2 Simulation Setup
198
+
199
+ A simulation environment for the quadrotor system is implemented based on the equations of motion given in (7). An explicit Runge-Kutta method (RK45) with a sampling time of 2 milliseconds is used for numerical integration to simulate the dynamic responses of the quadrotor. The predictive controller described in Section 3.3 is assumed to have full measurements of the quadrotor state. The optimization problem (6) is implemented and solved using CasADi [21], and its solution provides control commands to the quadrotor model to generate the closed-loop responses. In these simulations, we consider circular trajectories with target speeds of 0.8, 1 and 1.2 $\mathrm{m}/\mathrm{s}$ and radii of 2, 3 and $4\mathrm{\;m}$ . Each simulation run lasts 8 seconds. To test the adaptiveness of our framework, we consider two time instances during the simulation at which we change the mass of the quadrotor mid-flight. At $t = 2$ seconds, the mass of the quadrotor is reduced by ${50}\%$ , and at $t = 5$ seconds, it is increased to ${133}\%$ of the original mass.
200
+
201
+ Benchmarks. We verify the performance of our proposed framework, KNODE-MPC-Online, by comparing against two benchmarks. First, we consider a standard nonlinear model predictive control (MPC) framework, where we use a discretized version of the nominal model $\widetilde{f}\left( {\mathbf{x},\mathbf{u}}\right)$ as the prediction model in (6b). Comparing against this benchmark provides insights into the role of the neural network in the presence of unknown residual dynamics. As a second benchmark, we consider the approach taken in [20], where the KNODE model (8) is learned offline. This approach consists of two phases: data is first collected using a nominal controller, and a KNODE model is trained using this collected data. The KNODE model is then deployed as the prediction model in (6b). We denote this approach as KNODE-MPC. By comparing our approach to KNODE-MPC, we show the effectiveness of online learning against residual dynamics and uncertainty that are possibly time-varying. In particular, to highlight the adaptive and generalization abilities of our approach, we collect the training data for KNODE-MPC such that it only accounts for the first mass change at $t = 2$ seconds, but not the second mass change at $t = 5$ seconds.
202
+
203
+ ### 4.3 Simulation Results
204
+
205
+ Figure 2 gives a comparison of our framework, KNODE-MPC-Online, against the benchmarks, MPC and KNODE-MPC. The plotted overall mean squared errors (MSE) are calculated from the difference between the time histories of the reference trajectory and the quadrotor position for each simulation run, along each axis. They are calculated element-wise and indicate the overall trajectory tracking performance for each run. As shown in the color plots, KNODE-MPC-Online provides the best overall performance across different target speeds and radii. Notably, considering the median across the runs, KNODE-MPC-Online outperforms MPC and KNODE-MPC by ${25.5}\%$ and ${39.5}\%$ respectively. KNODE-MPC-Online performs well because it is able to account for the mass changes that occur mid-flight. On the other hand, KNODE-MPC, with its KNODE model trained offline, is only able to account for the first mass change; it is unable to account for the second change, as the effects of the second mass change were not observed in its training data. Figure 3 illustrates the trajectories taken by the quadrotor when the three frameworks are deployed. The quadrotor is required to track a circular reference trajectory of radius $3\mathrm{\;m}$ , shown in red. Evidently, from both the top and side views, the KNODE-MPC-Online framework allows the quadrotor to track the reference trajectory more closely than the other two frameworks.
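
For clarity, the tracking metric can be computed as below, where `pos` and `ref` are the logged quadrotor and reference position time histories of a single run (arrays of shape T x 3); this is our reading of the element-wise MSE described above.

```python
import numpy as np

def tracking_mse(pos, ref):
    """Per-axis and overall trajectory tracking MSE for one run."""
    err = pos - ref                        # (T, 3) position error time history
    per_axis = np.mean(err**2, axis=0)     # MSE along the x, y and z axes
    overall = np.mean(err**2)              # element-wise MSE over all axes and times
    return per_axis, overall
```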
206
+
207
+ ## 5 Physical Experiments
208
+
209
+ ### 5.1 Experimental Setup
210
+
211
+ The open-source Crazyflie 2.1 quadrotor [22] is used as the platform for our physical experiments. The Crazyflie has a mass of ${0.032}\mathrm{\;{kg}}$ and a footprint of $9{\mathrm{\;{cm}}}^{2}$ . A computer with an Intel i7 CPU is used as the base station, and communication with the Crazyflie is established with the Crazyradio PA at a nominal rate of ${400}\mathrm{\;{Hz}}$ . The software architecture is implemented using the CrazyROS package [23]. Position measurements of the quadrotor are obtained with a VICON motion capture system, which communicates with the base station. Linear velocities are estimated from the positions obtained from VICON, while accelerations and angular velocities are measured by the on-board accelerometer and gyroscope. For these experiments, the Crazyflie is commanded to track a circular trajectory of radius ${0.5}\mathrm{\;m}$ at a target speed of ${0.4}\mathrm{\;m}/\mathrm{s}$ . Each run has a duration of at least 40 seconds. Similar to the mass changes in the simulations, we test the adaptiveness of the proposed framework by attaching an object of mass ${0.003}\mathrm{\;{kg}}$ (approximately 10% of the mass of the Crazyflie) mid-flight. We compare our approach with the two benchmarks described in Section 4.2. For the KNODE-MPC approach, training data is collected with the object attached, so that its effects can be accounted for in the model that is trained offline.
212
+
213
+ ![01963f75-9ec0-7c3c-abab-e0406e04c8da_6_309_202_1189_359_0.jpg](images/01963f75-9ec0-7c3c-abab-e0406e04c8da_6_309_202_1189_359_0.jpg)
214
+
215
+ Figure 2: Performance of KNODE-MPC-Online in simulations. Color maps for the overall trajectory tracking mean squared errors (MSE) for the two benchmarks, MPC and KNODE-MPC, as well as our approach, KNODE-MPC-Online, are plotted.
216
+
217
+ ![01963f75-9ec0-7c3c-abab-e0406e04c8da_6_410_731_964_464_0.jpg](images/01963f75-9ec0-7c3c-abab-e0406e04c8da_6_410_731_964_464_0.jpg)
218
+
219
+ Figure 3: Trajectory plots for MPC, KNODE-MPC and KNODE-MPC-Online (ours), from both top and side views. The reference trajectory is shown in red and the quadrotor position is initialized at $\left( {x, y, z}\right) = \left( {3,0,0}\right)$ .
220
+
221
+ ### 5.2 Results and Discussion
222
+
223
+ Results from the physical experiments are illustrated in Figure 4. The MSEs shown are defined in the same way as described in Section 4.3. They are computed over 30 seconds of flight data, starting approximately from the time when the object is attached. As shown in the figure, our proposed approach, KNODE-MPC-Online, outperforms both benchmarks in terms of the overall MSE. In particular, the overall MSE for KNODE-MPC-Online improves by 44.1% compared to nominal MPC and by 12.6% compared to KNODE-MPC. Furthermore, the results of KNODE-MPC-Online have a smaller variance than those of the benchmarks, which implies that the approach delivers consistent control performance across all runs. Since the nominal MPC framework is unable to account for the mass perturbation mid-flight, its MSEs are generally larger, especially in the $\mathrm{z}$ axis. More notably, from the MSEs for each of the axes, we observe that the offline KNODE-MPC approach has larger errors in the $\mathrm{x}$ and $\mathrm{y}$ axes and smaller errors in the $\mathrm{z}$ axis. This suggests that the KNODE model trained offline is able to compensate for the mass perturbation induced in flight, since its effect is present in the training data. However, it is unable to compensate for errors in the x-y plane, which are likely not reflected in the training data. In contrast, the KNODE-MPC-Online approach achieves more consistent MSEs across all three axes and is also able to adapt to uncertainty during flight.
224
+
225
+ ![01963f75-9ec0-7c3c-abab-e0406e04c8da_7_430_199_937_476_0.jpg](images/01963f75-9ec0-7c3c-abab-e0406e04c8da_7_430_199_937_476_0.jpg)
226
+
227
+ Figure 4: Performance of KNODE-MPC-Online in physical experiments. Statistics for the trajectory tracking mean squared errors (MSE) for the two benchmarks, nominal MPC and KNODE-MPC, and our approach, KNODE-MPC-Online, are shown. The tops of the bars denote the medians, while the ends of the error bars represent the ${25}^{\text{th }}$ and ${75}^{\text{th }}$ percentiles. The top plot depicts the overall MSE and the bottom three plots show the MSEs along the separate axes.
228
+
229
+ ## 6 Conclusion
230
+
231
+ In this work, we propose a novel and sample-efficient framework, KNODE-MPC-Online, that learns the dynamics of a quadrotor robot in an online setting. We then apply the learned KNODE model in an MPC scheme and adaptively update the dynamic model during deployment. Results from simulations and real-world experiments show that the proposed framework allows the quadrotor to adapt and compensate for uncertainty and disturbances during flight and improves the closed-loop trajectory tracking performance. Future work includes applying this framework to other robotic applications where dynamic models can be learned to achieve enhanced control performance.
232
+
233
+ ## 7 Limitations
234
+
235
+ A fundamental assumption of our framework is that the system dynamics are deterministic and continuous in time. This means our framework has limited applicability to stochastic systems. However, there are variants of NODE that model stochastic differential equations. For future work, we hope to extend our algorithm to incorporate stochastic models and improve its applicability.
236
+
237
+ Another limitation of the proposed framework is its increasing computational load as the model gets updated. As more neural networks are added to the KNODE model, the latency increases. For future work, we plan to perform network pruning and reduced-order modeling on the learned dynamic models in order to keep the computational overhead constant.
238
+
239
+ In our physical experiments, there have been occasional model update failures due to reading and writing data over the ROS1 input/output (I/O) stream. In the future, we may implement the algorithm without ROS to avoid such failure cases.
240
+
241
+ References
242
+
243
+ [1] P. Zhao, S. C. Hoi, J. Wang, and B. Li. Online transfer learning. Artificial Intelligence, 216:76-102, 2014. ISSN 0004-3702. doi:https://doi.org/10.1016/j.artint.2014.06.003. URL https://www.sciencedirect.com/science/article/pii/S0004370214000800.
244
+
245
+ [2] S. C. Hoi, D. Sahoo, J. Lu, and P. Zhao. Online learning: A comprehensive survey. Neurocomputing, 459:249-289, 2021. ISSN 0925-2312. doi:https://doi.org/10.1016/j.neucom.2021.04.112. URL https://www.sciencedirect.com/science/article/pii/S0925231221006706.
246
+
247
+ [3] O. Anava, E. Hazan, S. Mannor, and O. Shamir. Online learning for time series prediction, 2013. URL https://arxiv.org/abs/1302.6927.
248
+
249
+ [4] V. Kuznetsov and M. Mohri. Time series prediction and online learning. In V. Feldman, A. Rakhlin, and O. Shamir, editors, 29th Annual Conference on Learning Theory, volume 49 of Proceedings of Machine Learning Research, pages 1190-1213, Columbia University, New York, New York, USA, 23-26 Jun 2016. PMLR. URL https://proceedings.mlr.press/v49/kuznetsov16.html.
250
+
251
+ [5] D. Romeres, M. Zorzi, R. Camoriano, S. Traversaro, and A. Chiuso. Derivative-free online learning of inverse dynamics models. IEEE Transactions on Control Systems Technology, 28 (3):816-830, 2020. doi:10.1109/TCST.2019.2891222.
252
+
253
+ [6] R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Inf. Process. Syst., pages 6571-6583. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7892-neural-ordinary-differential-equations.pdf.
254
+
255
+ [7] J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott. Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach. Phys. Rev. Lett., 120:024102, Jan 2018. doi:10.1103/PhysRevLett.120.024102. URL https://link.aps.org/doi/10.1103/PhysRevLett.120.024102.
256
+
257
+ [8] S. Brunton, J. Proctor, and J. Kutz. Discovering governing equations from data: Sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113:3932-3937, 09 2015. doi:10.1073/pnas.1517384113.
258
+
259
+ [9] A. A. AlMomani, J. Sun, and E. Bollt. How entropic regression beats the outliers problem in nonlinear system identification. Chaos, 30:013107, 01 2020.
260
+
261
+ [10] A. Wikner, J. Pathak, B. Hunt, M. Girvan, T. Arcomano, I. Szunyogh, A. Pomerance, and E. Ott. Combining machine learning with knowledge-based modeling for scalable forecasting and subgrid-scale closure of large, complex, spatiotemporal systems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 30:053111, 05 2020. doi:10.1063/5.0005541.
262
+
263
+ [11] T. Z. Jiahao, M. A. Hsieh, and E. Forgoston. Knowledge-based learning of nonlinear dynamics and chaos. Chaos: An Interdisciplinary Journal of Nonlinear Science, 31(11):111101, 2021. doi:10.1063/5.0065617.
264
+
265
+ [12] S. Greydanus, M. Dzamba, and J. Yosinski. Hamiltonian neural networks, 2019. URL https://arxiv.org/abs/1906.01563.
266
+
267
+ [13] M. Cranmer, S. Greydanus, S. Hoyer, P. Battaglia, D. Spergel, and S. Ho. Lagrangian neural networks, 2020. URL https://arxiv.org/abs/2003.04630.
268
+
269
+ [14] M. Finzi, G. Benton, and A. G. Wilson. Residual pathway priors for soft equivariance constraints. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors,
270
+
271
+ Advances in Neural Information Processing Systems, volume 34, pages 30037-30049. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/fc394e9935fbd62c8aedc372464e1965-Paper.pdf.
272
+
273
+ [15] C. Rackauckas, Y. Ma, J. Martensen, C. Warner, K. Zubov, R. Supekar, D. Skinner, A. Ramadhan, and A. Edelman. Universal differential equations for scientific machine learning, 2020. URL https://arxiv.org/abs/2001.04385.
274
+
275
+ [16] J. Kabzan, L. Hewing, A. Liniger, and M. N. Zeilinger. Learning-based model predictive control for autonomous racing. IEEE Robotics and Automation Letters, 4(4):3363-3370, 2019.
276
+
277
+ [17] G. Torrente, E. Kaufmann, P. Föhn, and D. Scaramuzza. Data-driven MPC for quadrotors. IEEE Robotics and Automation Letters, 6(2):3769-3776, 2021.
278
+
279
+ [18] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou. Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE Int. Conf. on Robot. and Automat., pages 1714-1721. IEEE, 2017.
280
+
281
+ [19] N. O. Lambert, D. S. Drew, J. Yaconelli, S. Levine, R. Calandra, and K. S. J. Pister. Low-level control of a quadrotor with deep model-based reinforcement learning. IEEE Robotics and Automation Letters, 4(4):4224-4230, 2019. doi:10.1109/LRA.2019.2930489.
282
+
283
+ [20] K. Y. Chee, T. Z. Jiahao, and M. A. Hsieh. Knode-mpc: A knowledge-based data-driven predictive control framework for aerial robots. IEEE Robotics and Automation Letters, 7(2): 2819-2826, 2022. doi:10.1109/LRA.2022.3144787.
284
+
285
+ [21] J. A. E. Andersson, J. Gillis, G. Horn, J. B. Rawlings, and M. Diehl. CasADi - A software framework for nonlinear optimization and optimal control. Math. Program. Computation, 11 (1):1-36, 2019.
286
+
287
+ [22] Bitcraze. Crazyflie 2.1. URL https://www.bitcraze.io/products/crazyflie-2-1/.
288
+
289
+ [23] W. Hönig and N. Ayanian. Flying Multiple UAVs Using ROS, pages 83-118. Springer Int. Publishing, 2017. ISBN 978-3-319-54927-9. doi:10.1007/978-3-319-54927-9_3.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/8-8e18idYLD/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,231 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § ONLINE DYNAMICS LEARNING FOR PREDICTIVE CONTROL WITH AN APPLICATION TO AERIAL ROBOTS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: In this work, we consider the task of improving the accuracy of dynamic models for model predictive control (MPC) in an online setting. Even though prediction models can be learned and applied to model-based controllers, these models are often learned offline. In this offline setting, training data is first collected and a prediction model is learned through an elaborated training procedure. After the model is trained to a desired accuracy, it is then deployed in a model predictive controller. However, since the model is learned offline, it does not adapt to disturbances or model errors observed during deployment. To improve the adaptiveness of the model and the controller, we propose an online dynamics learning framework that continually improves the accuracy of the dynamic model during deployment. We adopt knowledge-based neural ordinary differential equations (KNODE) as the dynamic models, and use techniques inspired by transfer learning to continually improve the model accuracy. We demonstrate the efficacy of our framework with a quadrotor robot, and verify the framework in both simulations and physical experiments. Results show that the proposed approach is able to account for disturbances that are possibly time-varying, while maintaining good trajectory tracking performance.
12
+
13
+ Keywords: Online Learning, Model Learning, Model Predictive Control, Aerial Robotics
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ In recent years, model predictive control (MPC) has shown significant potential in robotic applications. As an optimization-based approach that relies on prediction models, the MPC framework leverages readily available physics models or accurate data-driven models to allow robotic systems to achieve good closed-loop performance on a variety of tasks. However, this reliance on accurate dynamic models makes it challenging for the controller to adapt to system changes or environmental uncertainties. When the robot dynamics change during deployment or disturbances are present, it becomes essential for the controller to update and adapt its dynamic model in order to maintain good performance. Recent advancements in deep learning have demonstrated promising results in using neural networks to model dynamical systems. One advantage of neural networks over physics models is that they alleviate the need for bottom-up construction of the dynamics, which often requires expert knowledge or physical intuition. Neural networks have also become increasingly fast to optimize, owing to advances in optimization algorithms. This makes neural networks more amenable to use within model predictive controllers, allowing the controllers to adapt to disturbances or system changes by refining their dynamic models. In this work, we propose a novel online dynamics learning algorithm based on a deep learning method, knowledge-based neural ordinary differential equations. Through both simulations and physical experiments, we show that our framework, which comprises a suite of algorithms, improves the adaptiveness of MPC by continually refining the dynamic model of a quadrotor system, and in turn maintains desirable closed-loop performance during deployment.
18
+
19
+ Related Work. Traditional supervised machine learning paradigms consist of two phases - training and inference. The training phase requires data to be collected in advance and then takes place in an offline fashion. After the model is trained, it is then deployed for inference. Online learning, on the other hand, performs learning with data arriving in a sequential order. Notably, online learning has been used to transfer existing knowledge to assist training $\left\lbrack {1,2}\right\rbrack$ , predict general time series data $\left\lbrack {3,4}\right\rbrack$ , and learn inverse dynamics in a derivative free manner $\left\lbrack 5\right\rbrack$ .
20
+
21
+ Learning dynamic models has gathered increasing attention owing to developments in scientific machine learning. A spectrum of techniques has emerged, ranging from black-box approaches [6, 7] to methods that require some known structure $\left\lbrack {8,9}\right\rbrack$ . In the middle of this spectrum, techniques have been developed to combine physics knowledge with machine learning $\left\lbrack {{10},{11},{12},{13}}\right\rbrack$ , and physical priors have also been incorporated into neural networks [14]. A library based on continuous-time deep learning techniques has been developed for scientific machine learning [15].
22
+
23
+ There are a number of works that use models learned from data within an MPC framework. In $\left\lbrack {{16},{17}}\right\rbrack$ , Gaussian processes are used to characterize dynamics models for MPC, with applications to autonomous vehicles and quadrotors. Another line of work that uses data-driven models within a control framework is model-based reinforcement learning (MBRL). In [18], the authors use feed-forward networks to learn a dynamic model, which is then integrated into a stochastic predictive control framework. The authors in [19] use MBRL to design low-level controllers using sensors on board a robot. In [20], the authors use a hybrid model that combines a first-principles model with a neural network. Our work is largely inspired by [20], but instead of learning the model offline, we devise a framework that allows the dynamic model to be continually improved during deployment. The motivation for online dynamics learning is to enable the model to account for uncertainty and disturbances that are not captured by the training data. To the best of the authors' knowledge, online learning techniques have not yet been developed specifically for learning dynamical systems and applied within model predictive control schemes.
24
+
25
+ § 2 PROBLEM FORMULATION
26
+
27
+ Given a robot that uses a predictive model-based controller, we aim to use data to continually improve its closed-loop performance in an online setting. Specifically, we seek to continually refine the dynamic model of the robot during deployment. Let the true model of a robot be given by
28
+
29
+ $$
30
+ \dot{\mathbf{x}} = f\left( {\mathbf{x},\mathbf{u}}\right) , \tag{1}
31
+ $$
32
+
33
+ where the function $f$ is the dynamics of the robot, the vector $\mathbf{x}$ is the state of the robot, and the vector $\mathbf{u}$ is the control input. The output of the dynamics is $\dot{\mathbf{x}}$ , the derivative of the state with respect to time. Let the time stamps of a deployment of the robot for some tasks be $T = \left\lbrack {{t}_{0},{t}_{1},\cdots ,{t}_{N}}\right\rbrack$ . The training samples generated by the true dynamics in (1) during this deployment are $\mathcal{S} = \left\lbrack {\left( {\mathbf{x}\left( {t}_{0}\right) ,\mathbf{u}\left( {t}_{0}\right) }\right) ,\left( {\mathbf{x}\left( {t}_{1}\right) ,\mathbf{u}\left( {t}_{1}\right) }\right) ,\cdots ,\left( {\mathbf{x}\left( {t}_{N}\right) ,\mathbf{u}\left( {t}_{N}\right) }\right) }\right\rbrack$ . We seek to estimate the robot dynamics given by
34
+
35
+ $$
36
+ \dot{\mathbf{x}} = \widehat{f}\left( {\mathbf{x},\mathbf{u}}\right) . \tag{2}
37
+ $$
38
+
39
+ Given some performance metric on $\mathcal{S}$ , we seek to improve it by refining $\widehat{f}$ using the streams of data collected during deployment.
40
+
41
+ § 3 ONLINE DYNAMICS LEARNING
42
+
43
+ § 3.1 KNOWLEDGE-BASED NEURAL ORDINARY DIFFERENTIAL EQUATIONS
44
+
45
+ Neural ordinary differential equations (NODE) were first proposed to approximate continuous-depth residual networks [6]. They have since been used in scientific machine learning to model a wide variety of dynamical systems [11]. Knowledge-based neural ordinary differential equations (KNODE) extend NODE by coupling first-principles knowledge with neural networks, and have been shown to learn continuous-time models with improved generalizability. Three aspects of KNODE make it a suitable candidate for our online learning task. First, it requires less data for training, which means each data collection period during robot deployment can be short, thereby improving the adaptiveness of the MPC. Second, KNODE is a continuous-time dynamic model, which is compatible with many existing MPC frameworks. Third, many robotic systems have readily available physics models that can be used as knowledge. The original KNODE was applied only to uncontrolled dynamical systems, but variants have been developed to incorporate control inputs into the dynamic models [20]. In this work, we take an approach similar to [20], where we concatenate the state and control as $\mathbf{z} = {\left\lbrack \mathbf{x},\mathbf{u}\right\rbrack }^{T}$ . During training, the control components are simply ignored when computing the loss.
46
+
47
+ < g r a p h i c s >
48
+
49
+ Figure 1: Framework overview. (a) The training process runs in parallel with the robot carrying out its tasks. The robot states and control inputs are collected for training, and the trained model is used to update the dynamic model in the controller. (b) The overall model consists of a nominal model and repeatedly added neural networks. Each neural network and the nominal model run in parallel, with their outputs coupled to give the state derivative.
50
+
51
+ Using the KNODE model, the dynamics in (2) are expressed as $\widehat{f}\left( {\mathbf{z}, t}\right) = {M}_{\psi }\left( {\widetilde{f}\left( {\mathbf{z}, t}\right) ,{f}_{\theta }\left( {\mathbf{z}, t}\right) }\right)$ , where $\widetilde{f}$ is the physics knowledge, ${f}_{\theta }$ is a neural network parametrized by $\theta$ , and ${M}_{\psi }$ is a selection matrix that couples the neural network with the knowledge. A loss function is then given by
52
+
53
+ $$
54
+ \mathcal{L}\left( {\theta ,\psi }\right) = \frac{1}{m - 1}\mathop{\sum }\limits_{{i = 1}}^{{m - 1}}{\int }_{{t}_{i}}^{{t}_{i + 1}}\delta \left( {{t}_{s} - \tau }\right) \parallel \widehat{\mathbf{x}}\left( \tau \right) - \mathbf{x}\left( \tau \right) {\parallel }^{2}{d\tau } + \mathcal{R}\left( {\theta ,\psi }\right) , \tag{3}
55
+ $$
56
+
57
+ where $\delta$ is the Dirac delta function, ${t}_{s} \in T$ is any sampling time in $T$ , and $\mathcal{R}$ is the regularization on the neural network and coupling matrix parameters. The estimated state $\widehat{\mathbf{x}}\left( \tau \right)$ in (3) comes from $\widehat{\mathbf{z}}\left( \tau \right)$ , which is given by
58
+
59
+ $$
60
+ \widehat{\mathbf{z}}\left( \tau \right) = \mathbf{z}\left( {t}_{i}\right) + {\int }_{{t}_{i}}^{\tau }\widehat{f}\left( {\mathbf{z}\left( \omega \right) ,\omega }\right) {d\omega }. \tag{4}
61
+ $$
62
+
63
+ Intuitively, $\widehat{\mathbf{z}}\left( \tau \right)$ is the state at $t = \tau$ generated using $\widehat{f}$ with the initial condition $\mathbf{z}\left( {t}_{i}\right)$ , and the loss function (3) computes the mean squared error between the one-step-ahead estimates of the states and the ground truth data. The integration in (4) is computed using numerical solvers in practice.
64
+
65
+ The optimization of the neural network parameters in KNODE can be done with either backpropagation or the adjoint sensitivity method, which is a memory-efficient alternative to backpropagation. In this work, the adjoint sensitivity method is used, similar to [6]. In the following sections, the dynamic models are all based on KNODE.
66
+
67
+ Algorithm 1 Data collection and model updates
68
+
69
+ 1: Initialize the current time, total duration, and the collection interval as ${t}_{i},{t}_{N}$ , and ${t}_{col}$
70
+
71
+ ${t}_{i} \leftarrow 0$
72
+
73
+ OnlineData $\leftarrow \left\lbrack \right\rbrack$
74
+
75
+ while ${t}_{i} < {t}_{N}$ do
76
+
77
+ if New model is available then
78
+
79
+ Controller updates new model
80
+
81
+ end if
82
+
83
+ if ${t}_{i}$ is not 0 and ${t}_{i}$ modulo ${t}_{col}$ is 0 then
84
+
85
+ Save OnlineData
86
+
87
+ OnlineData $\leftarrow \left\lbrack \right\rbrack$
88
+
89
+ end if
90
+
91
+ Robot updates state using control input
92
+
93
+ Append new robot state and control input to OnlineData
94
+
95
+ ${t}_{i} \leftarrow {t}_{i} + {dt}$
96
+
97
+ end while
98
+
99
+ Algorithm 2 Online dynamics learning
100
+
101
+ Initialize the current time and total duration as ${t}_{i}$ and ${t}_{N}$
102
+
103
+ ${t}_{i} \leftarrow 0$
104
+
105
+ while ${t}_{i} < {t}_{N}$ do
106
+
107
+ while No new data available do
108
+
109
+ Wait
110
+
111
+ end while
112
+
113
+ Train a new model with the newest data $\vartriangleright$ new neural network added each iteration
114
+
115
+ Save the trained model
116
+
117
+ end while
118
+
119
+ § 3.2 ONLINE DATA COLLECTION AND LEARNING
120
+
121
+ The basic features of the proposed online dynamics learning algorithm include the dynamic model update logic and a trainer that runs in parallel with the robot; these are described in Alg. 1 and Alg. 2, respectively.
122
+
123
+ For data collection, the robot state is repeatedly appended to a data array during deployment. This data array is saved and reset (Lines 10 and 11 of Alg. 1) at regular time intervals (Line 9 of Alg. 1). A key hyperparameter of data collection is the collection interval ${t}_{col}$ , which dictates how much data (measured in seconds) is collected in each batch for training. With a longer collection interval, the adaptiveness of the algorithm decreases, because it not only takes longer to collect data but also takes more time to train a new model. This leads to less frequent model updates for the controller, and therefore less adaptiveness to system and environment changes. With a shorter collection interval, the adaptiveness improves, but a model is more likely to overfit during training due to the smaller training data size.
124
+
125
+ When new data becomes available, a model is trained as described in Alg. 2. A challenge for online dynamic model learning is how to continually improve the model with newly collected data. When the model gets updated, the closed-loop trajectory of the controller changes accordingly and any new data collected will reflect the updated controller. As a result, the previous dynamic model needs to be preserved, and training the neural network already onboard will not serve the purpose as it alters the previous controller. To tackle this problem, we repeatedly add new neural networks in parallel with the existing model during robot deployment, as illustrated in Fig. 1 (b). Mathematically, given a nominal model $\widetilde{f}$ as the first dynamic model estimate, it is recursively augmented by
126
+
127
+ $$
128
+ {\widehat{f}}^{\left( i + 1\right) } = {M}_{{\psi }_{\left( i + 1\right) }}\left( {{\widehat{f}}^{\left( i\right) },{f}_{{\theta }_{\left( i + 1\right) }}}\right) , \tag{5}
129
+ $$
130
+
131
+ $$
132
+ {\widehat{f}}^{\left( 0\right) } = \widetilde{f},
133
+ $$
134
+
135
+ where the index $\left( {i + 1}\right)$ denotes the $\left( {i + 1}\right)$ th update to the dynamic model. As mentioned earlier, the previous controller needs to be preserved, and therefore only the parameters ${\theta }_{\left( i + 1\right) }$ and ${\psi }_{\left( i + 1\right) }$ are trained for the $\left( {i + 1}\right)$ th update, while all previously added neural networks are frozen.
136
+
137
+ Also note that after each controller update, the next set of data for training must be produced entirely by the updated controller. If a set of data is collected using a mix of old and new controllers, it must be discarded so that the next set of training data represents the most up-to-date controller. This requirement is reflected in Line 4 of Alg. 2.
138
+
139
+ § 3.3 APPLYING LEARNED MODELS IN MPC
140
+
141
+ Inspired by the framework in [20], we apply the learned dynamic model in an MPC framework. Specifically, we solve the following constrained optimization problem in a receding horizon manner,
142
+
143
+ $$
144
+ \mathop{\operatorname{minimize}}\limits_{\substack{{{x}_{0},\ldots ,{x}_{N},} \\ {{u}_{0},\ldots ,{u}_{N - 1}} }}\mathop{\sum }\limits_{{i = 1}}^{{N - 1}}\left( {{x}_{i}^{T}Q{x}_{i} + {u}_{i}^{T}R{u}_{i}}\right) + {x}_{N}^{T}P{x}_{N} \tag{6a}
145
+ $$
146
+
147
+ $$
148
+ \text{ subject to }\;{x}_{i + 1} = f\left( {{x}_{i},{u}_{i}}\right) ,\;\forall i = 0,\ldots ,N - 1\text{ , } \tag{6b}
149
+ $$
150
+
151
+ $$
152
+ {x}_{0} = x\left( t\right) , \tag{6c}
153
+ $$
154
+
155
+ $$
156
+ {x}_{i} \in \mathcal{X},\;{u}_{i} \in \mathcal{U},\;\forall i = 0,\ldots ,N - 1, \tag{6d}
157
+ $$
158
+
159
+ $$
160
+ {x}_{N} \in {\mathcal{X}}_{f}, \tag{6e}
161
+ $$
162
+
163
+ where ${x}_{i},{u}_{i}$ are the predicted states and control inputs, $N$ is the horizon and $\mathcal{X},\mathcal{U},{\mathcal{X}}_{f}$ are the state, control input and terminal state constraint sets. $f\left( {\cdot , \cdot }\right)$ in (6b) is a discretized version of the learned KNODE model, as described in Section 3.1. This model is used to predict the future states within the horizon; a more precise model gives more accurate predictions of the states, which in turn lead to more effective control actions. The weighting matrices $Q$ and $R$ penalize the states and control inputs within the cost function, and $P$ is the terminal state cost matrix. $x\left( t\right)$ is the state obtained at time step $t$ , which acts as an input to the optimization problem (6). Upon solving (6), the first element ${u}_{0}^{ * }$ of the optimal control sequence is applied to the robot as the control action. The robot moves according to this control action and generates new state measurements $x\left( {t + 1}\right)$ , which are used to solve (6) at the next time step.
164
+
165
+ § 4 SIMULATIONS
166
+
167
+ To demonstrate the efficacy of the framework in real-world robotic applications, we apply it to a quadrotor system and conduct tests in both simulations and physical experiments.
168
+
169
+ § 4.1 DYNAMICS OF A QUADROTOR SYSTEM
170
+
171
+ To apply the KNODE-MPC-Online framework, we first construct a KNODE model, by combining a nominal model derived from physics, with a neural network. For the quadrotor, the nominal model can be derived from its equations of motion,
172
+
173
+ $$
174
+ m\ddot{\mathbf{r}} = m\mathbf{g} + \mathbf{R}\eta , \tag{7a}
175
+ $$
176
+
177
+ $$
178
+ \mathbf{I}\dot{\mathbf{\omega }} = \mathbf{\tau } - \mathbf{\omega } \times \mathbf{I}\mathbf{\omega }, \tag{7b}
179
+ $$
180
+
181
+ where $\mathbf{r}$ and $\mathbf{\omega }$ are the position and angular rates of the quadrotor, $\eta ,\tau$ are the thrust and moments generated by the motors of the quadrotor. $\mathbf{g}$ is the gravity vector and $\mathbf{R}$ is the transformation matrix that maps $\eta$ to the accelerations. $m$ and $\mathbf{I}$ are the mass and inertia matrix of the quadrotor. Furthermore, by defining the state as $\mathbf{x} \mathrel{\text{ := }} {\left\lbrack \begin{array}{llll} {\mathbf{r}}^{\top } & {\dot{\mathbf{r}}}^{\top } & {\mathbf{q}}^{\top } & {\mathbf{\omega }}^{\top } \end{array}\right\rbrack }^{\top }$ and control input as $\mathbf{u} \mathrel{\text{ := }} {\left\lbrack \begin{array}{ll} \eta & {\mathbf{\tau }}^{\top } \end{array}\right\rbrack }^{\top }$ , where $\mathbf{q}$ denotes the quaternions representing its orientation, the KNODE model for the quadrotor system can then be expressed as
182
+
183
+ $$
184
+ \dot{\mathbf{x}} = \widehat{f}\left( {\mathbf{x},\mathbf{u}}\right) \mathrel{\text{ := }} \widetilde{f}\left( {\mathbf{x},\mathbf{u}}\right) + {f}_{\mathbf{\theta }}\left( {\mathbf{x},\mathbf{u}}\right) , \tag{8}
185
+ $$
186
+
187
+ where $\widetilde{f}\left( {\cdot , \cdot }\right)$ is the nominal model obtained from (7) and ${f}_{\theta }\left( {\cdot , \cdot }\right)$ is the neural network.
188
+
189
+ § 4.2 SIMULATION SETUP
190
+
191
+ A simulation environment for the quadrotor system is implemented based on the equations of motion given in (7). An explicit Runge-Kutta method (RK45) with a sampling time of 2 milliseconds is used for numerical integration to simulate dynamic responses of the quadrotor. The predictive controller described in Section 3.3 is assumed to have full measurements of the dynamics of the quadrotor. The optimization problem (6) is implemented and solved using CasADi [21], and its solution provides control commands to the quadrotor model to generate the closed-loop responses. In these simulations, we consider circular trajectories, with a range of target speeds of 0.8, 1 and 1.2 $\mathrm{m}/\mathrm{s}$ and radii of 2,3 and $4\mathrm{\;m}$ . Each simulation run lasts 8 seconds. To test the adaptiveness of our framework, we consider two time instances during the simulation where we change the mass of the quadrotor mid-flight. At time $= 2$ seconds, the mass of the quadrotor is reduced by ${50}\%$ and at time $= 5$ seconds, it is increased to ${133}\%$ of the original mass.
192
+
193
+ Benchmarks. We verify the performance of our proposed framework, KNODE-MPC-Online, by comparing against two benchmarks. First, we consider a standard nonlinear model predictive control (MPC) framework, where we use a discretized version of the nominal model $\widetilde{f}\left( {\mathbf{x},\mathbf{u}}\right)$ as the prediction model in (6b). Comparing against this benchmark provides insights on the role of the neural network, in the presence of unknown residual dynamics. As a second benchmark, we consider the approach taken in [20], where the KNODE model (8) is learned offline. This approach consists of two phases; data is first collected using a nominal controller and a KNODE model is trained using this collected data. The KNODE model is then deployed as the prediction model in (6b). We denote this approach as KNODE-MPC. By comparing our approach to KNODE-MPC, we show the effectiveness of online learning against residual dynamics and uncertainty that are possibly time-varying. In particular, to highlight the adaptive and generalization abilities of our approach, for the KNODE-MPC approach, we collect training data such that it only accounts for the first mass change at time $= 2$ seconds, but not the second mass change at time $= 5$ seconds.
194
+
195
+ § 4.3 SIMULATION RESULTS
196
+
197
+ Figure 2 gives a comparison of our framework, KNODE-MPC-Online, against the benchmarks, MPC and KNODE-MPC. The plotted overall mean squared errors (MSE) are calculated from the difference between the time histories of the reference trajectory and the quadrotor position for each simulation run, along each axis. They are calculated element-wise and indicate the overall trajectory tracking performance for each run. As shown in the color plots, KNODE-MPC-Online provides the best overall performance across different target speeds and radii. Notably, considering the median across the runs, KNODE-MPC-Online outperforms MPC and KNODE-MPC by ${25.5}\%$ and ${39.5}\%$ respectively. KNODE-MPC-Online performs well because it is able to account for the mass changes that occur mid-flight. On the other hand, KNODE-MPC, with its KNODE model trained offline, is only able to account for the first mass change; it is unable to account for the second change, as the effects of the second mass change were not observed in its training data. Figure 3 illustrates the trajectories taken by the quadrotor when the three frameworks are deployed. The quadrotor is required to track a circular reference trajectory of radius $3\mathrm{\;m}$ , shown in red. Evidently, from both the top and side views, the KNODE-MPC-Online framework allows the quadrotor to track the reference trajectory more closely than the other two frameworks.
198
+
199
+ § 5 PHYSICAL EXPERIMENTS
200
+
201
+ § 5.1 EXPERIMENTAL SETUP
202
+
203
+ The open-source Crazyflie 2.1 quadrotor [22] is used as the platform for our physical experiments. The Crazyflie has a mass of ${0.032}\mathrm{\;{kg}}$ and a footprint of $9{\mathrm{\;{cm}}}^{2}$ . A computer with an Intel i7 CPU is used as the base station, and communication with the Crazyflie is established with the Crazyradio PA at a nominal rate of ${400}\mathrm{\;{Hz}}$ . The software architecture is implemented using the CrazyROS package [23]. Position measurements of the quadrotor are obtained with a VICON motion capture system, which communicates with the base station. Linear velocities are estimated from the positions obtained from VICON, while accelerations and angular velocities are measured by the on-board accelerometer and gyroscope. For these experiments, the Crazyflie is commanded to track a circular trajectory of radius ${0.5}\mathrm{\;m}$ at a target speed of ${0.4}\mathrm{\;m}/\mathrm{s}$ . Each run has a duration of at least 40 seconds. Similar to the mass changes in the simulations, we test the adaptiveness of the proposed framework by attaching an object of mass ${0.003}\mathrm{\;{kg}}$ (approximately 10% of the mass of the Crazyflie) mid-flight. We compare our approach with the two benchmarks described in Section 4.2. For the KNODE-MPC approach, training data is collected with the object attached, so that its effects can be accounted for in the model that is trained offline.
204
+
205
+ < g r a p h i c s >
206
+
207
+ Figure 2: Performance of KNODE-MPC-Online in simulations. Color maps for the overall trajectory tracking mean squared errors (MSE) for the two benchmarks, MPC and KNODE-MPC, as well as our approach, KNODE-MPC-Online, are plotted.
208
+
209
+ < g r a p h i c s >
210
+
211
+ Figure 3: Trajectory plots for MPC, KNODE-MPC and KNODE-MPC-Online (ours), from both top and side views. The reference trajectory is shown in red and the quadrotor position is initialized at $\left( {x,y,z}\right) = \left( {3,0,0}\right)$ .
212
+
213
+ § 5.2 RESULTS AND DISCUSSION
214
+
215
+ Results from the physical experiments are illustrated in Figure 4. The MSEs shown are defined in the same way as described in Section 4.3. They are computed over 30 seconds of flight data, starting approximately from the time when the object is attached. As shown in the figure, our proposed approach, KNODE-MPC-Online, outperforms both benchmarks in terms of the overall MSE. In particular, the overall MSE for KNODE-MPC-Online improves by 44.1% compared to nominal MPC and by 12.6% compared to KNODE-MPC. Furthermore, the results of KNODE-MPC-Online have a smaller variance than those of the benchmarks, which implies that the approach delivers consistent control performance across all runs. Since the nominal MPC framework is unable to account for the mass perturbation mid-flight, its MSEs are generally larger, especially in the $\mathrm{z}$ axis. More notably, from the MSEs for each of the axes, we observe that the offline KNODE-MPC approach has larger errors in the $\mathrm{x}$ and $\mathrm{y}$ axes and smaller errors in the $\mathrm{z}$ axis. This suggests that the KNODE model trained offline is able to compensate for the mass perturbation induced in flight, since its effect is present in the training data. However, it is unable to compensate for errors in the x-y plane, which are likely not reflected in the training data. In contrast, the KNODE-MPC-Online approach achieves more consistent MSEs across all three axes and is also able to adapt to uncertainty during flight.
216
+
217
+ < g r a p h i c s >
218
+
219
+ Figure 4: Performance of KNODE-MPC-Online in physical experiments. Statistics for the trajectory tracking mean squared errors (MSE) for the two benchmarks, nominal MPC and KNODE-MPC, and our approach, KNODE-MPC-Online, are shown. The top of each bar denotes the median, while the ends of the error bars represent the 25th and 75th percentiles. The top plot depicts the overall MSE and the bottom three plots show the MSEs along the individual axes.
220
+
221
+ § 6 CONCLUSION
222
+
223
+ In this work, we propose a novel and sample-efficient framework, KNODE-MPC-Online, that learns the dynamics of a quadrotor robot in an online setting. We then apply the learned KNODE model in an MPC scheme and adaptively update the dynamic model during deployment. Results from simulations and real-world experiments show that the proposed framework allows the quadrotor to adapt and compensate for uncertainty and disturbances during flight and improves the closed-loop trajectory tracking performance. Future work includes applying this framework to other robotic applications where dynamic models can be learned to achieve enhanced control performance.
224
+
225
+ § 7 LIMITATIONS
226
+
227
+ A fundamental assumption of our framework is that the system dynamics can be described by continuous-time ordinary differential equations. This limits the applicability of our framework to stochastic systems. However, there are variants of NODE that model stochastic differential equations, and for future work, we hope to extend our algorithm to incorporate such stochastic models and thereby improve its applicability.
228
+
229
+ Another limitation of the proposed framework is its growing computational load as the model is updated: as more neural networks are added to the KNODE model, the inference latency increases. For future work, we plan to apply network pruning and reduced-order modeling to the learned dynamic models in order to keep the computational overhead constant.
230
+
231
+ In our physical experiments, there were occasional model update failures caused by reading and writing data through the ROS1 input/output (I/O) stream. In the future, we may implement the algorithm without ROS to avoid such failure cases.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/80vpxjt3vq/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,271 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Learning an Explainable Planner for Autonomous Driving
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Planning an optimal route in a complex environment requires efficient reasoning about the surrounding scene. While human drivers prioritize important objects and ignore details not relevant to the decision, learning-based planners typically extract features from dense, high-dimensional grid representations of the scene containing all vehicle and road context information. In this paper, we propose PlanT, a novel approach for planning in the context of self-driving that uses a standard transformer architecture. PlanT is based on imitation learning with a compact object-level input representation. With this representation, we demonstrate that information regarding the ego vehicle's route provides sufficient context regarding the road layout for planning. On the challenging Longest6 benchmark for CARLA, PlanT outperforms all prior methods (matching the driving score of the expert) while being ${5.3} \times$ faster than equivalent pixel-based planning baselines during inference. Furthermore, we propose an evaluation protocol to quantify the ability of planners to identify relevant objects, providing insights regarding their decision making. Our results indicate that PlanT can reliably focus on the most relevant object in the scene, even when this object is geometrically distant.
12
+
13
+ Keywords: Autonomous Driving, Transformers, Explainability
14
+
15
+ ## 1 Introduction
16
+
17
+ The ability to plan is an important aspect of human intelligence, allowing us to solve complex navigation tasks. For example, to change lanes on a busy highway, a driver must wait for sufficient space in the new lane and adjust the speed based on the expected behavior of the other vehicles. Humans quickly learn this and can generalize to new scenarios, a trait we would also like autonomous agents to have. Due to the difficulty of the planning task, the field of autonomous driving is shifting away from traditional rule-based algorithms $\left\lbrack {1,2,3,4,5,6,7,8}\right\rbrack$ towards learning-based solutions $\left\lbrack {9,{10},{11},{12},{13}}\right\rbrack$ . Learning-based planners directly map the environmental state representation (e.g., HD maps and object bounding boxes) to waypoints or vehicle controls. They emerged as a scalable alternative to rule-based planners which require significant manual effort to design.
18
+
19
+ Interestingly, while humans reason about the world in terms of objects [14, 15, 16], most existing learned planners $\left\lbrack {9,{11},{17}}\right\rbrack$ choose a high-dimensional pixel-level input representation by rendering bird's eye view (BEV) images of detailed HD maps (Fig. 1 left). It is widely believed that this kind of accurate scene understanding is key for robust self-driving vehicles, leading to significant interest in recovering pixel-level BEV information from sensor inputs [18, 19, 20, 21, 22, 23]. In this paper, we investigate whether such detailed representations are actually necessary to achieve convincing planning performance. We propose PlanT, a learning-based planner that leverages an object-level representation (Fig. 1 right) as an input to a transformer encoder [24]. We represent a scene as a set of features corresponding to (1) nearby vehicles and (2) the route the planner must follow. Each feature is low-dimensional, e.g., position, extent, and orientation (details in Section 3). We show that despite the low feature dimensionality, our model achieves state-of-the-art results using this representation. We then propose a novel evaluation scheme and metric to analyze explainability which is generally applicable to any learning-based planner. Specifically, we test the ability of a planner to identify the objects that are the most relevant to account for to plan a collision-free route.
20
+
21
+ ![01963f14-6b95-77cd-96c8-b8739cb456ed_1_310_197_1181_389_0.jpg](images/01963f14-6b95-77cd-96c8-b8739cb456ed_1_310_197_1181_389_0.jpg)
22
+
23
+ Figure 1: Scene Representations for Planning. As an alternative to the dominant paradigm of using detailed pixel-level representations for planning (left), we show the effectiveness of planners based on compact object-level representations (right).
24
+
25
+ We perform a detailed empirical analysis of learning-based planning on the Longest6 benchmark [25] of the CARLA simulator [26]. We first identify the key missing elements in the design of existing learned planners such as their incomplete field of view and sub-optimal dataset and model sizes. Addressing these issues leads to expert-level driving scores for planning with convolutional neural networks (CNNs), which is the dominant paradigm of this field, significantly advancing the state of the art. We then show the added advantages of our proposed transformer architecture, including further improvements in performance and significantly faster inference times. Finally, we show that the attention weights of the transformer, which are readily accessible, can be used to represent object relevance. Our qualitative and quantitative results on explainability confirm that PlanT attends to the objects that match our intuition for the relevance of objects for safe driving.
26
+
27
+ Contributions. (1) We demonstrate that a simple object-level representation is sufficient to encode all the information relevant for planning in urban driving environments. (2) With this representation, we significantly improve upon the previous state of the art on CARLA when we use a standard CNN architecture, and obtain further gains in performance and runtime via PlanT, our novel transformer-based approach. (3) We propose a protocol and metric for evaluating a planner's prioritization of obstacles in a scene, and show that PlanT is more explainable than CNN-based methods, i.e., the attention weights of the transformer identify the most relevant objects more reliably.
28
+
29
+ ## 2 Related Work
30
+
31
+ Intermediate Representations for Driving. Early work on decoupling end-to-end driving into two stages predicts a set of low-dimensional affordances from sensor inputs with CNNs which are then input to a rule-based planner [27]. These affordances are scene-descriptive attributes (e.g. emergency brake, red light, center-line distance, angle) that are compact, yet comprehensive enough to enable simple driving tasks, such as urban driving on the initial version of CARLA [26]. Unfortunately, methods based on affordances perform poorly on subsequent benchmarks in CARLA which involve higher task complexity [28]. Most state-of-the-art driving models instead rely heavily on annotated 2D data either as intermediate representations or auxiliary training objectives [25, 29]. Several studies show that using semantic segmentation as an intermediate representation helps for navigational tasks $\left\lbrack {{30},{31},{32},{33}}\right\rbrack$ . More recently, there has been a rapid growth in interest on using BEV semantic segmentation maps as the input representation to planners $\left\lbrack {9,{11},{29},{17}}\right\rbrack$ . To reduce the immense labeling cost of such segmentation methods, Behl et al. [34] propose visual abstractions, which are label-efficient alternatives to dense 2D semantic segmentation maps. They show that reduced class counts and the use of bounding boxes instead of pixel-accurate masks for certain classes is sufficient. Wang et al. [35] explicitly extract objects and render them into a BEV image for planning. Instead of rendering objects onto a 2D image, we keep our representation low-dimensional and compact by directly considering the set of objects as inputs to our model. In addition, instead of using a CNN-based architecture to process the rendered images, we show that using a transformer leads to improved performance, efficiency, and explainability.
32
+
33
+ Transformers for Control. Transformers obtain impressive results in several research areas [24, 36, 37, 38]. To make use of this architecture, the input needs to be represented as a sequence or set. Using transformers by describing the environment state as a sequence has been successful for
34
+
35
+ Atari games [39, 40, 41, 42, 43]. In the field of motion forecasting for driving, Gao et al. [44] show the advantages of representing an HD map as a set of vectors instead of a BEV image. Several follow-up works use this representation in combination with transformer-based architectures [45, 46]. However, they only tackle motion forecasting and do not address the downstream planning task considered by our work. Moreover, while all of these methods rely on an HD map with detailed information regarding lane markings and road boundaries as inputs, we show that the ego vehicle's route is a more compact yet sufficient representation.
36
+
37
+ Explainability. Explaining the decisions of neural networks is a rapidly evolving research field $\left\lbrack {{47},{48},{49},{50},{51},{52},{53}}\right\rbrack$ . Prior work on explaining policies focuses on reward decomposition [54], identifying the salient areas in an image using perturbations [55], approximating a deep neural network with a more interpretable model [56], or using interpretable attention mechanisms for predicting the correct action $\left\lbrack {{57},{58}}\right\rbrack$ . In the context of self-driving cars, existing work uses text [59] or heatmaps [60] to explain decisions. In contrast, we can directly obtain post hoc explanations for decisions of our learning-based PlanT architecture by considering its learned attention. Furthermore, we introduce a simple approach to measure the quality of explanations for a planner by considering the change in driving performance when only considering the object that the explanation deemed most relevant in a scene for the planner's prediction.
38
+
39
+ ## 3 Planning Transformers
40
+
41
+ In this section, we provide details about our task setup, novel scene representation, simple but effective architecture, and training strategy resulting in state-of-the-art performance.
42
+
43
+ Task. We consider the task of point-to-point navigation in an urban setting where the goal is to drive from a start to a goal location while reacting to other dynamic agents and following traffic rules. We use Imitation Learning (IL) to train the driving agent. The goal of IL is to learn a policy $\pi$ that imitates the behavior of an expert ${\pi }^{ * }$ . In our setup, the policy is a mapping $\pi : \mathcal{X} \rightarrow \mathcal{W}$ from our novel object-level input representation $\mathcal{X}$ to the future trajectory $\mathcal{W}$ of an expert driver. We describe the expert and data collection procedure in Section 4. For following traffic rules, we assume access to the traffic light state $l \in \{$ green, red $\}$ of the next traffic light relevant to the ego vehicle.
44
+
45
+ Object-level Representation. In order to drive safely, the input representation needs to encode all the task-specific information about the scene. To this end, we represent the scene as a set of objects, with vehicles and segments of the route each being assigned an oriented bounding box in BEV space (Fig. 1 right). Let $\mathcal{X} \in {\mathbb{R}}^{\left( {V + R}\right) \times A}$ be a set of $V$ vehicles and $R$ route segments with $A$ attributes each. These attributes include the position of the bounding box $\left( {{x}_{i},{y}_{i}}\right)$ relative to the ego vehicle, the extent $\left( {{w}_{i},{h}_{i}}\right)$ , the orientation ${\varphi }_{i} \in \left\lbrack {0,{2\pi }}\right\rbrack$ and a type-specific attribute ${v}_{i}$ , giving $A = 6$ dimensions. Each object can be described as a vector ${\mathbf{o}}_{i} = \left\lbrack {{x}_{i},{y}_{i},{w}_{i},{h}_{i},{\varphi }_{i},{v}_{i}}\right\rbrack$ . For a vehicle, we use ${v}_{i}$ to represent the speed. For segments of the route, ${v}_{i}$ denotes the ordering, starting from 0 for the segment closest to the ego vehicle. To obtain these route segments we subsample the dense set of points along the route provided by CARLA using the Ramer-Douglas-Peucker algorithm [61, 62]. One segment spans the area between two of these subsampled route points, with the length equal to the distance between the points and the width equal to the lane width. In addition, we restrict the maximum length of any single segment to ${L}_{\max }$ and always input a fixed number of segments ${N}_{s}$ to our policy. Since the primary focus of our work is not perception but planning, we extract the object attributes directly from the simulator. We consider objects up to a distance ${D}_{\max }$ from the ego vehicle without any further filtering. Note that our representation contains similar information as that in the widely used BEV map representation (Fig. 1 left), but only encodes the route relevant to the ego vehicle instead of the whole map. In practice, we find that this is sufficient for current CARLA benchmarks (Table 2a). More details regarding the object extraction and visualizations of the route representation are provided in the supplementary material.
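To make the representation concrete, the following is a minimal sketch of how the $(V + R) \times 6$ token matrix could be assembled in the ego frame (the helper names are illustrative; the paper obtains the route segments via the Ramer-Douglas-Peucker algorithm, which is omitted here):

```python
import numpy as np

def vehicle_token(x, y, w, h, phi, speed):
    # o_i = [x_i, y_i, w_i, h_i, phi_i, v_i]; for vehicles, v_i is the speed.
    return np.array([x, y, w, h, phi, speed], dtype=np.float32)

def route_token(x, y, w, h, phi, order):
    # For route segments, v_i is the ordering (0 = segment closest to the ego vehicle).
    return np.array([x, y, w, h, phi, order], dtype=np.float32)

def build_scene(vehicles, route_segments, d_max=30.0, n_segments=2):
    """vehicles: iterable of (x, y, w, h, phi, speed) in the ego frame.
    route_segments: list of (x, y, w, h, phi), ordered along the route.
    Returns a (V + R) x 6 array of object tokens."""
    veh = [vehicle_token(*v) for v in vehicles
           if np.hypot(v[0], v[1]) <= d_max]                        # keep objects within D_max
    route = [route_token(*seg, order=i)
             for i, seg in enumerate(route_segments[:n_segments])]  # fixed number N_s of segments
    return np.stack(veh + route, axis=0)
```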
46
+
47
+ Output. We predict the future trajectory $\mathcal{W}$ of the ego vehicle, centered at the coordinate frame of the current time-step $t$ . The trajectory is represented by a sequence of $2\mathrm{D}$ waypoints in $\mathrm{{BEV}}$ space, ${\left\{ {\mathbf{w}}_{t} = \left( {x}_{t},{y}_{t}\right) \right\} }_{t = 1}^{T}$ for $T = 4$ future time-steps. These waypoints are provided to lateral and longitudinal PID controllers to output actions. Details are provided in the supplementary material.
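The exact controller is described in the supplementary material; the sketch below only illustrates one plausible way to turn the predicted waypoints into steering and throttle with lateral and longitudinal PID controllers (the gains, the choice of target waypoint, and the assumed 0.5 s waypoint spacing are illustrative assumptions, not the paper's values):

```python
import numpy as np

class PID:
    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def waypoints_to_control(waypoints, speed, lat_pid, lon_pid):
    """waypoints: (T, 2) ego-frame predictions; speed: current speed in m/s."""
    # Lateral control: steer towards the heading of an early waypoint.
    heading_err = np.arctan2(waypoints[1][1], waypoints[1][0])
    steer = np.clip(lat_pid.step(heading_err), -1.0, 1.0)
    # Longitudinal control: derive a target speed from the waypoint spacing.
    target_speed = np.linalg.norm(waypoints[1] - waypoints[0]) / 0.5  # assumed 0.5 s between waypoints
    throttle = np.clip(lon_pid.step(target_speed - speed), 0.0, 1.0)
    return steer, throttle
```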
48
+
49
+ Planning Transformer (PlanT). Our model is illustrated in Fig. 2. The main building block for the IL policy $\pi$ is a standard transformer encoder, based on the BERT architecture [36]. While our architecture does not deviate from the original transformer implementation [24], adapting it to self-driving requires the tokenization of our proposed input representation. For this, we linearly project the attributes of each object to one token embedding. We then add the token embeddings to learnable object type embeddings indicating to which type the token belongs: [CLS], vehicle or route. The [CLS] token (based on [36, 38]) is a learnable token prepended to the input. This token's processing involves an attention-based aggregation of the features from all other tokens, and is used for generating the waypoint predictions. At the final layer of the transformer, we extract the feature vector from the [CLS] token and reduce its dimensionality via a linear layer. As in previous work [17], we concatenate the binary traffic light flag and the feature vector before passing it to an auto-regressive waypoint decoder that makes use of GRUs [63]. We input the current ego position and goal location to the GRU, which predicts $T = 4$ differential waypoints ${\left\{ \delta {\mathbf{w}}_{t}\right\} }_{t = 1}^{T}$ , where $t$ is the future time-step. With these differential waypoints we obtain the future waypoints as ${\left\{ {\mathbf{w}}_{t} = {\mathbf{w}}_{t - 1} + \delta {\mathbf{w}}_{t}\right\} }_{t = 1}^{T}$ . For a detailed description of the waypoint decoder, see [17,25].
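The following condensed PyTorch-style sketch mirrors this pipeline (token projection, learnable type embeddings, a [CLS] token, a transformer encoder, and an auto-regressive GRU decoder for differential waypoints); the layer sizes and the exact handling of the traffic-light flag and goal location are simplifying assumptions, not the released configuration:

```python
import torch
import torch.nn as nn

class PlanTSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4, n_waypoints=4):
        super().__init__()
        self.token_proj = nn.Linear(6, d_model)            # o_i = [x, y, w, h, phi, v]
        self.type_emb = nn.Embedding(3, d_model)           # 0: [CLS], 1: vehicle, 2: route
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.reduce = nn.Linear(d_model + 1, 64)           # +1 for the binary traffic-light flag
        self.gru = nn.GRUCell(input_size=4, hidden_size=64)  # input: current ego position + goal
        self.head = nn.Linear(64, 2)                       # differential waypoint delta w_t
        self.n_waypoints = n_waypoints

    def forward(self, objects, obj_types, light, goal):
        # objects: (B, V+R, 6), obj_types: (B, V+R) in {1, 2}, light: (B, 1), goal: (B, 2)
        x = self.token_proj(objects) + self.type_emb(obj_types)
        cls = self.cls.expand(x.size(0), -1, -1) + self.type_emb(torch.zeros_like(obj_types[:, :1]))
        z = self.encoder(torch.cat([cls, x], dim=1))[:, 0]   # feature of the [CLS] token
        h = self.reduce(torch.cat([z, light], dim=-1))
        wp, wps = objects.new_zeros(objects.size(0), 2), []
        for _ in range(self.n_waypoints):                  # auto-regressive GRU decoding
            h = self.gru(torch.cat([wp, goal], dim=-1), h)
            wp = wp + self.head(h)                         # w_t = w_{t-1} + delta w_t
            wps.append(wp)
        return torch.stack(wps, dim=1)                     # (B, T, 2) future waypoints
```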
50
+
51
+ ![01963f14-6b95-77cd-96c8-b8739cb456ed_3_318_200_1161_553_0.jpg](images/01963f14-6b95-77cd-96c8-b8739cb456ed_3_318_200_1161_553_0.jpg)
52
+
53
+ Figure 2: Planning Transformer (PlanT). We represent a scene (bottom left) using a set of objects containing the vehicles and route to follow (green arrows). We embed these via a linear projection (bottom right) and process them with a transformer encoder. PlanT outputs future waypoints with a GRU decoder. We use a self-supervised auxiliary task of predicting the future of other vehicles. Further, extracting and visualizing the attention weights yields an explainable decision (top left).
54
+
55
+ Training. Following recent driving models [9,29,25], we leverage the ${L}_{1}$ loss to ground truth future waypoints ${\mathbf{w}}^{gt}$ as our main training objective. Besides this, we propose the auxiliary task of predicting the future attributes of other vehicles. This is aligned with the overall driving goal in two ways. (1) The ability to reason about the future of other vehicles is important in an urban environment as it heavily influences the ego vehicle's own future. (2) Our main task is to predict the ego vehicle's future trajectory, which means the output feature of the transformer needs to encode all the information necessary to predict the future. Supervising the outputs of all vehicles on a similar task (i.e., predicting the location, pose and velocity of a future time-step) exploits synergies between the task of the ego vehicle and the other vehicles [64, 29]. We input the vehicle tokens for a given time-step ${T}_{in}$ and predict class probabilities $\widehat{\mathbf{p}}$ for every attribute of each vehicle of a future time-step ${T}_{in} + {\delta t}$ using a linear layer per attribute type on top of the corresponding output embedding. We choose to discretize the output attributes to allow uncertainty in the predictions since the future is non-deterministic. This is also better aligned with how humans drive without predicting exact locations and velocities, where a rough estimate is sufficient to make a safe decision. We calculate the cross-entropy loss ${\mathcal{L}}_{CE}$ using the one-hot encoded representation of the attribute ${\mathbf{p}}^{gt}$ as the ground truth class label. We train the model in a multi-task setting using a weighted combination of these loss functions with a weighting factor $\lambda$ :
56
+
57
+ $$
58
+ \mathcal{L} = \underbrace{\frac{1}{T}\sum_{t=1}^{T} \left\| \mathbf{w}_{t} - \mathbf{w}_{t}^{gt} \right\|_{1}}_{\mathcal{L}^{\text{waypoints}}} + \underbrace{\frac{\lambda}{V}\sum_{a=1}^{A}\sum_{i=1}^{V} \mathcal{L}_{CE}\left( \widehat{\mathbf{p}}_{a,i}, \mathbf{p}_{a,i}^{gt} \right)}_{\mathcal{L}^{\text{vehicles}}}. \tag{1}
59
+ $$
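A minimal sketch of Eq. (1) in code, assuming the per-vehicle, per-attribute class logits are stacked into a single tensor (here $\lambda$ is applied outside the vehicle term, which is equivalent):

```python
import torch
import torch.nn.functional as F

def plant_loss(pred_wp, gt_wp, pred_logits, gt_classes, lam=0.2):
    """pred_wp, gt_wp: (T, 2) predicted and ground-truth waypoints.
    pred_logits: (V, A, C) class logits per vehicle and attribute.
    gt_classes: (V, A) integer class labels of the future attributes."""
    # L1 waypoint loss, averaged over the T future time-steps.
    l_waypoints = (pred_wp - gt_wp).abs().sum(dim=-1).mean()
    # Cross-entropy over all (attribute, vehicle) pairs, normalized by V.
    V, A, C = pred_logits.shape
    l_vehicles = F.cross_entropy(pred_logits.reshape(V * A, C),
                                 gt_classes.reshape(V * A),
                                 reduction="sum") / V
    return l_waypoints + lam * l_vehicles
```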
60
+
61
+ ## 4 Experiments
62
+
63
+ In this section, we describe our experimental setup, evaluate the driving performance of our approach, analyze the explainability of its driving decisions, and finally discuss limitations.
64
+
65
+ Dataset and Benchmark. We use the expert, dataset and evaluation benchmark Longest 6 proposed by [25]. The expert policy is a rule-based algorithm with access to ground truth locations of the vehicles as well as privileged information that is not available to PlanT such as their actions and dynamics. Using this information, the expert determines the future position of all vehicles and estimates intersections between its own future position and those of the other vehicles to prevent most collisions. The dataset collected with this expert contains ${228}\mathrm{k}$ frames. We use this as our reference point denoted by $1 \times$ . For our analysis, we also generate additional data following [25], but use different seeds to obtain different initializations of the traffic. The data quantities we use are always relative to the original dataset (i.e. $2 \times$ contains double the data, $3 \times$ contains triple). We refer the reader to [25] for a detailed description of the expert algorithm and dataset collection.
66
+
67
+ Metrics. We report the established metrics of the CARLA leaderboard [65]: Route Completion (RC), Infraction Score (IS), and Driving Score (DS), which is the weighted average of the RC and IS. In addition, we show Collisions with Vehicles per kilometer (CV) and Inference Time (IT) for one forward pass of the model, measured in milliseconds on a single RTX 3080 GPU. A detailed description of the metrics is included in the supplementary material.
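As a small illustration, the DS can be obtained by weighting each route's completion with its infraction score and averaging over routes (a sketch consistent with the CARLA leaderboard definition; the exact aggregation is described in the supplementary material):

```python
def driving_score(route_completions, infraction_scores):
    """route_completions: per-route RC values; infraction_scores: per-route IS in [0, 1].
    Returns the DS as the average infraction-weighted route completion."""
    per_route = [rc * inf for rc, inf in zip(route_completions, infraction_scores)]
    return sum(per_route) / len(per_route)
```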
68
+
69
+ Baselines. To highlight the advantages of learning-based planning, we include a rule-based planning baseline that uses the same inputs as PlanT. It follows the same high-level algorithm as the expert but estimates the future of other vehicles using a constant speed assumption since it does not have access to their actions. AIM-BEV [17] is a recent privileged agent trained using IL. It uses a BEV semantic map input with channels for the road, lane markings, vehicles, and pedestrians, and a GRU identical to PlanT to predict a trajectory for the ego vehicle which is executed using lateral and longitudinal PID controllers. Roach [11] is a Reinforcement Learning (RL) based agent with a similar input representation as AIM-BEV that directly outputs driving actions. Roach and AIM-BEV are the closest existing methods to PlanT. However, they use a different input field of view in their representation leading to sub-optimal performance. We additionally build PlanCNN, a more competitive CNN-based approach for planning with the same training data and input information as PlanT, which is adapted from AIM-BEV to input a rasterized version of our object-level representation. We render the oriented vehicle bounding boxes in one channel, represent the speed of each pixel in a second channel, and render the oriented bounding boxes of the route in the third channel. We provide detailed descriptions of the baselines in the supplementary material.
70
+
71
+ Implementation. Our analysis includes three BERT encoder variants taken from [66]: MINI, SMALL, and MEDIUM with 11.2M, 28.8M, and 41.4M parameters, respectively. For PlanCNN, we experiment with two backbones: ResNet-18 and ResNet-34. We choose these architectures to maintain an IT which enables real-time execution. We train the models from scratch on 4 RTX 2080Ti GPUs with a total batch size of 128. Optimization is done with AdamW [67] for 47 epochs with an initial learning rate of $10^{-4}$, which we decay by 0.1 after 45 epochs. Training takes approximately 2.8, 3.4, and 4 hours for the three BERT variants on the $1\times$ dataset. We set the weight decay to 0.1 and clip the gradient norm at 1.0. For the auxiliary objective, we use quantization precisions of $0.5~\mathrm{m}$ for the position and extent, $3.75~\mathrm{km/h}$ for the speed, and $11.25^{\circ}$ for the orientation of the vehicles. We use $T_{in} = 0$ and $\delta t = 1$ for auxiliary supervision. The loss weight $\lambda$ is set to 0.2. By default, we use $D_{\max} = 30~\mathrm{m}$, $N_s = 2$, and $L_{\max} = 30~\mathrm{m}$. Additional details regarding the model configurations as well as detailed ablation studies on the multi-task training and input representation hyperparameters are provided in the supplementary material.
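The stated optimization setup can be summarized as follows (a sketch; `model`, `loader`, and `loss_fn` are placeholders, with `loss_fn` assumed to compute Eq. (1)):

```python
import torch

def train_plant(model, loader, loss_fn, epochs=47):
    """Training loop mirroring the stated hyperparameters; model/loader/loss_fn are placeholders."""
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.1)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=45, gamma=0.1)  # decay lr by 0.1 after 45 epochs
    for _ in range(epochs):
        for batch in loader:
            loss = loss_fn(model, batch)                   # Eq. (1) with lambda = 0.2
            opt.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # clip gradient norm at 1.0
            opt.step()
        sched.step()
```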
72
+
73
+ ### 4.1 Obtaining Expert-Level Driving Performance
74
+
75
+ In the following, we discuss the key findings of our study which enable expert-level driving with learned planners. Unless otherwise specified, the experiments consider the largest version of our dataset $\left( {3 \times }\right)$ and models (MEDIUM for PlanT, ResNet-34 for PlanCNN).
76
+
77
+ We first analyze LAV [29] and TransFuser [25], which are recent state-of-the-art sensor-based methods on CARLA. Since PlanT accesses additional input information, it is not directly comparable to these baselines. However, we include them to put our results into perspective and to demonstrate the
78
+
79
| Method | Input | DS ↑ | RC ↑ | IS ↑ | CV ↓ | IT ↓ |
|---|---|---|---|---|---|---|
| Rule-based | Obj. + Route | 29.09 ± 2.12 | 38.00 ± 1.64 | 0.84 ± 0.00 | 0.64 ± 0.07 | - |
| LAV* [29] | Camera + LiDAR | 32.74 ± 1.45 | 70.36 ± 3.14 | 0.51 ± 0.02 | 0.84 ± 0.11 | - |
| TransFuser [25] | Camera + LiDAR | 47.30 ± 5.72 | 93.38 ± 1.20 | 0.50 ± 0.60 | 2.44 ± 0.64 | - |
| AIM-BEV* [17] | Rast. Obj. + HD Map | 45.06 ± 1.68 | 78.31 ± 1.12 | 0.55 ± 0.01 | 1.67 ± 0.16 | 18.14 ± 1.08 |
| Roach* [11] | Rast. Obj. + HD Map | 55.27 ± 1.43 | 88.16 ± 1.52 | 0.62 ± 0.02 | 0.76 ± 0.07 | 3.24 ± 0.15 |
| PlanCNN | Rast. Obj. + Rast. Route | 77.47 ± 1.34 | 94.53 ± 2.59 | 0.81 ± 0.03 | 0.43 ± 0.05 | 28.94 ± 1.24 |
| PlanT | Obj. + Route | **81.36** ± 6.54 | 93.55 ± 2.62 | 0.87 ± 0.05 | 0.31 ± 0.12 | 10.79 ± 0.47 |
| Expert [25] | Obj. + Route + Actions | 76.91 ± 2.23 | 88.67 ± 0.56 | 0.86 ± 0.03 | 0.28 ± 0.06 | - |
80
+
81
+ Table 1: Longest6 Results. We show the mean $\pm$ std for 3 evaluations. PlanT reaches expert-level performance and requires significantly less inference time than the baselines. *We evaluate the author-provided pre-trained models for LAV, AIM-BEV, and Roach.
82
+
83
+ potential gain for future sensor-based models. The results are shown in Table 1. Interestingly, both sensor-based methods outperform the rule-based planning algorithm despite their lack of ground-truth object and route information, showing the limitations of rule-based systems in complex environments. In contrast, PlanT is over 30 points better than TransFuser in terms of DS (81.36 vs. 47.30), combining the high IS of the rule-based approach and high RC of TransFuser. Importantly, this improvement is achieved using the same expert policy and IL-based training objective as TransFuser. Additionally, TransFuser incorporates manually designed heuristics in its controller to creep forward if the policy gets stuck without cause [25], which are unnecessary for PlanT.
84
+
85
+ Input Representation. In Table 1, we observe that both PlanCNN and PlanT significantly outperform AIM-BEV [17] and Roach [11]. We systematically break down the factors leading to this in Table 2a by studying the following: (1) the representation used for the road layout, (2) the horizontal field of view, (3) whether objects behind the ego vehicle are part of the representation, and (4) whether the input representation incorporates speed.
86
+
87
+ Roach uses the same maximum distance to the sides of the vehicle as AIM-BEV but additionally includes $8\mathrm{\;m}$ to the back and multiple input frames to reason about speed. We see in Table 2a that training PlanCNN in a configuration close to Roach (with the key differences being the removal of details from the map and a $0\mathrm{\;m}$ back view) results in a higher DS (59.97 vs. 55.27), showing that it is possible to replace the lane marking and road boundary information in the HD map with our compact representation. Increasing the side view from 19.2 to ${30}\mathrm{\;m}$ improves PlanCNN from 59.97 to 70.72 (Table 2a). This indicates that reasoning about distant vehicles is important. Furthermore, including vehicles to the rear further boosts PlanCNN's DS to 77.47. This also holds true for PlanT, where adding the rear improves the DS from 72.86 to 81.36. These results show that a full ${360}^{ \circ }$ field of view is helpful to handle certain situations encountered during our evaluation (e.g. changing lanes). Finally, removing the vehicle speed input significantly reduces the DS for both PlanCNN and PlanT (Table 2a), showing the importance of cues regarding motion.
88
+
89
+ PlanT vs. PlanCNN. In Table 2b, we show the impact of scaling the dataset size and the model size for PlanT and PlanCNN. The circle size indicates the inference time (IT) needed for that model. First, we observe that PlanT demonstrates better data efficiency than PlanCNN, e.g., using the $1 \times$ data setting is sufficient to reach the same performance as PlanCNN with $2 \times$ . Interestingly, scaling the data from $1 \times$ to $3 \times$ leads to expert-level performance with all the models other than PlanCNN with ResNet-18, showing the effectiveness of scaling. In fact, PlanT ${}_{\text{MEDIUM }}\left( {81.36}\right)$ outperforms the expert (76.91) in some evaluation runs. We visualize one consistent failure mode of the expert that leads to this discrepancy in Fig. 3a. We observe that the expert sometimes stops once it has already entered an intersection if it anticipates a collision, which then leads to collisions or blocked traffic. On the other hand, PlanT learns to wait further outside an intersection before entering which is a smoother function than the discrete rule-based expert, and subsequently avoids these infractions. Importantly, in our final setting, Plan ${\mathrm{T}}_{\mathrm{{MEDIUM}}}$ is around $3 \times$ as fast as PlanCNN while being 4 points better in terms of the DS, and Plan ${\mathrm{T}}_{\mathrm{{MINI}}}$ is ${5.3} \times$ as fast (IT=5.46 ms) while reaching the same DS as PlanCNN. This shows that PlanT is suitable for systems where fast inference time is a requirement. To demonstrate the reproducibility of our results with different initializations, we report driving scores of PlanCNN and PlanT with multiple training seeds in the supplementary material.
90
+
91
+ Loss. A detailed study of the training strategy for PlanT can be found in the supplementary material, where we show that the auxiliary loss proposed in Eq. (1) is crucial to its performance. However, since this is a self-supervised objective, it can be incorporated without additional annotation costs.
92
+
93
| Method | Map | Side (m) | Back (m) | Speed | DS ↑ |
|---|---|---|---|---|---|
| AIM-BEV [17] | ✓ | 19.2 | 0 | - | 45.06 ± 1.68 |
| Roach [11] | ✓ | 19.2 | 8 | ✓ | 55.27 ± 1.43 |
| PlanCNN | - | 19.2 | 0 | ✓ | 59.97 ± 4.47 |
| PlanCNN | - | 30 | 0 | ✓ | 70.72 ± 2.99 |
| PlanCNN | - | 30 | 30 | ✓ | 77.47 ± 1.34 |
| PlanCNN | - | 30 | 30 | - | 69.13 ± 1.43 |
| PlanT | - | 30 | 0 | ✓ | 72.86 ± 5.56 |
| PlanT | - | 30 | 30 | ✓ | 81.36 ± 6.54 |
| PlanT | - | 30 | 30 | - | 72.34 ± 3.30 |
94
+
95
+ (a) Input Representation. DS on Longest6 (3 evaluation runs) with different input properties.
96
+
97
+ ![01963f14-6b95-77cd-96c8-b8739cb456ed_6_903_257_567_443_0.jpg](images/01963f14-6b95-77cd-96c8-b8739cb456ed_6_903_257_567_443_0.jpg)
98
+
99
+ Table 2: We investigate the choices of the input representation (Table 2a) and architecture (Table 2b) for learning-based planners. Including vehicles to the back of the ego vehicle, encoding vehicle speeds, and scaling to large models/datasets is crucial for performance of both PlanCNN and PlanT.
100
+
101
+ This is in line with recent findings on training transformers that show the effectiveness of supervising multiple output tokens instead of just a single [CLS] token [68].
102
+
103
+ ### 4.2 Explainability: Identification of Most Relevant Objects
104
+
105
+ Next, we investigate the explainability of PlanT and PlanCNN by analyzing the objects in the scene that are relevant and crucial for the agent's decision. In particular, we measure the relevance of an object in terms of the learned attention for PlanT and by considering the impact that the removal of each object has on the output predictions for PlanCNN. To quantify the ability to reason about the most relevant objects, we propose a novel evaluation scheme together with the Relative Filtered Driving Score (RFDS). For the rule-based expert algorithm, collision avoidance depends on a single vehicle which it identifies as the reason for braking. To measure the RFDS of a learned planner, we run one forward pass of the planner (without executing the actions) to obtain a scalar relevance score for each vehicle in the scene. We then execute the expert algorithm while restricting its observations to the (single) vehicle with the highest relevance score. The RFDS is defined as the relative DS of this restricted version of the expert compared to the default version which checks for collisions against all vehicles. We describe the extraction of the relevance score for PlanT and PlanCNN in the following. Our protocol leads to a fair comparison of different agents as the RFDS does not depend on the ability to drive itself but only on the obtained ranking of object relevance.
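A sketch of how the relevance score could be extracted from PlanT, assuming the model exposes per-layer attention maps with the [CLS] token at index 0 followed by the vehicle tokens (tensor shapes and names are assumptions):

```python
import torch

def vehicle_relevance(attn_maps, n_vehicles):
    """attn_maps: list over layers of (heads, N, N) attention weights, where token 0
    is [CLS] and tokens 1..n_vehicles are the vehicle tokens.
    Returns one scalar relevance score per vehicle."""
    cls_attn = torch.stack([a[:, 0, :] for a in attn_maps])   # attention from [CLS]: (layers, heads, N)
    scores = cls_attn.sum(dim=(0, 1))                         # add over all layers and heads
    return scores[1:1 + n_vehicles]                           # keep only the vehicle tokens

def most_relevant_vehicle(attn_maps, n_vehicles):
    return int(torch.argmax(vehicle_relevance(attn_maps, n_vehicles)))
```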
106
+
107
+ Baselines. As a naïve baseline, we consider the inverse distance to the ego vehicle as a vehicle's relevance score, such that the expert only sees the closest vehicle. For PlanT, we extract the relevance score by adding the attention weights of all layers and heads for the [CLS] token. This only requires a single forward pass of PlanT. Since PlanCNN does not use attention, we choose a masking method to find the most salient region in the image, using the same principle as $\left\lbrack {{51},{50},{49}}\right\rbrack$ . We remove one object at a time from the input image and compute the ${L}_{1}$ distance to the predicted waypoints for the full image. The objects are then ranked based on how much their absence affects the total distance. In the supplementary material, we provide additional results using GradCAM [47] adapted for PlanCNN, which we found to not work as well as the proposed masking approach.
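For PlanCNN, the masking-based score described above might look as follows, assuming a hypothetical `render` helper that rasterizes the scene with an optional object left out:

```python
import torch

def masking_relevance(plancnn, scene, render):
    """Rank vehicles by how much removing them changes the predicted waypoints.

    scene: list of vehicle objects; render(scene, skip=i) rasterizes the BEV input
    with vehicle i removed (render is an assumed helper, not part of the paper's code).
    """
    with torch.no_grad():
        full_wp = plancnn(render(scene))                      # waypoints from the full image
        scores = []
        for i in range(len(scene)):
            wp = plancnn(render(scene, skip=i))               # waypoints with object i removed
            scores.append((wp - full_wp).abs().sum().item())  # L1 change in the prediction
    return scores  # higher score = more relevant object
```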
108
+
109
+ Results. We provide results for the reasoning of the planners about relevant objects in Table 3. Both planners significantly outperform the distance-based baseline, with PlanT obtaining a mean RFDS of 96.82 compared to 82.83 for PlanCNN. We show qualitative examples in Figure $3\mathrm{\;b}$ where we highlight the vehicle with the highest relevance score using a red bounding rectangle. Both planners correctly identify the most important object in simple scenarios when only a few vehicles are in the field of view or the relevant vehicle is the only one close to the ego vehicle's path. However, PlanT is also able to correctly identify the most important object in complex scenes. When merging into a lane (examples 1 & 2 from the left) it correctly looks at the moving vehicles coming from the rear to avoid collisions. Example 3 shows advanced reasoning about dynamics. The two vehicles closer to the ego vehicle are moving away from the intended route at a high speed and are therefore not as relevant for planning. PlanT already pays attention to the more distant vehicle behind them as this is the one that it would collide with if it does not brake. One of the failures of PlanT we observe is that it sometimes allocates the highest attention to a very close vehicle behind itself (example 4) and misses the relevant object. PlanCNN has more prominent errors when there are a large number of vehicles in the scene, but nothing directly ahead, such as problems attending to the relevant object when merging lanes (examples 2 & 3). To better assess the driving performance and relevance scores we provide additional results in the supplementary video.
110
+
111
| Method | RFDS ↑ |
|---|---|
| Inverse Distance | 29.13 ± 0.54 |
| PlanCNN + Masking | 82.83 ± 6.79 |
| PlanT + Attention | 96.82 ± 2.12 |
112
+
113
+ Table 3: RFDS. Relative score of the expert when only observing the most relevant vehicle according to the respective planner.
114
+
115
+ ![01963f14-6b95-77cd-96c8-b8739cb456ed_7_305_200_1174_481_0.jpg](images/01963f14-6b95-77cd-96c8-b8739cb456ed_7_305_200_1174_481_0.jpg)
116
+
117
+ Figure 3: We contrast a failure case of the expert to PlanT (Fig. 3a) and show the quality of the relevance scores (Fig. 3b). The ego vehicle is marked with a yellow triangle, vehicles that either lead to collisions or are intuitively the most relevant in the scene are marked with a blue box.
118
+
119
+ ### 4.3 Limitations
120
+
121
+ Our study has several limitations. Firstly, since planning is a challenging task in itself and the capabilities of learning-based planners are not fully understood, we restrict the scope of our experiments to exclude the task of perception from sensor inputs. Integrating PlanT with a state-of-the-art perception system for object detection and tracking is a promising next step. Second, the expert driver used in our IL-based training strategy does not achieve a perfect score in our evaluation setting (Table 1) and has certain consistent failure modes (Fig. 3a, more examples can be found in the supplementary material). While a human driver may be able to achieve higher driving scores, human data collection would be time-consuming (the $3 \times$ dataset used in our experiments contains around 95 hours of driving). Instead, we use the best publicly available algorithmic expert, which is a standard practice for IL on CARLA [69, 70, 34, 18]. Finally, all our experiments are conducted in simulation. Real-world scenarios are more diverse and challenging. However, CARLA is a high-fidelity simulator, and previous findings demonstrate that systems developed in simulators like CARLA can be transferred to the real world [30, 71, 72].
122
+
123
+ ## 5 Conclusion
124
+
125
+ In this work, we take a step towards efficient, high-performance, explainable planning for autonomous driving with a novel object-level representation and transformer-based architecture called PlanT. Based on our experiments, we find that it is possible to replace dense, high-dimensional feature grids including HD map information with a simple representation of the ego vehicle route without a drop in performance. We show that incorporating a ${360}^{ \circ }$ field of view, information about vehicle speeds, and scaling up both the architecture and dataset size of a learned planner are essential to achieve state-of-the-art results on the CARLA simulator. Additionally, we show that PlanT can reliably identify the most relevant object in the scene via a new metric and evaluation protocol that focus on explainability.
126
+
127
+ ## References
128
+
129
+ [1] C. Thorpe, M. H. Hebert, T. Kanade, and S. A. Shafer. Vision and navigation for the carnegie-mellon navlab. PAMI, 10(3):362-372, May 1988.
130
+
131
+ [2] E. D. Dickmanns, R. Behringer, D. Dickmanns, T. Hildebrandt, M. Maurer, F. Thomanek, and J. Schiehlen. The seeing passenger car 'vamors-p'. In IV, 1994.
132
+
133
+ [3] J. J. Leonard, J. P. How, S. J. Teller, M. Berger, S. Campbell, G. A. Fiore, L. Fletcher, E. Frazzoli, A. S. Huang, S. Karaman, O. Koch, Y. Kuwata, D. Moore, E. Olson, S. Peters, J. Teo, R. Truax, M. R. Walter, D. Barrett, A. Epstein, K. Maheloni, K. Moyer, T. Jones, R. Buckley, M. E. Antone, R. Galejs, S. Krishnamurthy, and J. Williams. A perception-driven autonomous urban vehicle. JFR, 25(10):727-774, 2008.
134
+
135
+ [4] W. Xu, J. Pan, J. Wei, and J. M. Dolan. Motion planning under uncertainty for on-road autonomous driving. In ICRA, 2014.
136
+
137
+ [5] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli. A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Transactions on Intelligent Vehicles, 2016.
138
+
139
+ [6] H. Fan, F. Zhu, C. Liu, L. Zhang, L. Zhuang, D. Li, W. Zhu, J. Hu, H. Li, and Q. Kong. Baidu apollo EM motion planner. arXiv.org, 1807.08048, 2018.
140
+
141
+ [7] A. Sadat, S. Casas, M. Ren, X. Wu, P. Dhawan, and R. Urtasun. Perceive, predict, and plan: Safe motion planning through interpretable semantic representations. In ECCV, 2020.
142
+
143
+ [8] A. Cui, A. Sadat, S. Casas, R. Liao, and R. Urtasun. Lookout: Diverse multi-future prediction and planning for self-driving. In ICCV, 2021.
144
+
145
+ [9] D. Chen, B. Zhou, V. Koltun, and P. Krähenbühl. Learning by cheating. In CoRL, 2019.
146
+
147
+ [10] W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, and R. Urtasun. End-to-end interpretable neural motion planner. In CVPR, 2019.
148
+
149
+ [11] Z. Zhang, A. Liniger, D. Dai, F. Yu, and L. Van Gool. End-to-end urban driving by imitating a reinforcement learning coach. In ICCV, 2021.
150
+
151
+ [12] O. Scheel, L. Bergamini, M. Wolczyk, B. Osiński, and P. Ondruska. Urban driver: Learning to drive from real-world demonstrations using policy gradients. In CoRL, 2021.
152
+
153
+ [13] M. Vitelli, Y. Chang, Y. Ye, M. Wolczyk, B. Osiński, M. Niendorf, H. Grimmett, Q. Huang, A. Jain, and P. Ondruska. Safetynet: Safe planning for real-world self-driving vehicles using machine-learned policies. arXiv.org, 2109.13602, 2021.
154
+
155
+ [14] D. Marr. Vision: A computational investigation into the human representation and processing of visual information. W. H. Freeman, San Francisco, 1982.
156
+
157
+ [15] E. S. Spelke and K. D. Kinzler. Core knowledge. Developmental Science, 10(1):89-96, 2007. doi:https://doi.org/10.1111/j.1467-7687.2007.00569.x.
158
+
159
+ [16] S. P. Johnson. Object perception. In Oxford Research Encyclopedia of Psychology. Oxford University Press, 2018.
160
+
161
+ [17] N. Hanselmann, K. Renz, K. Chitta, A. Bhattacharyya, and A. Geiger. King: Generating safety-critical driving scenarios for robust imitation via kinematics gradients. arXiv.org, 2204.13683, 2022.
162
+
163
+ [18] K. Chitta, A. Prakash, and A. Geiger. Neat: Neural attention fields for end-to-end autonomous driving. In ICCV, 2021.
164
+
165
+ [19] A. Saha, O. Mendez Maldonado, C. Russell, and R. Bowden. Translating Images into Maps. In ICRA, 2022.
166
+
167
+ [20] B. Zhou and P. Krähenbühl. Cross-view Transformers for real-time Map-view Semantic Segmentation. In CVPR, 2022.
168
+
169
+ [21] L. Peng, Z. Chen, Z. Fu, P. Liang, and E. Cheng. BEVSegFormer: Bird's Eye View Semantic Segmentation From Arbitrary Camera Rigs. arXiv.org, 2203.04050, 2022.
170
+
171
+ [22] E. Xie, Z. Yu, D. Zhou, J. Philion, A. Anandkumar, S. Fidler, P. Luo, and J. M. Alvarez. M$^2$BEV: Multi-Camera Joint 3D Detection and Segmentation with Unified Birds-Eye View Representation. arXiv.org, 2204.05088, 2022.
172
+
173
+ [23] Z. Li, W. Wang, H. Li, E. Xie, C. Sima, T. Lu, Q. Yu, and J. Dai. BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers. arXiv.org, 2203.17270, 2022.
174
+
175
+ [24] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In NeurIPS, pages 5998-6008, 2017.
176
+
177
+ [25] K. Chitta, A. Prakash, B. Jaeger, Z. Yu, K. Renz, and A. Geiger. Transfuser: Imitation with transformer-based sensor fusion for autonomous driving. arXiv.org, 2205.15997, 2022.
178
+
179
+ [26] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. CARLA: An open urban driving simulator. In CoRL, 2017.
180
+
181
+ [27] C. Chen, A. Seff, A. L. Kornhauser, and J. Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In ICCV, pages 2722-2730, 2015.
182
+
183
+ [28] F. Codevilla, E. Santana, A. M. López, and A. Gaidon. Exploring the limitations of behavior cloning for autonomous driving. In ICCV, 2019.
184
+
185
+ [29] D. Chen and P. Krähenbühl. Learning from all vehicles. In CVPR, 2022.
186
+
187
+ [30] M. Müller, A. Dosovitskiy, B. Ghanem, and V. Koltun. Driving policy transfer via modularity and abstraction. In CoRL, 2018.
188
+
189
+ [31] A. Sax, B. Emi, A. R. Zamir, L. J. Guibas, S. Savarese, and J. Malik. Learning to navigate using mid-level visual priors. In CoRL, 2019.
190
+
191
+ [32] A. Mousavian, A. Toshev, M. Fiser, J. Kosecká, A. Wahid, and J. Davidson. Visual representations for semantic target driven navigation. In ICRA, 2019.
192
+
193
+ [33] B. Zhou, P. Krähenbühl, and V. Koltun. Does computer vision matter for action? Science Robotics, 4(30), 2019.
194
+
195
+ [34] A. Behl, K. Chitta, A. Prakash, E. Ohn-Bar, and A. Geiger. Label efficient visual abstractions for autonomous driving. In IROS, 2020.
196
+
197
+ [35] D. Wang, C. Devin, Q. Cai, P. Krähenbühl, and T. Darrell. Monocular plan view networks for autonomous driving. In IROS, 2019.
198
+
199
+ [36] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv.org, 2018.
200
+
201
+ [37] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019.
202
+
203
+ [38] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv.org, 2010.11929, 2020.
204
+
205
+ [39] M. Janner, Q. Li, and S. Levine. Offline reinforcement learning as one big sequence modeling problem. In NeurIPS, 2021.
206
+
207
+ [40] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In NeurIPS, 2021.
208
+
209
+ [41] H. Furuta, Y. Matsuo, and S. S. Gu. Generalized decision transformer for offline hindsight information matching. arXiv.org, 2111.10364, 2021.
210
+
211
+ [42] Q. Zheng, A. Zhang, and A. Grover. Online decision transformer. arXiv.org, 2202.05607, 2022.
212
+
213
+ [43] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv.org, 2205.06175, 2022.
214
+
215
+ [44] J. Gao, C. Sun, H. Zhao, Y. Shen, D. Anguelov, C. Li, and C. Schmid. Vectornet: Encoding hd maps and agent dynamics from vectorized representation. In CVPR, June 2020.
216
+
217
+ [45] J. Ngiam, V. Vasudevan, B. Caine, Z. Zhang, H.-T. L. Chiang, J. Ling, R. Roelofs, A. Bewley, C. Liu, A. Venugopal, D. J. Weiss, B. Sapp, Z. Chen, and J. Shlens. Scene transformer: A unified architecture for predicting future trajectories of multiple agents. In ICLR, 2022.
218
+
219
+ [46] Y. Liu, J. Zhang, L. Fang, Q. Jiang, and B. Zhou. Multimodal motion prediction with stacked transformers. In CVPR, June 2021.
220
+
221
+ [47] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, pages 618-626, 2017.
222
+
223
+ [48] P. Dabkowski and Y. Gal. Real time image saliency for black box classifiers. In NeurIPS, 2017.
224
+
225
+ [49] R. C. Fong and A. Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In ICCV, 2017.
226
+
227
+ [50] V. Petsiuk, A. Das, and K. Saenko. Rise: Randomized input sampling for explanation of black-box models. In BMVC, 2018.
228
+
229
+ [51] R. Fong, M. Patrick, and A. Vedaldi. Understanding deep networks via extremal perturbations and smooth masks. In ICCV, 2019.
230
+
231
+ [52] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
232
+
233
+ [53] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In CVPR, 2016.
234
+
235
+ [54] Z. Juozapaitis, A. Koul, A. Fern, M. Erwig, and F. Doshi-Velez. Explainable reinforcement learning via reward decomposition. In IJCAI/ECAI Workshop on Explainable Artificial Intelligence, 2019.
236
+
237
+ [55] S. Greydanus, A. Koul, J. Dodge, and A. Fern. Visualizing and understanding atari agents. In ICML, 2018.
238
+
239
+ [56] G. Liu, O. Schulte, W. Zhu, and Q. Li. Toward interpretable deep reinforcement learning with linear model u-trees. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2018.
240
+
241
+ [57] R. M. Annasamy and K. Sycara. Towards better interpretability in deep q-networks. In AAAI, 2019.
242
+
243
+ [58] A. Mott, D. Zoran, M. Chrzanowski, D. Wierstra, and D. Jimenez Rezende. Towards interpretable reinforcement learning using attention augmented agents. In NeurIPS, 2019.
244
+
245
+ [59] J. Kim, A. Rohrbach, T. Darrell, J. Canny, and Z. Akata. Textual explanations for self-driving vehicles. In ECCV, 2018.
246
+
247
+ [60] J. Kim and J. Canny. Interpretable learning for self-driving cars by visualizing causal attention. In ICCV, 2017.
248
+
249
+ [61] U. Ramer. An iterative procedure for the polygonal approximation of plane curves. Comput. Graph. Image Process., 1:244-256, 1972.
250
+
251
+ [62] D. H. Douglas and T. K. Peucker. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: The International Journal for Geographic Information and Geovisualization, 10:112-122, 1973.
252
+
253
+ [63] K. Cho, B. van Merrienboer, Ç. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014.
254
+
255
+ [64] J. Zhang and E. Ohn-Bar. Learning by watching. In CVPR, June 2021.
256
+
257
+ [65] Carla autonomous driving leaderboard. https://leaderboard.carla.org/, 2020.
258
+
259
+ [66] I. Turc, M.-W. Chang, K. Lee, and K. Toutanova. Well-read students learn better: On the importance of pre-training compact models. arXiv.org, 1908.08962, 2019.
260
+
261
+ [67] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv.org, 1711.05101, 2017.
262
+
263
+ [68] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. B. Girshick. Masked autoencoders are scalable vision learners. arXiv.org, 2111.06377, 2021.
264
+
265
+ [69] A. Prakash, A. Behl, E. Ohn-Bar, K. Chitta, and A. Geiger. Exploring data aggregation in policy learning for vision-based urban autonomous driving. In CVPR, 2020.
266
+
267
+ [70] E. Ohn-Bar, A. Prakash, A. Behl, K. Chitta, and A. Geiger. Learning situational driving. In CVPR, 2020.
268
+
269
+ [71] D. Wang, C. Devin, Q. Cai, F. Yu, and T. Darrell. Deep object centric policies for autonomous driving. In ICRA, 2019.
270
+
271
+ [72] I. Gog, S. Kalra, P. Schafhalter, M. A. Wright, J. E. Gonzalez, and I. Stoica. Pylot: A modular platform for exploring latency-accuracy tradeoffs in autonomous vehicles. In ICRA, 2021.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/80vpxjt3vq/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,197 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § LEARNING AN EXPLAINABLE PLANNER FOR AUTONOMOUS DRIVING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Planning an optimal route in a complex environment requires efficient reasoning about the surrounding scene. While human drivers prioritize important objects and ignore details not relevant to the decision, learning-based planners typically extract features from dense, high-dimensional grid representations of the scene containing all vehicle and road context information. In this paper, we propose PlanT, a novel approach for planning in the context of self-driving that uses a standard transformer architecture. PlanT is based on imitation learning with a compact object-level input representation. With this representation, we demonstrate that information regarding the ego vehicle's route provides sufficient context regarding the road layout for planning. On the challenging Longest6 benchmark for CARLA, PlanT outperforms all prior methods (matching the driving score of the expert) while being ${5.3} \times$ faster than equivalent pixel-based planning baselines during inference. Furthermore, we propose an evaluation protocol to quantify the ability of planners to identify relevant objects, providing insights regarding their decision making. Our results indicate that PlanT can reliably focus on the most relevant object in the scene, even when this object is geometrically distant.
12
+
13
+ Keywords: Autonomous Driving, Transformers, Explainability
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ The ability to plan is an important aspect of human intelligence, allowing us to solve complex navigation tasks. For example, to change lanes on a busy highway, a driver must wait for sufficient space in the new lane and adjust the speed based on the expected behavior of the other vehicles. Humans quickly learn this and can generalize to new scenarios, a trait we would also like autonomous agents to have. Due to the difficulty of the planning task, the field of autonomous driving is shifting away from traditional rule-based algorithms $\left\lbrack {1,2,3,4,5,6,7,8}\right\rbrack$ towards learning-based solutions $\left\lbrack {9,{10},{11},{12},{13}}\right\rbrack$ . Learning-based planners directly map the environmental state representation (e.g., HD maps and object bounding boxes) to waypoints or vehicle controls. They emerged as a scalable alternative to rule-based planners which require significant manual effort to design.
18
+
19
+ Interestingly, while humans reason about the world in terms of objects [14, 15, 16], most existing learned planners $\left\lbrack {9,{11},{17}}\right\rbrack$ choose a high-dimensional pixel-level input representation by rendering bird's eye view (BEV) images of detailed HD maps (Fig. 1 left). It is widely believed that this kind of accurate scene understanding is key for robust self-driving vehicles, leading to significant interest in recovering pixel-level BEV information from sensor inputs [18, 19, 20, 21, 22, 23]. In this paper, we investigate whether such detailed representations are actually necessary to achieve convincing planning performance. We propose PlanT, a learning-based planner that leverages an object-level representation (Fig. 1 right) as an input to a transformer encoder [24]. We represent a scene as a set of features corresponding to (1) nearby vehicles and (2) the route the planner must follow. Each feature is low-dimensional, e.g., position, extent, and orientation (details in Section 3). We show that despite the low feature dimensionality, our model achieves state-of-the-art results using this representation. We then propose a novel evaluation scheme and metric to analyze explainability which is generally applicable to any learning-based planner. Specifically, we test the ability of a planner to identify the objects that are the most relevant to account for to plan a collision-free route.
20
+
21
+ [Figure 1 graphic]
22
+
23
+ Figure 1: Scene Representations for Planning. As an alternative to the dominant paradigm of using detailed pixel-level representations for planning (left), we show the effectiveness of planners based on compact object-level representations (right).
24
+
25
+ We perform a detailed empirical analysis of learning-based planning on the Longest6 benchmark [25] of the CARLA simulator [26]. We first identify the key missing elements in the design of existing learned planners such as their incomplete field of view and sub-optimal dataset and model sizes. Addressing these issues leads to expert-level driving scores for planning with convolutional neural networks (CNNs), which is the dominant paradigm of this field, significantly advancing the state of the art. We then show the added advantages of our proposed transformer architecture, including further improvements in performance and significantly faster inference times. Finally, we show that the attention weights of the transformer, which are readily accessible, can be used to represent object relevance. Our qualitative and quantitative results on explainability confirm that PlanT attends to the objects that match our intuition for the relevance of objects for safe driving.
26
+
27
+ Contributions. (1) We demonstrate that a simple object-level representation is sufficient to encode all the information relevant for planning in urban driving environments. (2) With this representation, we significantly improve upon the previous state of the art on CARLA when we use a standard CNN architecture, and obtain further gains in performance and runtime via PlanT, our novel transformer-based approach. (3) We propose a protocol and metric for evaluating a planner's prioritization of obstacles in a scene, and show that PlanT is more explainable than CNN-based methods, i.e., the attention weights of the transformer identify the most relevant objects more reliably.
28
+
29
+ § 2 RELATED WORK
30
+
31
+ Intermediate Representations for Driving. Early work on decoupling end-to-end driving into two stages predicts a set of low-dimensional affordances from sensor inputs with CNNs, which are then input to a rule-based planner [27]. These affordances are scene-descriptive attributes (e.g., emergency brake, red light, center-line distance, angle) that are compact, yet comprehensive enough to enable simple driving tasks, such as urban driving on the initial version of CARLA [26]. Unfortunately, methods based on affordances perform poorly on subsequent benchmarks in CARLA which involve higher task complexity [28]. Most state-of-the-art driving models instead rely heavily on annotated 2D data either as intermediate representations or auxiliary training objectives [25, 29]. Several studies show that using semantic segmentation as an intermediate representation helps for navigational tasks [30, 31, 32, 33]. More recently, there has been a rapid growth of interest in using BEV semantic segmentation maps as the input representation to planners [9, 11, 29, 17]. To reduce the immense labeling cost of such segmentation methods, Behl et al. [34] propose visual abstractions, which are label-efficient alternatives to dense 2D semantic segmentation maps. They show that reduced class counts and the use of bounding boxes instead of pixel-accurate masks for certain classes are sufficient. Wang et al. [35] explicitly extract objects and render them into a BEV image for planning. Instead of rendering objects onto a 2D image, we keep our representation low-dimensional and compact by directly considering the set of objects as inputs to our model. In addition, instead of using a CNN-based architecture to process the rendered images, we show that using a transformer leads to improved performance, efficiency, and explainability.
32
+
33
+ Transformers for Control. Transformers obtain impressive results in several research areas [24, 36, 37, 38]. To make use of this architecture, the input needs to be represented as a sequence or set. Using transformers by describing the environment state as a sequence has been successful for Atari games [39, 40, 41, 42, 43]. In the field of motion forecasting for driving, Gao et al. [44] show the advantages of representing an HD map as a set of vectors instead of a BEV image. Several follow-up works use this representation in combination with transformer-based architectures [45, 46]. However, they only tackle motion forecasting and do not address the downstream planning task considered by our work. Moreover, while all of these methods rely on an HD map with detailed information regarding lane markings and road boundaries as inputs, we show that the ego vehicle's route is a more compact yet sufficient representation.
36
+
37
+ Explainability. Explaining the decisions of neural networks is a rapidly evolving research field [47, 48, 49, 50, 51, 52, 53]. Prior work on explaining policies focuses on reward decomposition [54], identifying the salient areas in an image using perturbations [55], approximating a deep neural network with a more interpretable model [56], or using interpretable attention mechanisms for predicting the correct action [57, 58]. In the context of self-driving cars, existing work uses text [59] or heatmaps [60] to explain decisions. In contrast, we can directly obtain post hoc explanations for decisions of our learning-based PlanT architecture by considering its learned attention. Furthermore, we introduce a simple approach to measure the quality of explanations for a planner by measuring the change in driving performance when the planner observes only the object that the explanation deems most relevant in a scene.
38
+
39
+ § 3 PLANNING TRANSFORMERS
40
+
41
+ In this section, we provide details about our task setup, novel scene representation, simple but effective architecture, and training strategy resulting in state-of-the-art performance.
42
+
43
+ Task. We consider the task of point-to-point navigation in an urban setting where the goal is to drive from a start to a goal location while reacting to other dynamic agents and following traffic rules. We use Imitation Learning (IL) to train the driving agent. The goal of IL is to learn a policy $\pi$ that imitates the behavior of an expert ${\pi }^{ * }$ . In our setup, the policy is a mapping $\pi : \mathcal{X} \rightarrow \mathcal{W}$ from our novel object-level input representation $\mathcal{X}$ to the future trajectory $\mathcal{W}$ of an expert driver. We describe the expert and data collection procedure in Section 4. For following traffic rules, we assume access to the traffic light state $l \in \{\text{green}, \text{red}\}$ of the next traffic light relevant to the ego vehicle.
44
+
45
+ Object-level Representation. In order to drive safely, the input representation needs to encode all the task-specific information about the scene. To this end, we represent the scene as a set of objects, with vehicles and segments of the route each being assigned an oriented bounding box in BEV space (Fig. 1 right). Let $\mathcal{X} \in {\mathbb{R}}^{\left( {V + R}\right) \times A}$ be a set of $V$ vehicles and $R$ route segments with $A$ attributes each. These attributes include the position of the bounding box $\left( {{x}_{i},{y}_{i}}\right)$ relative to the ego vehicle, the extent $\left( {{w}_{i},{h}_{i}}\right)$ , the orientation ${\varphi }_{i} \in \left\lbrack {0,{2\pi }}\right\rbrack$ and a type-specific attribute ${v}_{i}$ , giving $A = 6$ dimensions. Each object can be described as a vector ${\mathbf{o}}_{i} = \left\lbrack {{x}_{i},{y}_{i},{w}_{i},{h}_{i},{\varphi }_{i},{v}_{i}}\right\rbrack$ . For a vehicle, we use ${v}_{i}$ to represent the speed. For segments of the route, ${v}_{i}$ denotes the ordering, starting from 0 for the segment closest to the ego vehicle. To obtain these route segments we subsample the dense set of points along the route provided by CARLA using the Ramer-Douglas-Peucker algorithm [61, 62]. One segment spans the area between two of these subsampled route points, with the length equal to the distance between the points and the width equal to the lane width. In addition, we restrict the maximum length of any single segment to ${L}_{\max }$ and always input a fixed number of segments ${N}_{s}$ to our policy. Since the primary focus of our work is not perception but planning, we extract the object attributes directly from the simulator. We consider objects up to a distance ${D}_{\max }$ from the ego vehicle without any further filtering. Note that our representation contains similar information as that in the widely used BEV map representation (Fig. 1 left), but only encodes the route relevant to the ego vehicle instead of the whole map. In practice, we find that this is sufficient for current CARLA benchmarks (Table 2a). More details regarding the object extraction and visualizations of the route representation are provided in the supplementary material.
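+
+ To make the representation concrete, the following sketch assembles the object-level input from simulator state. It is a minimal illustration, not the exact data pipeline: the helper names (`vehicle_token`, `route_tokens`, `build_scene_input`) are hypothetical, and the route points are assumed to be already expressed in the ego frame and subsampled with the Ramer-Douglas-Peucker algorithm.
+
+ ```python
+ import numpy as np
+
+ D_MAX = 30.0   # only consider objects within this distance of the ego vehicle (m)
+ L_MAX = 30.0   # maximum length of a single route segment (m)
+ N_SEG = 2      # fixed number of route segments fed to the policy
+
+ def vehicle_token(x, y, w, h, yaw, speed):
+     # o_i = [x_i, y_i, w_i, h_i, phi_i, v_i]; for vehicles, v_i is the speed
+     return np.array([x, y, w, h, yaw, speed], dtype=np.float32)
+
+ def route_tokens(route_pts, lane_width):
+     """Turn consecutive subsampled route points into oriented box tokens.
+     The ordering index plays the role of the type-specific attribute v_i."""
+     tokens = []
+     for i, (p0, p1) in enumerate(zip(route_pts[:-1], route_pts[1:])):
+         center = (p0 + p1) / 2.0
+         delta = p1 - p0
+         length = min(np.linalg.norm(delta), L_MAX)
+         yaw = np.arctan2(delta[1], delta[0])
+         tokens.append(np.array([center[0], center[1], lane_width, length,
+                                 yaw, float(i)], dtype=np.float32))
+         if len(tokens) == N_SEG:
+             break
+     return tokens
+
+ def build_scene_input(vehicles, route_pts, lane_width=3.5):
+     # vehicles: iterable of (x, y, w, h, yaw, speed) tuples in the ego frame
+     veh = [vehicle_token(*v) for v in vehicles if np.hypot(v[0], v[1]) <= D_MAX]
+     return np.stack(veh + route_tokens(route_pts, lane_width))   # (V + R, 6)
+ ```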
46
+
47
+ Output. We predict the future trajectory $\mathcal{W}$ of the ego vehicle, centered at the coordinate frame of the current time-step $t$ . The trajectory is represented by a sequence of 2D waypoints in BEV space, ${\left\{ {\mathbf{w}}_{t} = \left( {x}_{t},{y}_{t}\right) \right\} }_{t = 1}^{T}$ for $T = 4$ future time-steps. These waypoints are provided to lateral and longitudinal PID controllers to output actions. Details are provided in the supplementary material.
48
+
49
+ Planning Transformer (PlanT). Our model is illustrated in Fig. 2. The main building block for the IL policy $\pi$ is a standard transformer encoder, based on the BERT architecture [36]. While our architecture does not deviate from the original transformer implementation [24], adapting it to self-driving requires the tokenization of our proposed input representation. For this, we linearly project the attributes of each object to one token embedding. We then add the token embeddings to learnable object type embeddings indicating to which type the token belongs: [CLS], vehicle or route. The [CLS] token (based on [36, 38]) is a learnable token prepended to the input. This token's processing involves an attention-based aggregation of the features from all other tokens, and is used for generating the waypoint predictions. At the final layer of the transformer, we extract the feature vector from the [CLS] token and reduce its dimensionality via a linear layer. As in previous work [17], we concatenate the binary traffic light flag and the feature vector before passing it to an auto-regressive waypoint decoder that makes use of GRUs [63]. We input the current ego position and goal location to the GRU, which predicts $T = 4$ differential waypoints ${\left\{ \delta {\mathbf{w}}_{t}\right\} }_{t = 1}^{T}$ , where $t$ is the future time-step. With these differential waypoints we obtain the future waypoints as ${\left\{ {\mathbf{w}}_{t} = {\mathbf{w}}_{t - 1} + \delta {\mathbf{w}}_{t}\right\} }_{t = 1}^{T}$ . For a detailed description of the waypoint decoder, see [17,25].
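+
+ The token flow described above can be sketched in PyTorch as follows. This is a simplified stand-in rather than the exact configuration: layer sizes, hidden dimensions, and module names are illustrative, and the type indices (0 for [CLS], 1 for vehicles, 2 for route segments) are an assumption of this sketch.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class PlanTSketch(nn.Module):
+     def __init__(self, attr_dim=6, d_model=256, n_layers=4, n_heads=8, T=4):
+         super().__init__()
+         self.T = T
+         self.embed = nn.Linear(attr_dim, d_model)      # linear projection of attributes
+         self.type_emb = nn.Embedding(3, d_model)       # [CLS], vehicle, route
+         self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
+         layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
+         self.encoder = nn.TransformerEncoder(layer, n_layers)
+         self.reduce = nn.Linear(d_model, 64)
+         self.gru = nn.GRUCell(input_size=4, hidden_size=65)  # 64 feats + light flag
+         self.delta_wp = nn.Linear(65, 2)
+
+     def forward(self, objects, types, light, goal):
+         # objects: (B, N, 6) attributes; types: (B, N) in {1: vehicle, 2: route}
+         # light: (B,) binary traffic light flag; goal: (B, 2) goal location
+         tok = self.embed(objects) + self.type_emb(types)
+         cls_type = torch.zeros(objects.size(0), 1, dtype=torch.long,
+                                device=objects.device)
+         cls = self.cls.expand(objects.size(0), -1, -1) + self.type_emb(cls_type)
+         feat = self.encoder(torch.cat([cls, tok], dim=1))[:, 0]   # [CLS] output
+         h = torch.cat([self.reduce(feat), light.unsqueeze(-1)], dim=-1)
+         wp = torch.zeros(objects.size(0), 2, device=objects.device)
+         wps = []
+         for _ in range(self.T):                                   # auto-regressive GRU
+             h = self.gru(torch.cat([wp, goal], dim=-1), h)
+             wp = wp + self.delta_wp(h)                            # w_t = w_{t-1} + dw_t
+             wps.append(wp)
+         return torch.stack(wps, dim=1)                            # (B, T, 2) waypoints
+ ```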
50
+
51
+ [Figure 2 graphic]
52
+
53
+ Figure 2: Planning Transformer (PlanT). We represent a scene (bottom left) using a set of objects containing the vehicles and route to follow (green arrows). We embed these via a linear projection (bottom right) and process them with a transformer encoder. PlanT outputs future waypoints with a GRU decoder. We use a self-supervised auxiliary task of predicting the future of other vehicles. Further, extracting and visualizing the attention weights yields an explainable decision (top left).
54
+
55
+ Training. Following recent driving models [9,29,25], we leverage the ${L}_{1}$ loss to ground truth future waypoints ${\mathbf{w}}^{gt}$ as our main training objective. Besides this, we propose the auxiliary task of predicting the future attributes of other vehicles. This is aligned with the overall driving goal in two ways. (1) The ability to reason about the future of other vehicles is important in an urban environment as it heavily influences the ego vehicle's own future. (2) Our main task is to predict the ego vehicle's future trajectory, which means the output feature of the transformer needs to encode all the information necessary to predict the future. Supervising the outputs of all vehicles on a similar task (i.e., predicting the location, pose and velocity of a future time-step) exploits synergies between the task of the ego vehicle and the other vehicles [64, 29]. We input the vehicle tokens for a given time-step ${T}_{in}$ and predict class probabilities $\widehat{\mathbf{p}}$ for every attribute of each vehicle of a future time-step ${T}_{in} + {\delta t}$ using a linear layer per attribute type on top of the corresponding output embedding. We choose to discretize the output attributes to allow uncertainty in the predictions since the future is non-deterministic. This is also better aligned with how humans drive without predicting exact locations and velocities, where a rough estimate is sufficient to make a safe decision. We calculate the cross-entropy loss ${\mathcal{L}}_{CE}$ using the one-hot encoded representation of the attribute ${\mathbf{p}}^{gt}$ as the ground truth class label. We train the model in a multi-task setting using a weighted combination of these loss functions with a weighting factor $\lambda$ :
56
+
57
+ $$
58
+ \mathcal{L} = \underbrace{\frac{1}{T}\sum_{t = 1}^{T}\left\| {\mathbf{w}}_{t} - {\mathbf{w}}_{t}^{gt}\right\|_{1}}_{{\mathcal{L}}^{\text{waypoints}}} + \underbrace{\frac{\lambda}{V}\sum_{a = 1}^{A}\sum_{i = 1}^{V}{\mathcal{L}}_{CE}\left( {\widehat{\mathbf{p}}}_{a,i},{\mathbf{p}}_{a,i}^{gt}\right)}_{{\mathcal{L}}^{\text{vehicles}}}. \tag{1}
59
+ $$
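+
+ A minimal sketch of how Eq. (1) could be evaluated in PyTorch, assuming the model returns predicted waypoints and a list of per-attribute class logits for the vehicle tokens; the tensor names and shapes are assumptions of this sketch.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def plant_loss(pred_wps, gt_wps, attr_logits, gt_attr_bins, lam=0.2):
+     """pred_wps, gt_wps: (B, T, 2) waypoints.
+     attr_logits: list of A tensors of shape (B, V, K_a), one per attribute.
+     gt_attr_bins: (B, V, A) integer bins of the future vehicle attributes."""
+     # L1 waypoint loss, averaged over the T future time steps (and the batch)
+     l_wp = (pred_wps - gt_wps).abs().sum(-1).mean()
+     # cross-entropy over the discretized future attributes of the V vehicles
+     l_veh = 0.0
+     for a, logits in enumerate(attr_logits):
+         l_veh = l_veh + F.cross_entropy(logits.flatten(0, 1),
+                                         gt_attr_bins[..., a].flatten())
+     return l_wp + lam * l_veh
+ ```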
60
+
61
+ § 4 EXPERIMENTS
62
+
63
+ In this section, we describe our experimental setup, evaluate the driving performance of our approach, analyze the explainability of its driving decisions, and finally discuss limitations.
64
+
65
+ Dataset and Benchmark. We use the expert, dataset, and evaluation benchmark Longest6 proposed by [25]. The expert policy is a rule-based algorithm with access to ground truth locations of the vehicles as well as privileged information that is not available to PlanT such as their actions and dynamics. Using this information, the expert determines the future position of all vehicles and estimates intersections between its own future position and those of the other vehicles to prevent most collisions. The dataset collected with this expert contains 228k frames. We use this as our reference point denoted by $1 \times$ . For our analysis, we also generate additional data following [25], but use different seeds to obtain different initializations of the traffic. The data quantities we use are always relative to the original dataset (i.e., $2 \times$ contains double the data, $3 \times$ contains triple). We refer the reader to [25] for a detailed description of the expert algorithm and dataset collection.
66
+
67
+ Metrics. We report the established metrics of the CARLA leaderboard [65]: Route Completion (RC), Infraction Score (IS), and Driving Score (DS), which is the weighted average of the RC and IS. In addition, we show Collisions with Vehicles per kilometer (CV) and Inference Time (IT) for one forward pass of the model, measured in milliseconds on a single RTX 3080 GPU. A detailed description of the metrics is included in the supplementary material.
68
+
69
+ Baselines. To highlight the advantages of learning-based planning, we include a rule-based planning baseline that uses the same inputs as PlanT. It follows the same high-level algorithm as the expert but estimates the future of other vehicles using a constant speed assumption since it does not have access to their actions. AIM-BEV [17] is a recent privileged agent trained using IL. It uses a BEV semantic map input with channels for the road, lane markings, vehicles, and pedestrians, and a GRU identical to PlanT to predict a trajectory for the ego vehicle which is executed using lateral and longitudinal PID controllers. Roach [11] is a Reinforcement Learning (RL) based agent with a similar input representation as AIM-BEV that directly outputs driving actions. Roach and AIM-BEV are the closest existing methods to PlanT. However, they use a different input field of view in their representation leading to sub-optimal performance. We additionally build PlanCNN, a more competitive CNN-based approach for planning with the same training data and input information as PlanT, which is adapted from AIM-BEV to input a rasterized version of our object-level representation. We render the oriented vehicle bounding boxes in one channel, represent the speed of each pixel in a second channel, and render the oriented bounding boxes of the route in the third channel. We provide detailed descriptions of the baselines in the supplementary material.
70
+
71
+ Implementation. Our analysis includes three BERT encoder variants taken from [66]: MINI, SMALL, and MEDIUM with 11.2M, 28.8M, and 41.4M parameters respectively. For PlanCNN, we experiment with two backbones: ResNet-18 and ResNet-34. We choose these architectures to maintain an IT which enables real-time execution. We train the models from scratch on 4 RTX 2080Ti GPUs with a total batch size of 128. Optimization is done with AdamW [67] for 47 epochs with an initial learning rate of ${10}^{-4}$ which we decay by 0.1 after 45 epochs. Training takes approximately 2.8, 3.4, and 4 hours for the three BERT variants on the $1 \times$ dataset. We set the weight decay to 0.1 and clip the gradient norm at 1.0. For the auxiliary objective we use quantization precisions of 0.5 m for the position and extent, 3.75 km/h for the speed, and 11.25° for the orientation of the vehicles. We use ${T}_{in} = 0$ and ${\delta t} = 1$ for auxiliary supervision. The loss weight $\lambda$ is set to 0.2. By default, we use ${D}_{\max } = 30$ m, ${N}_{s} = 2$ , and ${L}_{\max } = 30$ m. Additional details regarding the model configurations as well as detailed ablation studies on the multi-task training and input representation hyperparameters are provided in the supplementary material.
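+
+ In PyTorch, the optimization settings above could be set up roughly as follows (a sketch; the stand-in module is only for illustration):
+
+ ```python
+ import torch
+
+ model = torch.nn.Linear(6, 2)   # stand-in for the actual planner module
+ optim = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.1)
+ sched = torch.optim.lr_scheduler.MultiStepLR(optim, milestones=[45], gamma=0.1)
+ # per step:  loss.backward(); torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0); optim.step()
+ # per epoch: sched.step()   (47 epochs total, learning rate decayed by 0.1 after epoch 45)
+ ```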
72
+
73
+ § 4.1 OBTAINING EXPERT-LEVEL DRIVING PERFORMANCE
74
+
75
+ In the following, we discuss the key findings of our study which enable expert-level driving with learned planners. Unless otherwise specified, the experiments consider the largest version of our dataset $\left( {3 \times }\right)$ and models (MEDIUM for PlanT, ResNet-34 for PlanCNN).
76
+
77
+ We first analyze LAV [29] and TransFuser [25], which are recent state-of-the-art sensor-based methods on CARLA. Since PlanT accesses additional input information, it is not directly comparable to these baselines. However, we include them to put our results into perspective and to demonstrate the
78
+
79
+ | Method | Input | DS ↑ | RC ↑ | IS ↑ | CV ↓ | IT (ms) ↓ |
+ |---|---|---|---|---|---|---|
+ | Rule-based | Obj. + Route | 29.09 ± 2.12 | 38.00 ± 1.64 | 0.84 ± 0.00 | 0.64 ± 0.07 | - |
+ | LAV* [29] | Camera + LiDAR | 32.74 ± 1.45 | 70.36 ± 3.14 | 0.51 ± 0.02 | 0.84 ± 0.11 | - |
+ | TransFuser [25] | Camera + LiDAR | 47.30 ± 5.72 | 93.38 ± 1.20 | 0.50 ± 0.60 | 2.44 ± 0.64 | - |
+ | AIM-BEV* [17] | Rast. Obj. + HD Map | 45.06 ± 1.68 | 78.31 ± 1.12 | 0.55 ± 0.01 | 1.67 ± 0.16 | 18.14 ± 1.08 |
+ | Roach* [11] | Rast. Obj. + HD Map | 55.27 ± 1.43 | 88.16 ± 1.52 | 0.62 ± 0.02 | 0.76 ± 0.07 | 3.24 ± 0.15 |
+ | PlanCNN | Rast. Obj. + Rast. Route | 77.47 ± 1.34 | 94.53 ± 2.59 | 0.81 ± 0.03 | 0.43 ± 0.05 | 28.94 ± 1.24 |
+ | PlanT | Obj. + Route | **81.36** ± 6.54 | 93.55 ± 2.62 | 0.87 ± 0.05 | 0.31 ± 0.12 | 10.79 ± 0.47 |
+ | Expert [25] | Obj. + Route + Actions | 76.91 ± 2.23 | 88.67 ± 0.56 | 0.86 ± 0.03 | 0.28 ± 0.06 | - |
108
+
109
+ Table 1: Longest6 Results. We show the mean $\pm$ std for 3 evaluations. PlanT reaches expert-level performance and requires significantly less inference time than the baselines. *We evaluate the author-provided pre-trained models for LAV, AIM-BEV, and Roach.
110
+
111
+ potential gain for future sensor-based models. The results are shown in Table 1. Interestingly, both sensor-based methods outperform the rule-based planning algorithm despite their lack of ground-truth object and route information, showing the limitations of rule-based systems in complex environments. In contrast, PlanT is over 30 points better than TransFuser in terms of DS (81.36 vs. 47.30), combining the high IS of the rule-based approach and high RC of TransFuser. Importantly, this improvement is achieved using the same expert policy and IL-based training objective as TransFuser. Additionally, TransFuser incorporates manually designed heuristics in its controller to creep forward if the policy gets stuck without cause [25], which are unnecessary for PlanT.
112
+
113
+ Input Representation. In Table 1, we observe that both PlanCNN and PlanT significantly outperform AIM-BEV [17] and Roach [11]. We systematically break down the factors leading to this in Table 2a by studying the following: (1) the representation used for the road layout, (2) the horizontal field of view, (3) whether objects behind the ego vehicle are part of the representation, and (4) whether the input representation incorporates speed.
114
+
115
+ Roach uses the same maximum distance to the sides of the vehicle as AIM-BEV but additionally includes 8 m to the back and multiple input frames to reason about speed. We see in Table 2a that training PlanCNN in a configuration close to Roach (with the key differences being the removal of details from the map and a 0 m back view) results in a higher DS (59.97 vs. 55.27), showing that it is possible to replace the lane marking and road boundary information in the HD map with our compact representation. Increasing the side view from 19.2 m to 30 m improves PlanCNN from 59.97 to 70.72 (Table 2a). This indicates that reasoning about distant vehicles is important. Furthermore, including vehicles to the rear further boosts PlanCNN's DS to 77.47. This also holds true for PlanT, where adding the rear improves the DS from 72.86 to 81.36. These results show that a full 360° field of view is helpful to handle certain situations encountered during our evaluation (e.g., changing lanes). Finally, removing the vehicle speed input significantly reduces the DS for both PlanCNN and PlanT (Table 2a), showing the importance of cues regarding motion.
116
+
117
+ PlanT vs. PlanCNN. In Table 2b, we show the impact of scaling the dataset size and the model size for PlanT and PlanCNN. The circle size indicates the inference time (IT) needed for that model. First, we observe that PlanT demonstrates better data efficiency than PlanCNN, e.g., using the $1 \times$ data setting is sufficient to reach the same performance as PlanCNN with $2 \times$ . Interestingly, scaling the data from $1 \times$ to $3 \times$ leads to expert-level performance with all the models other than PlanCNN with ResNet-18, showing the effectiveness of scaling. In fact, PlanT${}_{\text{MEDIUM}}$ (81.36) outperforms the expert (76.91) in some evaluation runs. We visualize one consistent failure mode of the expert that leads to this discrepancy in Fig. 3a. We observe that the expert sometimes stops once it has already entered an intersection if it anticipates a collision, which then leads to collisions or blocked traffic. On the other hand, PlanT learns to wait further outside an intersection before entering, which is a smoother function than the discrete rule-based expert, and subsequently avoids these infractions. Importantly, in our final setting, PlanT${}_{\text{MEDIUM}}$ is around $3 \times$ as fast as PlanCNN while being 4 points better in terms of the DS, and PlanT${}_{\text{MINI}}$ is $5.3 \times$ as fast (IT = 5.46 ms) while reaching the same DS as PlanCNN. This shows that PlanT is suitable for systems where fast inference time is a requirement. To demonstrate the reproducibility of our results with different initializations, we report driving scores of PlanCNN and PlanT with multiple training seeds in the supplementary material.
118
+
119
+ Loss. A detailed study of the training strategy for PlanT can be found in the supplementary material, where we show that the auxiliary loss proposed in Eq. (1) is crucial to its performance. However, since this is a self-supervised objective, it can be incorporated without additional annotation costs.
120
+
121
+ | Method | Map | Side (m) | Back (m) | Speed | DS ↑ |
+ |---|---|---|---|---|---|
+ | AIM-BEV [17] | ✓ | 19.2 | 0 | - | 45.06 ± 1.68 |
+ | Roach [11] | ✓ | 19.2 | 8 | ✓ | 55.27 ± 1.43 |
+ | PlanCNN | - | 19.2 | 0 | ✓ | 59.97 ± 4.47 |
+ | PlanCNN | - | 30 | 0 | ✓ | 70.72 ± 2.99 |
+ | PlanCNN | - | 30 | 30 | ✓ | 77.47 ± 1.34 |
+ | PlanCNN | - | 30 | 30 | - | 69.13 ± 1.43 |
+ | PlanT | - | 30 | 0 | ✓ | 72.86 ± 5.56 |
+ | PlanT | - | 30 | 30 | ✓ | 81.36 ± 6.54 |
+ | PlanT | - | 30 | 30 | - | 72.34 ± 3.30 |
153
+
154
+ (a) Input Representation. DS on Longest6 (3 evaluation runs) with different input properties.
155
+
156
+ [Table 2b graphic: scaling of dataset and model size]
157
+
158
+ Table 2: We investigate the choices of the input representation (Table 2a) and architecture (Table 2b) for learning-based planners. Including vehicles to the back of the ego vehicle, encoding vehicle speeds, and scaling to large models/datasets is crucial for performance of both PlanCNN and PlanT.
159
+
160
+ This is in line with recent findings on training transformers that show the effectiveness of supervising multiple output tokens instead of just a single [CLS] token [68].
161
+
162
+ § 4.2 EXPLAINABILITY: IDENTIFICATION OF MOST RELEVANT OBJECTS
163
+
164
+ Next, we investigate the explainability of PlanT and PlanCNN by analyzing the objects in the scene that are relevant and crucial for the agent's decision. In particular, we measure the relevance of an object in terms of the learned attention for PlanT and by considering the impact that the removal of each object has on the output predictions for PlanCNN. To quantify the ability to reason about the most relevant objects, we propose a novel evaluation scheme together with the Relative Filtered Driving Score (RFDS). For the rule-based expert algorithm, collision avoidance depends on a single vehicle which it identifies as the reason for braking. To measure the RFDS of a learned planner, we run one forward pass of the planner (without executing the actions) to obtain a scalar relevance score for each vehicle in the scene. We then execute the expert algorithm while restricting its observations to the (single) vehicle with the highest relevance score. The RFDS is defined as the relative DS of this restricted version of the expert compared to the default version which checks for collisions against all vehicles. We describe the extraction of the relevance score for PlanT and PlanCNN in the following. Our protocol leads to a fair comparison of different agents as the RFDS does not depend on the ability to drive itself but only on the obtained ranking of object relevance.
165
+
166
+ Baselines. As a naïve baseline, we consider the inverse distance to the ego vehicle as a vehicle's relevance score, such that the expert only sees the closest vehicle. For PlanT, we extract the relevance score by adding the attention weights of all layers and heads for the [CLS] token. This only requires a single forward pass of PlanT. Since PlanCNN does not use attention, we choose a masking method to find the most salient region in the image, using the same principle as [51, 50, 49]. We remove one object at a time from the input image and compute the ${L}_{1}$ distance to the predicted waypoints for the full image. The objects are then ranked based on how much their absence affects the total distance. In the supplementary material, we provide additional results using GradCAM [47] adapted for PlanCNN, which we found to not work as well as the proposed masking approach.
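+
+ The two relevance scores could be extracted roughly as sketched below. The attention-map layout (with the [CLS] token at index 0, followed by the vehicle tokens) and the helper `rasterize_without` are assumptions of this sketch, not the exact implementation.
+
+ ```python
+ import torch
+
+ def plant_relevance(attn_maps, n_vehicles):
+     """attn_maps: list over layers of (B, heads, N, N) attention weights.
+     Returns (B, n_vehicles) scores: the attention that the [CLS] token (index 0)
+     pays to each vehicle token, summed over all layers and heads."""
+     cls_attn = torch.stack([a[:, :, 0, 1:1 + n_vehicles] for a in attn_maps])
+     return cls_attn.sum(dim=(0, 2))
+
+ def masking_relevance(model, scene_image, objects):
+     """Relevance for a CNN planner: L1 change of the predicted waypoints when
+     one object at a time is removed from the rasterized input."""
+     base = model(scene_image)                    # (T, 2) waypoints for the full image
+     scores = []
+     for obj in objects:
+         wps = model(rasterize_without(scene_image, obj))   # hypothetical helper
+         scores.append((wps - base).abs().sum().item())
+     return scores
+ ```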
167
+
168
+ Results. We provide results for the reasoning of the planners about relevant objects in Table 3. Both planners significantly outperform the distance-based baseline, with PlanT obtaining a mean RFDS of 96.82 compared to 82.83 for PlanCNN. We show qualitative examples in Fig. 3b, where we highlight the vehicle with the highest relevance score using a red bounding rectangle. Both planners correctly identify the most important object in simple scenarios when only a few vehicles are in the field of view or the relevant vehicle is the only one close to the ego vehicle's path. However, PlanT is also able to correctly identify the most important object in complex scenes. When merging into a lane (examples 1 & 2 from the left) it correctly looks at the moving vehicles coming from the rear to avoid collisions. Example 3 shows advanced reasoning about dynamics. The two vehicles closer to the ego vehicle are moving away from the intended route at a high speed and are therefore not as relevant for planning. PlanT already pays attention to the more distant vehicle behind them as this is the one that it would collide with if it does not brake. One of the failures of PlanT we observe is that it sometimes allocates the highest attention to a very close vehicle behind itself (example 4) and misses the relevant object. PlanCNN has more prominent errors when there are a large number of vehicles in the scene, but nothing directly ahead, such as problems attending to the relevant object when merging lanes (examples 2 & 3). To better assess the driving performance and relevance scores we provide additional results in the supplementary video.
169
+
170
+ | Method | RFDS ↑ |
+ |---|---|
+ | Inverse Distance | 29.13 ± 0.54 |
+ | PlanCNN + Masking | 82.83 ± 6.79 |
+ | PlanT + Attention | 96.82 ± 2.12 |
184
+
185
+ Table 3: RFDS. Relative score of the expert when only observing the most relevant vehicle according to the respective planner.
186
+
187
+ [Figure 3 graphic]
188
+
189
+ Figure 3: We contrast a failure case of the expert to PlanT (Fig. 3a) and show the quality of the relevance scores (Fig. 3b). The ego vehicle is marked with a yellow triangle, vehicles that either lead to collisions or are intuitively the most relevant in the scene are marked with a blue box.
190
+
191
+ § 4.3 LIMITATIONS
192
+
193
+ Our study has several limitations. Firstly, since planning is a challenging task in itself and the capabilities of learning-based planners are not fully understood, we restrict the scope of our experiments to exclude the task of perception from sensor inputs. Integrating PlanT with a state-of-the-art perception system for object detection and tracking is a promising next step. Second, the expert driver used in our IL-based training strategy does not achieve a perfect score in our evaluation setting (Table 1) and has certain consistent failure modes (Fig. 3a, more examples can be found in the supplementary material). While a human driver may be able to achieve higher driving scores, human data collection would be time-consuming (the $3 \times$ dataset used in our experiments contains around 95 hours of driving). Instead, we use the best publicly available algorithmic expert, which is a standard practice for IL on CARLA [69, 70, 34, 18]. Finally, all our experiments are conducted in simulation. Real-world scenarios are more diverse and challenging. However, CARLA is a high-fidelity simulator, and previous findings demonstrate that systems developed in simulators like CARLA can be transferred to the real world [30, 71, 72].
194
+
195
+ § 5 CONCLUSION
196
+
197
+ In this work, we take a step towards efficient, high-performance, explainable planning for autonomous driving with a novel object-level representation and transformer-based architecture called PlanT. Based on our experiments, we find that it is possible to replace dense, high-dimensional feature grids including HD map information with a simple representation of the ego vehicle route without a drop in performance. We show that incorporating a 360° field of view, information about vehicle speeds, and scaling up both the architecture and dataset size of a learned planner are essential to achieve state-of-the-art results on the CARLA simulator. Additionally, we show that PlanT can reliably identify the most relevant object in the scene via a new metric and evaluation protocol that focus on explainability.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/8ktEdb5NHEh/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,253 @@
1
+ # Learning Sampling Distributions for Model Predictive Control
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Sampling-based methods have become a cornerstone of contemporary approaches to Model Predictive Control (MPC), as they make no restrictions on the differentiability of the dynamics or cost function and are straightforward to parallelize. However, their efficacy is highly dependent on the quality of the sampling distribution itself, which is often assumed to be simple, like a Gaussian. This restriction can result in samples which are far from optimal, leading to poor performance. Recent work has explored improving the performance of MPC by sampling in a learned latent space of controls. However, these methods ultimately perform all MPC parameter updates and warm-starting between time steps in the control space. This requires us to rely on a number of heuristics for generating samples and updating the distribution and may lead to sub-optimal performance. Instead, we propose to carry out all operations in the latent space, allowing us to take full advantage of the learned distribution. Specifically, we frame the learning problem as bi-level optimization and show how to train the controller with backpropagation-through-time. By using a normalizing flow parameterization of the distribution, we can leverage its tractable density to avoid requiring differentiability of the dynamics and cost function. Finally, we evaluate the proposed approach on simulated robotics tasks and demonstrate its ability to surpass the performance of prior methods and scale better with a reduced number of samples.
12
+
13
+ Keywords: Model Predictive Control, Normalizing Flows
14
+
15
+ ## 1 Introduction
16
+
17
+ Sequential decision making under uncertainty is a fundamental problem in machine learning and robotics. Recently, model predictive control (MPC) has emerged as a powerful paradigm to tackle such problems on real-world robotic systems. In particular, MPC has been successfully applied to helicopter aerobatics [1], robot manipulation [2, 3, 4], humanoid robot locomotion [5], robot-assisted dressing [6], and aggressive off-road driving [7,8,9]. Sampling-based approaches to MPC are becoming particularly popular due to their simplicity and ability to handle non-differentiable dynamics and cost functions. These methods work by sampling controls from a policy distribution and rolling out an approximate system model using the sampled control sequences. They then use the resulting trajectories to compute an approximate gradient of the cost function and update the policy. The controller then samples an action from this distribution and applies it to the system, and it repeats the process from the resulting next state, creating a feedback controller. It warm-starts the optimization process using a modification of the solution at the previous time step.
18
+
19
+ An important design decision is the form of the sampling distribution, which is often simple, e.g. a Gaussian, such that we can efficiently sample and tractably update its parameters. However, this also has drawbacks: without much control over the distribution form, samples often lie in high-cost regions, hindering performance. This can be particularly problematic in complex environments with sparse costs or rewards, as a poorly parameterized distribution may hinder efficient exploration, leading the system into bad local optima. A side effect is that we often require many samples to accomplish the objective, increasing computational requirements. There have been extensions which target more complex distributions, such as Gaussian mixture models [10] and a particle method based on Stein variational gradient descent (SVGD) [11]. However, there is a large amount of structure in the environment that these methods fail to exploit. Instead, an alternative approach is to learn a sampling distribution which can leverage the environmental structure to draw more optimal samples.
20
+
21
+ Prior work on learning MPC sampling distributions generally requires differentiability of the dynamics and cost function [12, 13]. Power and Berenson [14, 15] circumvent this issue by leveraging normalizing flows $\left( \mathrm{{NFs}}\right) \left\lbrack {{16},{17},{18}}\right\rbrack$ , which have a tractable log-likelihood. This property allows them to learn flexible distributions by directly optimizing the MPC cost without requiring differentiability via the likelihood-ratio gradient. However, a limitation of their approach is that all online updates to the distribution and warm-starting between time steps occur entirely in the control space, leaving the latent space fixed. This forces them to apply heuristics to generate samples by combining those from the learned distribution with Gaussian perturbations to the current control-space mean. These restrictions prevent us from fully taking advantage of the learned distribution and potentially throws away useful information. Additionally, their approach does not allow for the incorporation of control constraints directly into the sampling distribution, which is important for real-world robots.
22
+
23
+ Instead, we propose to alter the optimization machinery to operate entirely in the latent space. As the NF latent space follows a simple distribution, it remains feasible to perform MPC updates in this learned space and update the latent distribution online. Specifically, during an episode, the parameters of the latent distribution are updated with MPC while those of the NF remain fixed. Then during training, after each episode, the parameters of the NF are updated. We can frame this setup as a bi-level optimization problem [19] and derive a method for computing an approximate gradient through the latent MPC update. This involves treating MPC as a recurrent network, where the control distribution acts as a form of memory, and unrolling the computation to train with backpropagation-through-time (BPTT). However, it is no longer clear how to warm-start between time steps because there is no clear delineation of time in the latent space. Moreover, the usual method of warm-starting, which simply shifts the current plan forward in time, may be sub-optimal. Therefore, we propose to learn a shift model, which performs all warm-starting operations in the latent space. Finally, we show how to alter the NF architecture to incorporate box constraints on the sampled controls.
24
+
25
+ Contributions: In this work, we build upon recent efforts to learn sampling distributions for MPC with NFs by moving all online parameter updates and warm-start operations into the the latent space. We accomplish this by framing the learning problem as bi-level optimization and derive an approximate gradient through the MPC update of the latent distribution in order to train the network with BPTT. Additionally, we show how to parameterize the flow architecture such that we can incorporate box constraints on the controls. Finally, we empirically evaluate our proposed approach on simulated robotics tasks, including both navigation and manipulation problems. In both cases, we demonstrate its ability to improve performance over all baselines by taking full advantage of the learned latent space. Furthermore, we find that the performance of the controllers with our learned sampling distributions scales more gracefully with a reduction in the number of samples.
26
+
27
+ ## 2 Sampling-Based Model Predictive Control
28
+
29
+ We consider the problem of controlling a discrete-time stochastic dynamical system, which is in state ${x}_{t} \in {\mathbb{R}}^{N}$ at time step $t$ . Upon the application of control ${u}_{t} \in {\mathbb{R}}^{M}$ , the system incurs the instantaneous cost $c\left( {{x}_{t},{u}_{t}}\right)$ and transitions to the next state ${x}_{t + 1}$ according to the dynamics ${x}_{t + 1} \sim f\left( {{x}_{t},{u}_{t}}\right)$ . We wish to design a state-feedback policy ${u}_{t} \sim \pi \left( {\cdot \mid {x}_{t}}\right)$ such that the system achieves good performance over $T$ steps. Instead of finding a single, globally optimal policy, MPC re-optimizes a local policy at each time step by predicting the system’s behavior over a finite horizon $H < T$ using an approximate model $\widehat{f}$ . Specifically, in sampling-based MPC, we generate a sequence of controls by sampling ${\widehat{\mathbf{u}}}_{t} \sim {\pi }_{\theta }\left( \cdot \right)$ , where ${\widehat{\mathbf{u}}}_{t} \triangleq \left( {{\widehat{u}}_{t},{\widehat{u}}_{t + 1},\cdots ,{\widehat{u}}_{t + H - 1}}\right)$ and our policy is parameterized by $\theta \in \Theta$ . We rollout our model starting at our current state ${x}_{t}$ using these sampled controls to get our predicted state sequence ${\widehat{x}}_{t} \triangleq \left( {{\widehat{x}}_{t},{\widehat{x}}_{t + 1},\cdots ,{\widehat{x}}_{t + H}}\right)$ , with ${\widehat{x}}_{t} = {x}_{t}$ . The total trajectory cost is
30
+
31
+ $$
32
+ C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) = \mathop{\sum }\limits_{{h = 0}}^{{H - 1}}c\left( {{\widehat{x}}_{t + h},{\widehat{u}}_{t + h}}\right) + {c}_{\text{term }}\left( {\widehat{x}}_{t + H}\right) , \tag{1}
33
+ $$
34
+
35
+ where ${c}_{\text{term }}\left( \cdot \right)$ is a terminal cost function. We then define a statistic $\widehat{J}\left( {\theta ;{x}_{t}}\right)$ of the cost $C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right)$ and use the rollouts to solve ${\theta }_{t} \leftarrow \arg \mathop{\min }\limits_{{\theta \in \Theta }}\widehat{J}\left( {\theta ;{x}_{t}}\right)$ . After optimizing our parameters, we sample the control sequence ${\widehat{\mathbf{u}}}_{t} \sim {\pi }_{{\theta }_{t}}\left( \cdot \right)$ , apply the first control to the real system (i.e., ${u}_{t} = {\widehat{u}}_{t}$ ), and repeat the process. Because each parameter ${\theta }_{t}$ depends on the current state, MPC effectively yields a state-feedback policy, even though the individual distributions give us an open-loop sequence.
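+
+ The per-time-step procedure described above can be summarized by the following sketch. The callables (`sample_controls`, `rollout`, `trajectory_cost`, `update_params`, `shift`, `step_env`) stand in for the distribution-specific pieces, e.g. the MPPI update in Eq. (5); the sketch only illustrates the control flow, not a particular implementation.
+
+ ```python
+ import numpy as np
+
+ def mpc_episode(x0, T, N, theta, sample_controls, rollout, trajectory_cost,
+                 update_params, shift, step_env):
+     """Generic sampling-based MPC: re-optimize an H-step plan at every step,
+     apply the first control, then warm-start the next optimization."""
+     x = x0
+     for t in range(T):
+         u_samples = sample_controls(theta, N)            # (N, H, M) control sequences
+         costs = np.array([trajectory_cost(rollout(x, u), u) for u in u_samples])
+         theta = update_params(theta, u_samples, costs)   # e.g. the MPPI update
+         u_t = sample_controls(theta, 1)[0][0]            # first control of one sample
+         x = step_env(x, u_t)                             # apply to the real system
+         theta = shift(theta)                             # warm start for step t + 1
+     return x
+ ```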
36
+
37
+ In this paper, we consider a popular sampling-based MPC algorithm known as Model Predictive Path Integral (MPPI) control [7, 8]. MPPI optimizes the exponential utility or risk-seeking objective:
38
+
39
+ $$
40
+ \widehat{J}\left( {\theta ;{x}_{t}}\right) = - \log {\mathbb{E}}_{{\pi }_{\theta },\widehat{f}}\left\lbrack {\exp \left( {-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }\right) }\right\rbrack , \tag{2}
41
+ $$
42
+
43
+ where $\beta > 0$ is a scaling parameter, known as the temperature. As we do not assume that the dynamics or cost function are differentiable, we compute the gradients via the likelihood-ratio derivative:
44
+
45
+ $$
46
+ \nabla \widehat{J}\left( {\theta ;{x}_{t}}\right) = - \frac{{\mathbb{E}}_{{\pi }_{\theta },\widehat{f}}\left\lbrack {{e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }{\nabla }_{\theta }\log {\pi }_{\theta }\left( {\widehat{\mathbf{u}}}_{t}\right) }\right\rbrack }{{\mathbb{E}}_{{\pi }_{\theta },\widehat{f}}\left\lbrack {e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }\right\rbrack }. \tag{3}
47
+ $$
48
+
49
+ In MPPI, the policy is assumed to be a factorized Gaussian of the form
50
+
51
+ $$
52
+ {\pi }_{\theta }\left( \widehat{\mathbf{u}}\right) = \mathop{\prod }\limits_{{h = 0}}^{{H - 1}}{\pi }_{{\theta }_{h}}\left( {\widehat{u}}_{t + h}\right) = \mathop{\prod }\limits_{{h = 0}}^{{H - 1}}\mathcal{N}\left( {{\widehat{u}}_{t + h};{\mu }_{t + h},{\Sigma }_{t + h}}\right) . \tag{4}
53
+ $$
54
+
55
+ Previous work by Wagener et al. [9] has shown that optimizing this objective with dynamic mirror descent (DMD) [20] and approximating with Monte Carlo estimates gives us the MPPI update rule:
56
+
57
+ $$
58
+ {\mu }_{t + h} = \left( {1 - {\gamma }_{t}^{\mu }}\right) {\widetilde{\mu }}_{t + h} + {\gamma }_{t}^{\mu }\mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{\widehat{u}}_{t + h}^{\left( i\right) },\;{w}_{i} = \frac{{e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t}^{\left( i\right) },{\widehat{\mathbf{u}}}_{t}^{\left( i\right) }}\right) }}{\mathop{\sum }\limits_{{j = 1}}^{N}{e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t}^{\left( j\right) },{\widehat{\mathbf{u}}}_{t}^{\left( j\right) }}\right) }} \tag{5}
59
+ $$
60
+
61
+ where ${\widetilde{\mu }}_{t + h}$ is the current mean for each time step and ${\gamma }_{t}^{\mu }$ is the step size. Between time steps of DMD, we get ${\widetilde{\mu }}_{t + h}$ from our previous solution ${\mu }_{t + h}$ by using a shift model ${\widetilde{\mu }}_{t + h} = \Phi \left( {\mu }_{t + h}\right)$ . This shift model aims to predict the optimal decision at the next time step given the previous solution. In the context of MPC, it allows us to warm-start the optimization problem to speed up convergence, as we can only approximately solve the optimization problem due to real-time constraints.
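+
+ As a concrete reference point, the MPPI/DMD mean update in Eq. (5) and the standard control-space shift model can be written as in the sketch below; the variable names and the min-subtraction for numerical stability are choices of this sketch rather than a fixed implementation.
+
+ ```python
+ import numpy as np
+
+ def mppi_update(mu, u_samples, costs, beta=1.0, gamma=0.8):
+     """mu: (H, M) current mean; u_samples: (N, H, M); costs: (N,) rollout costs."""
+     w = np.exp(-(costs - costs.min()) / beta)       # shift by the min for stability
+     w = w / w.sum()                                  # weights w_i of Eq. (5)
+     weighted = np.tensordot(w, u_samples, axes=1)    # sum_i w_i * u^{(i)}
+     return (1.0 - gamma) * mu + gamma * weighted
+
+ def control_space_shift(mu):
+     """Standard warm start: shift the plan one step and append a zero control."""
+     return np.concatenate([mu[1:], np.zeros_like(mu[:1])], axis=0)
+ ```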
62
+
63
+ ## 3 Learning the Sampling Distribution of MPC
64
+
65
+ ### 3.1 Representation of the Learned Distribution
66
+
67
+ Instead of using uninformed sampling distributions, learned distributions can potentially exploit structure in the environment to draw samples which are more likely to be collision-free and close to optimal. However, such learned distributions must be sufficiently expressive in order to better capture near-optimal, potentially multimodal, behavior. They must also be parameterized such that it is tractable to sample from and update online. If the distribution has a large number of parameters, the number of samples required to efficiently update them online may be computationally infeasible. Ideally, the form of the distribution would also admit a closed-form update.
68
+
69
+ One path towards meeting these criteria is to maintain a simple latent distribution from which we can sample, and then learn a transformation of the samples which maps them to a more complex distribution. During training, we learn the parameters of this transformation, which can be conditioned on problem-specific information, such as the starting and goal configurations of the robot and obstacle placements. However, when executing the policy during an episode, the parameters of this learned transformation remain fixed, and instead, we update the parameters of the latent distribution. Concretely, we consider learning a distribution ${\pi }_{\theta ,\lambda }$ which is defined implicitly as follows:
70
+
71
+ $$
72
+ {\widehat{\mathbf{z}}}_{t} \sim {p}_{\theta }\left( \cdot \right) ,\;{\widehat{\mathbf{u}}}_{t} = {h}_{\lambda }\left( {{\widehat{\mathbf{z}}}_{t};c}\right) \tag{6}
73
+ $$
74
+
75
+ where ${\widehat{\mathbf{z}}}_{t} \triangleq \left( {{\widehat{z}}_{t},{\widehat{z}}_{t + 1},\cdots ,{\widehat{z}}_{t + H - 1}}\right)$ , $c$ is a context variable describing the relevant information of the environment, ${p}_{\theta }$ is the latent distribution with parameters $\theta$ , and ${h}_{\lambda }$ is the learned conditional transformation with parameters $\lambda$ . Moving forward, we assume that both ${\widehat{\mathbf{z}}}_{t}$ and ${\widehat{\mathbf{u}}}_{t}$ are stacked as vectors in ${\mathbb{R}}^{MH}$ . If ${p}_{\theta }$ is a Gaussian factorized as in Equation (4) and we assume that ${h}_{\lambda }$ is invertible, we prove in Appendix A.5 that the corresponding DMD update to the latent mean is simply Equation (5), except that we replace the controls in the weighted sum with the latent samples.
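+
+ In code, sampling from the implicit distribution of Eq. (6) and updating the latent mean could look as follows; `flow` stands for the conditional transformation $h_\lambda$ (any callable mapping latent samples and the context to controls), and the weighted sum mirrors Eq. (5) with the latent samples in place of the controls. This is a sketch under those assumptions, not the exact implementation.
+
+ ```python
+ import torch
+
+ def sample_controls_latent(mu_z, sigma_z, flow, context, n_samples):
+     """Draw z ~ N(mu_z, diag(sigma_z^2)) in the latent space and map the samples
+     through the learned transformation, u = h_lambda(z; c)."""
+     z = mu_z + sigma_z * torch.randn(n_samples, *mu_z.shape)
+     return z, flow(z, context)
+
+ def latent_mean_update(mu_z, z_samples, costs, beta=1.0, gamma=0.8):
+     """DMD/MPPI update carried out on the latent mean: identical to the control
+     space update, except the weighted sum is over the latent samples."""
+     w = torch.softmax(-costs / beta, dim=0)                        # (N,)
+     weighted = (w.view(-1, *([1] * mu_z.dim())) * z_samples).sum(0)
+     return (1.0 - gamma) * mu_z + gamma * weighted
+ ```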
76
+
77
+ ### 3.2 Formulating the Learning Problem
78
+
79
+ Learning the distribution ${\pi }_{\theta ,\lambda }$ amounts to solving a bi-level optimization problem [19], in which one optimization problem is nested in another. The lower-level optimization problem involves updating the latent distribution parameters at each time step, ${\theta }_{t}$ , by minimizing the expected cost with DMD. The upper-level optimization problem consists of learning $\lambda$ such that MPC performs well across a number of different environments. To formalize this, first consider that we have some distribution of environments $c \sim \mathcal{C}\left( \cdot \right)$ over which we wish MPC to perform well. For each environment, our system has some conditional initial state distribution ${x}_{0} \sim \rho \left( {\cdot \mid c}\right)$ . The objective we wish to minimize is then
80
+
81
+ $$
82
+ \ell \left( {\mathbf{\theta },\lambda ;c}\right) = {\mathbb{E}}_{{\mathbf{\pi }}_{\mathbf{\theta },\lambda },\rho , f}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{{T - 1}}\widehat{J}\left( {{\theta }_{t},\lambda ;{x}_{t}, c}\right) }\right\rbrack \tag{7}
83
+ $$
84
+
85
+ where $\mathbf{\theta } = \left( {{\theta }_{0},{\theta }_{1},\cdots ,{\theta }_{T - 1}}\right)$ and our cost statistic, $\widehat{J}$ , now depends on $\lambda$ and $c$ as well. This objective measures the expected performance of the intermediate plans produced by MPC along the $T$ steps of the episode. Our desired bi-level optimization problem can be formulated as:
86
+
87
+ $$
88
+ \mathop{\min }\limits_{\lambda }{\mathbb{E}}_{\mathcal{C}}\left\lbrack {\ell \left( {\mathbf{\theta }\left( \lambda \right) ,\lambda ;c}\right) }\right\rbrack \;\text{ s.t. }\mathbf{\theta }\left( \lambda \right) { \approx }_{\lambda }\underset{\mathbf{\theta }}{\arg \min }\ell \left( {\mathbf{\theta },\lambda ;c}\right) \tag{8}
89
+ $$
90
+
91
+ where ${ \approx }_{\lambda }$ indicates that we approximate the solution of the optimization problem with an iterative algorithm that may also be parameterized by $\lambda$ , as the exact minimizer is not available in closed form. Moreover, the notation $\mathbf{\theta }\left( \lambda \right)$ indicates the dependence of the lower-level solution on the upper-level parameters. In our case, we solve the lower-level problem with DMD, where we also parameterize the shift model, ${\Phi }_{\lambda }\left( {\cdot ;c}\right)$ , making it a learnable component and conditioned on $c$ .
92
+
93
+ The normal shift model in MPC simply shifts the control sequence forward one time step and appends a zero or random control at the end. However, because we are performing this update in the latent space, there is no clear delineation between time steps of the latent controls, as they are coupled according to the learned transformation. Therefore, there is no way to easily perform the equivalent shift operation in the latent space. As such, we instead learn this shift model along with the transformation. Moreover, the standard shift described above may not be optimal; by learning it, we may be able to further improve performance. This is especially true because performance hinges greatly on the quality of the shift model, since we only run one iteration of the DMD update.
94
+
95
+ ### 3.3 Parameterizing with Normalizing Flows
96
+
97
+ In order to optimize the upper-level objective in Equation (8) with respect to $\lambda$ , we need to be able to compute the density ${\pi }_{\theta ,\lambda }$ directly. Therefore, we choose to represent ${h}_{\lambda }$ with a normalizing flow (NF) [16, 18, 17], which explicitly learns the density by defining an invertible transformation that maps latent variables to observed data. Generally, we compose a series of component flows together, i.e., ${h}_{\lambda } = {h}_{{\lambda }_{K}} \circ {h}_{{\lambda }_{K - 1}} \circ \cdots \circ {h}_{{\lambda }_{1}}$ , which define a series of intermediate variables ${\widehat{\mathbf{y}}}_{0},\ldots ,{\widehat{\mathbf{y}}}_{K - 1},{\widehat{\mathbf{y}}}_{K}$ , with ${\widehat{\mathbf{y}}}_{0} = \widehat{\mathbf{z}}$ and ${\widehat{\mathbf{y}}}_{K} = \widehat{\mathbf{u}}$ . The log-likelihood of the composed flow is given by:
98
+
99
+ $$
100
+ \log {\pi }_{\theta ,\lambda }\left( {\widehat{\mathbf{u}} \mid c}\right) = \log {p}_{\theta }\left( \widehat{\mathbf{z}}\right) - \mathop{\sum }\limits_{{i = 1}}^{K}\log \left| {\det \frac{\partial {\widehat{\mathbf{y}}}_{i}}{\partial {\widehat{\mathbf{y}}}_{i - 1}}}\right| . \tag{9}
101
+ $$
102
+
103
+ In this work, we make use of the affine coupling layer proposed by Dinh et al. [18] as part of the real non-volume-preserving (RealNVP) flow. The core idea is to split the input $\widehat{\mathbf{u}}$ into two partitions $\widehat{\mathbf{u}} = \left( {{\widehat{\mathbf{u}}}_{{I}_{1}},{\widehat{\mathbf{u}}}_{{I}_{2}}}\right)$ , where ${I}_{1}$ and ${I}_{2}$ are a partition of $\left[ 1, MH\right]$ , and apply
104
+
105
+ $$
106
+ {\widehat{\mathbf{y}}}_{{I}_{1}} = {\widehat{\mathbf{u}}}_{{I}_{1}},\;{\widehat{\mathbf{y}}}_{{I}_{2}} = {\widehat{\mathbf{u}}}_{{I}_{2}} \odot \exp {s}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{{I}_{1}}, c}\right) + {t}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{{I}_{1}}, c}\right) , \tag{10}
107
+ $$
108
+
109
+ where ${s}_{\lambda }$ and ${t}_{\lambda }$ are the scale and translation terms, which are represented with arbitrary neural networks, and $\odot$ is the Hadamard product. This makes computing the log-determinant term in Equation (9) and inverting the flow fast and efficient. Now, in robotics, we often have lower and upper limits on the controls. These are usually enforced in sampling-based MPC by either clamping the control samples or passing them through a scaled sigmoid. However, instead of enforcing the constraints heuristically after sampling, we learn a constrained sampling distribution directly. Since the sigmoid function is invertible and has a tractable log-determinant (shown in Appendix A.7), we can simply append one after ${h}_{{\lambda }_{K}}$ in the NF and scale it by the control limits. This ensures that control constraints are satisfied by design and taken into account while learning the distribution.
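+
+ The sketch below shows one affine coupling layer of the form in Eq. (10) together with a scaled-sigmoid output layer that enforces box constraints on the controls. It is a simplified, unconditional variant: in the actual conditional flow, the context $c$ would additionally be fed to the scale and translation networks, and the hidden sizes are illustrative.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class AffineCoupling(nn.Module):
+     """RealNVP-style coupling: y_I1 = u_I1, y_I2 = u_I2 * exp(s(u_I1)) + t(u_I1)."""
+     def __init__(self, dim, hidden=64):
+         super().__init__()
+         self.d = dim // 2
+         self.s = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
+                                nn.Linear(hidden, dim - self.d))
+         self.t = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
+                                nn.Linear(hidden, dim - self.d))
+
+     def forward(self, u):
+         u1, u2 = u[..., :self.d], u[..., self.d:]
+         s = self.s(u1)
+         y2 = u2 * torch.exp(s) + self.t(u1)
+         log_det = s.sum(-1)                 # log|det| contribution of this layer
+         return torch.cat([u1, y2], dim=-1), log_det
+
+ class BoxConstraint(nn.Module):
+     """Final invertible layer: scaled sigmoid mapping outputs into [lo, hi], so
+     sampled controls satisfy the limits by construction."""
+     def __init__(self, lo, hi):
+         super().__init__()
+         self.register_buffer("lo", torch.as_tensor(lo, dtype=torch.float32))
+         self.register_buffer("hi", torch.as_tensor(hi, dtype=torch.float32))
+
+     def forward(self, y):
+         sig = torch.sigmoid(y)
+         u = self.lo + (self.hi - self.lo) * sig
+         # du/dy = (hi - lo) * sig * (1 - sig), accumulated in log over dimensions
+         log_det = (torch.log(self.hi - self.lo) + torch.log(sig)
+                    + torch.log(1.0 - sig)).sum(-1)
+         return u, log_det
+ ```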
110
+
111
+ ### 3.4 Training the Sampling Distribution
112
+
113
+ Computing gradients through the upper-level objective is not straightforward, as both the expectation and the inner terms of Equation (7) depend on $\lambda$ . In particular, the state distribution itself depends on the NF and the latent shift model. One way around this issue is to consider a modified objective at each batch $d$ :
114
+
115
+ $$
116
+ {\ell }_{d}\left( {\mathbf{\theta },\lambda ;c}\right) = {\mathbb{E}}_{{\mathbf{\pi }}_{\mathbf{\theta },{\lambda }_{d}},\rho , f}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{{T - 1}}\widehat{J}\left( {{\theta }_{t},\lambda ;{x}_{t}, c}\right) }\right\rbrack , \tag{11}
117
+ $$
118
+
119
+ which fixes the outer expectation to be with respect to the current policy. Intuitively, this choice trains the NF to optimize the MPC cost function under the state distribution resulting from the current policy ${\pi }_{\mathbf{\theta },{\lambda }_{d}}$ . We then update the outer expectation distribution at each episode, overcoming the covariate shift problem that would otherwise arise.
120
+
121
+ Now, we only have to focus on computing the gradient ${\nabla }_{\lambda }{\left. \widehat{J}\left( {\theta }_{t}\left( \lambda \right) ,\lambda ;{x}_{t}, c\right) \right| }_{\lambda = {\lambda }_{d}}$ for each time step, which can be computed similarly to Equation (3) and approximated with Monte Carlo sampling:
122
+
123
+ $$
124
+ \nabla \widehat{J}\left( {{\theta }_{t}\left( \lambda \right) ,\lambda ;{x}_{t}, c}\right) \approx - \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{\nabla }_{\lambda }\log {\pi }_{{\theta }_{t}\left( \lambda \right) ,\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) } \mid c}\right) , \tag{12}
125
+ $$
126
+
127
+ where the weights ${w}_{i}$ are defined according to Equation (5). The log-likelihood is given by Equation (9), the gradient of which involves a backward pass through the network ${h}_{\lambda }$ . However, we also have to consider the dependence of the latent distribution parameters $\mathbf{\theta }\left( \lambda \right)$ on $\lambda$ . Therefore, we must backpropagate through the MPC update:
128
+
129
+ $$
130
+ {\mathbf{\mu }}_{t}\left( \lambda \right) = \left( {1 - {\gamma }_{t}^{\mu }}\right) {\widetilde{\mathbf{\mu }}}_{t}\left( \lambda \right) + {\gamma }_{t}^{\mu }\Delta {\mathbf{\mu }}_{t},\;\Delta {\mathbf{\mu }}_{t} = \frac{{\mathbb{E}}_{{\bar{\pi }}_{{\widetilde{\theta }}_{t}\left( \lambda \right) ,\lambda },\widehat{f}}\left\lbrack {{e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }{h}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{t};c}\right) }\right\rbrack }{{\mathbb{E}}_{{\bar{\pi }}_{{\widetilde{\theta }}_{t}\left( \lambda \right) ,\lambda },\widehat{f}}\left\lbrack {e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }\right\rbrack } \tag{13}
131
+ $$
132
+
133
+ where the previous shifted mean ${\widetilde{\mathbf{\mu }}}_{t}\left( \lambda \right)$ is given by the learned latent shift model. To compute the gradient of Equation (13), we must approximate the gradient of $\Delta {\mathbf{\mu }}_{t}$ with respect to $\lambda$ , which we can compute as $\frac{\partial \Delta {\mathbf{\mu }}_{t}}{\partial \lambda } \approx {M}_{1} - {M}_{2}{M}_{3}$ , where we define:
134
+
135
+ $$
136
+ {M}_{1} = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}\left\lbrack {{\nabla }_{\lambda }{h}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) };c}\right) + {h}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) };c}\right) {\nabla }_{\lambda }\log {\pi }_{\widetilde{\theta }\left( \lambda \right) ,\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) } \mid c}\right) }\right\rbrack , \tag{14}
137
+ $$
138
+
139
+ $$
140
+ {M}_{2} = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{h}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) };c}\right) ,\;{M}_{3} = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{\nabla }_{\lambda }\log {\pi }_{\widetilde{\theta }\left( \lambda \right) ,\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) } \mid c}\right) .
141
+ $$
142
+
143
+ The derivation of this approximate gradient can be found in Appendix A.6. Note that computing the gradients ${\nabla }_{\lambda }\log {\pi }_{\widetilde{\theta }\left( \lambda \right) ,\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) } \mid c}\right)$ will also require us to backpropagate through the shift model due to the dependence of $\widetilde{\theta }$ on $\lambda$ . Therefore, even when the step size is set to one, i.e. ${\gamma }_{t}^{\mu } = 1$ , we introduce a form of recurrence between time steps. Please see Appendix A.1 for additional visualizations of the computational graph generated by an episode and further descriptions of the overall algorithm.
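+
+ To make the preceding discussion concrete, the sketch below shows one way the gradients of Equations (12) to (14) can be assembled with automatic differentiation. It is our own illustrative construction, not the authors' code: rollout costs are treated as constants (the simulator need not be differentiable), while the log-densities from Equation (9) and the per-sample terms inside the weighted average of Equation (13) are assumed to remain attached to the flow and shift-model parameters $\lambda$ .
+
+ ```python
+ import torch
+
+ def upper_level_surrogate(costs, log_probs, beta):
+     """Scalar whose autograd gradient w.r.t. lambda reproduces Equation (12).
+     costs: (N,) rollout costs (constants); log_probs: (N,) log pi(u_i | c)."""
+     w = torch.softmax(-costs.detach() / beta, dim=0)  # MPPI weights of Eq. (5)
+     return -(w * log_probs).sum()
+
+ def delta_mu_surrogate(costs, log_probs, per_sample_terms, beta):
+     """Differentiable surrogate for the weighted average defining Delta mu_t.
+
+     The (log_probs - log_probs.detach()) term is zero-valued but injects
+     grad log pi into the normalized weights, so differentiating the returned
+     tensor yields the M1 - M2 * M3 structure of Equation (14).
+     """
+     scores = -costs.detach() / beta + (log_probs - log_probs.detach())
+     w = torch.softmax(scores, dim=0)
+     return (w.unsqueeze(-1) * per_sample_terms).sum(dim=0)
+
+ def latent_mean_update(mu_shifted, delta_mu, gamma):
+     """Convex combination of Equation (13); mu_shifted comes from the learned
+     shift model and must stay in the graph so that BPTT reaches its parameters."""
+     return (1.0 - gamma) * mu_shifted + gamma * delta_mu
+ ```
+
+ Stacking one such update per control step over an episode produces the recurrent computational graph referred to above, which is then unrolled for BPTT.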
144
+
145
+ ## 4 Related Work
146
+
147
+ Multiple works have considered sampling distributions beyond simple Gaussians, such as Gaussian mixture models [10] and a particle method based on Stein variational gradient descent (SVGD) [11]. Additionally, most implementations use heuristics to modify samples and squeeze out additional performance gains [4, 21]. In terms of learning the distribution, Amos and Yarats [12] learn a latent action space for their proposed differentiable cross-entropy method (CEM) controller. However, they require all components of the pipeline to be differentiable and do not consider learning a shift model. Agarwal et al. [13] learn a normalizing flow in the latent space of a variational autoencoder (VAE). Yet, they also require differentiability, use expert demonstrations to learn the latent space of the VAE, and have no means of warm-starting between time steps. Wang and Ba [22] propose to use a learned feedback policy to warm-start MPC, but still rely on Gaussian perturbations to the
148
+
149
+ ![01963fcf-8d52-7514-8fa9-bbe46e22b3fa_5_308_208_1181_320_0.jpg](images/01963fcf-8d52-7514-8fa9-bbe46e22b3fa_5_308_208_1181_320_0.jpg)
150
+
151
+ Figure 1: (a) Success rate and average cost of successful trajectories on the PNGRID task across different numbers of samples. (b) Visualization of trajectories in the PNGRID task across multiple random seeds for a fixed environmental layout.
152
+
153
+ proposed action sequence. The authors also explore performing online planning in the space of the network's parameters, which results in a massive action space that may be hard to scale.
154
+
155
+ Power and Berenson [14, 15] also train a normalizing flow to use as the sampling distribution for MPC, but they do not learn a latent shift model and perform all operations in the control space. Moreover, they mix the latent samples with Gaussian perturbations to the current control-space mean, as they do not update the latent distribution directly. This prevents them from fully taking advantage of the learned distribution and throws away useful information which could potentially improve performance. In fact, we show in Appendix A.4 that the learned shift model contributes significantly to the performance gains. Additionally, a primary focus of their work is on how to handle out-of-distribution (OOD) environments by learning a posterior over environment context variables. We could combine their approach with ours by conditioning the learned shift model on the inferred environment context for improved generalization. Finally, normalizing flows have also been used to improve exploration in RL $\left\lbrack {{23},{24},{25}}\right\rbrack$ by providing a more flexible, and potentially multimodal, distribution. They have also been employed in sampling-based motion planning [26, 27] to provide good proposal configurations to speed up convergence.
156
+
157
+ ## 5 Experimental Results
158
+
159
+ In all experiments, we denote our proposed approach as NFMPC, the baseline MPPI implementation as MPPI, and the method by Power and Berenson [14, 15] as FlowMPPI. Details about the hyper-parameters, implementation, tasks, and training can be found in Appendix A.2. We evaluate on a fixed set of environments, which includes start states, goal locations, and obstacle placements, and run 32 rollouts for each sample amount. Our primary metrics for comparison are the success rate, defined as the percentage of times the task goal was achieved, and the average cost of trajectories which successfully completed the task.
160
+
161
+ ### 5.1 Planar Robot Navigation
162
+
163
+ We begin by applying NFMPC to a planar robot navigation task in which a 2D holonomic point-robot must reach a goal position while avoiding obstacles, which are arranged in a grid (PNGRID). The point-robot has double integrator dynamics with stochasticity on the acceleration commands to create a mismatch between the predictive model used by MPC and the true environment. The starting and goal locations of the robot are randomized in each episode, which lasts for 200 time steps. An episode is considered successful if the agent reaches the goal without colliding with any obstacles. We wish to explore whether NFMPC can surpass the performance of MPPI and FlowMPPI when given access to the same number of samples. Finally, we would like to evaluate how the performance of the learned controllers scales with the number of samples.
164
+
165
+ We quantitatively compare all controllers in Figure 1a and find that NFMPC consistently matches or outperforms both MPPI and FlowMPPI at each sample quantity in terms of both success rate and average trajectory cost of successful trajectories. At 1024 samples, NFMPC achieves a 29% and 17% reduction in cost over MPPI and FlowMPPI, respectively. We also find that NFMPC scales more gracefully than MPPI as the number of samples is reduced. For instance, NFMPC is able
166
+
167
+ ![01963fcf-8d52-7514-8fa9-bbe46e22b3fa_6_306_202_1179_348_0.jpg](images/01963fcf-8d52-7514-8fa9-bbe46e22b3fa_6_306_202_1179_348_0.jpg)
168
+
169
+ Figure 2: Success rate and average cost of successful trajectories on the (a) FRANKA and (b) FRANKAOBSTACLES environments across different numbers of samples.
170
+
171
+ ![01963fcf-8d52-7514-8fa9-bbe46e22b3fa_6_304_660_1182_445_0.jpg](images/01963fcf-8d52-7514-8fa9-bbe46e22b3fa_6_304_660_1182_445_0.jpg)
172
+
173
+ Figure 3: Visualization of a trajectory and top samples from (top) NFMPC, (middle) FlowMPPI, and (bottom) MPPI on the FRANKAOBSTACLES task.
174
+
175
+ to withstand a ${64} \times$ decrease in the number of samples (1024 to 16) while still achieving a ${100}\%$ success rate, although the average trajectory cost increases by 91%. Meanwhile, MPPI at 16 samples has a ${97}\%$ success rate and a ${26}\%$ increase in average trajectory cost over NFMPC with the same number of samples. We found that while FlowMPPI improves over MPPI at higher sample counts, it actually performs significantly worse with fewer samples. At 16 samples, FlowMPPI achieves only a $6\%$ success rate and has over a $2 \times$ worse average trajectory cost than NFMPC. To help understand what NFMPC is doing differently, we superimpose 32 different trajectories with fixed start and goal positions using each controller in Figure 1b. We find that both MPPI and FlowMPPI always select the same path through the environment. Meanwhile, NFMPC is able to discover different paths through the environment, allowing it to better react to the stochastic perturbations that knock it off the current plan and improve performance. Additional visualizations of the resulting trajectories and top samples drawn from the distributions for all controllers can be found in Appendix A.3.
176
+
177
+ ### 5.2 Franka Panda Arm
178
+
179
+ Next, we apply NFMPC to the FRANKA task, which involves controlling a 7 degree-of-freedom (DOF) Franka Panda robot arm and steering it towards a target goal from a fixed starting pose. The goal is randomly selected at the beginning of each episode, which lasts for 600 time steps. An episode is considered successful if the end effector reaches the target position under the time constraints. We plot our quantitative results in Figure 2a and find that NFMPC again consistently matches or outperforms MPPI and FlowMPPI at each sample amount. At 1024 samples, NFMPC achieves a 16% and 20% reduction in cost over MPPI and FlowMPPI, respectively. In fact, FlowMPPI consistently performs worse than MPPI, indicating that conditioning on just the goal location does not help much in this more complex scenario. As before, we find that NFMPC scales more gracefully with reduced samples, achieving a ${100}\%$ success rate with a ${64} \times$ decrease in the number of samples (1024 to 16) while incurring only a ${63}\%$ increase in average trajectory cost. For comparison, MPPI at 16 samples has only a ${34.38}\%$ success rate, and its average trajectory cost increases by over $4 \times$ . And at 16 samples, FlowMPPI always fails to complete the task.
180
+
181
+ To increase the difficulty of the problem, we consider the FRANKAOBSTACLES task, which adds a single pole obstacle that the arm must avoid. The obstacle and goal positions are randomized at the beginning of each episode, and the episode is successful if the end effector reaches the target while the arm avoids collisions. In addition to our normal baselines, we also compare to a controller trained only in environments without obstacles (NFMPC (No Obs)). We compare all four models in Figure 2b. It is also important to note that no controller achieves a ${100}\%$ success rate, as not every randomly generated environment is feasible. At 1024 samples, both NFMPC (Obs) and FlowMPPI achieve a success rate of ${87.50}\%$ , compared to the ${84.38}\%$ success rate of MPPI and NFMPC (No Obs). However, NFMPC (Obs) achieves a 16% reduction in cost over FlowMPPI, illustrating its potential benefits over all the baselines. We again find that both variants of NFMPC scale better with a reduced number of samples. In fact, FlowMPPI scales even worse than MPPI, as it fails in every test environment with 32 or fewer samples. Meanwhile, with a ${64} \times$ decrease in the number of samples (1024 to 16), the success rate of NFMPC (Obs) only drops by 11% to 78.12%. Next, we note that NFMPC (No Obs) at 1024 or 512 samples actually performs worse than MPPI when obstacles are present. This illustrates that the gain in performance is specific to the environments on which the controller is trained. However, this appears to only be true with more samples, as NFMPC (No Obs) still achieves a 75.00% success rate at 16 samples, which is only 4% worse than NFMPC (Obs). Finally, in Appendix A.4, we present a breakdown of the individual cost terms for all models to gain insight into the learned sampling distributions. We also perform an ablation which removes the learned shift model from NFMPC to illustrate that it is a crucial component.
182
+
183
+ ## 6 Limitations
184
+
185
+ A major limitation of our approach is that the learned distribution and shift model are only valid for a fixed horizon length and control dimensionality. Therefore, these components cannot be directly transferred to new robots or to alternate choices of horizon without being retrained. However, this is, in general, true of all prior work which learns the sampling distribution for MPC. This could potentially be remedied in future work by novel architectural innovations and training distributions across both environments and robots. Moreover, the learned sampling distribution is specific to the environmental distributions on which it was trained. As such, we find that it can perform worse than MPPI when transferred to out-of-distribution environments. Prior work by Power and Berenson $\left\lbrack {{14},{15}}\right\rbrack$ remedied this by learning a generative model of environments and using this learned distribution to perform a projection step on the conditional normalizing flow. However, we found in our evaluations that conditioning on environmental information consistently hurt performance. Therefore, we cannot directly apply this technique to our method, and future work needs to be done to improve generalization capabilities. Finally, while our approach does perform better overall on in-distribution environments, figuring out how to successfully incorporate environment information into the NF may open the door to further improvements.
186
+
187
+ ## 7 Conclusion
188
+
189
+ In this paper, we presented a method for learning sampling distributions for MPC with normalizing flows (NFs), which moves all online parameter updates and warm-starting operations into the latent space. We show how to train these sampling distributions by framing the problem as bi-level optimization and deriving an approximate gradient through the MPC update. Additionally, we illustrate how to incorporate box control constraints directly into the NF architecture. Through our empirical evaluations in both simulated navigation and manipulation problems, we demonstrate that our approach is able to surpass the performance of all baselines. Moreover, we find that controllers which move all operations into the latent space are able to scale more gracefully with a reduction in the number of samples. These results indicate the importance of leveraging the latent space in learned sampling distributions for MPC. Finally, because we learn all components of the controller through episodic interactions with the environment, they can potentially be trained to account for the modeling errors in the MPC controller.
190
+
191
+ References
192
+
193
+ [1] P. Abbeel, A. Coates, and A. Y. Ng. Autonomous Helicopter Aerobatics through Apprenticeship Learning. The International Journal of Robotics Research (IJRR), 29(13):1608-1639, 2010.
194
+
195
+ [2] V. Kumar, E. Todorov, and S. Levine. Optimal Control with Learned Local Models: Application to Dexterous Manipulation. In IEEE International Conference on Robotics and Automation (ICRA), pages 378-383. IEEE, 2016.
196
+
197
+ [3] C. Finn and S. Levine. Deep Visual Foresight for Planning Robot Motion. In IEEE International Conference on Robotics and Automation (ICRA), pages 2786-2793. IEEE, 2017.
198
+
199
+ [4] M. Bhardwaj, B. Sundaralingam, A. Mousavian, N. Ratliff, D. Fox, F. Ramos, and B. Boots. STORM: An Integrated Framework for Fast Joint-Space Model-Predictive Control for Reactive Manipulation. arXiv preprint arXiv:2104.13542, 2021.
200
+
201
+ [5] T. Erez, K. Lowrey, Y. Tassa, V. Kumar, S. Kolev, and E. Todorov. An integrated system for real-time Model Predictive Control of humanoid robots. In IEEE-RAS International Conference on Humanoid Robots (Humanoids), pages 292-299. IEEE, 2013.
202
+
203
+ [6] Z. Erickson, H. M. Clever, G. Turk, C. K. Liu, and C. C. Kemp. Deep Haptic Model Predictive Control for Robot-Assisted Dressing. In IEEE International Conference on Robotics and Automation (ICRA), pages 4437-4444. IEEE, 2018.
204
+
205
+ [7] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving with model predictive path integral control. In IEEE International Conference on Robotics and Automation (ICRA), pages 1433-1440. IEEE, 2016.
206
+
207
+ [8] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou. Information theoretic MPC for model-based reinforcement learning. In IEEE International Conference on Robotics and Automation (ICRA), pages 1714-1721. IEEE, 2017.
208
+
209
+ [9] N. Wagener, C.-A. Cheng, J. Sacks, and B. Boots. An Online Learning Approach to Model Predictive Control. arXiv preprint arXiv:1902.08967, 2019.
210
+
211
+ [10] M. Okada and T. Taniguchi. Variational Inference MPC for Bayesian Model-based Reinforcement Learning. In Conference on Robot Learning (CoRL), pages 258-272. PMLR, 2020.
212
+
213
+ [11] A. Lambert, A. Fishman, D. Fox, B. Boots, and F. Ramos. Stein Variational Model Predictive Control. arXiv preprint arXiv:2011.07641, 2020.
214
+
215
+ [12] B. Amos and D. Yarats. The Differentiable Cross-Entropy Method. In International Conference on Machine Learning (ICML), pages 291-302. PMLR, 2020.
216
+
217
+ [13] S. Agarwal, H. Sikchi, C. Gulino, and E. Wilkinson. Imitative Planning using Conditional Normalizing Flow. arXiv preprint arXiv:2007.16162, 2020.
218
+
219
+ [14] T. Power and D. Berenson. Variational Inference MPC for Robot Motion with Normalizing Flows. Advances in Neural Information Processing Systems (NeurIPS) Workshop on Robot Learning: Self-Supervised and Lifelong Learning, 2021.
220
+
221
+ [15] T. Power and D. Berenson. Variational Inference MPC using Normalizing Flows and Out-of-Distribution Projection. arXiv preprint arXiv:2205.04667, 2022.
222
+
223
+ [16] L. Dinh, D. Krueger, and Y. Bengio. Nice: Non-linear Independent Components Estimation. arXiv preprint arXiv:1410.8516, 2014.
224
+
225
+ [17] D. Rezende and S. Mohamed. Variational Inference with Normalizing Flows. In International Conference on Machine Learning (ICML), pages 1530-1538. PMLR, 2015.
226
+
227
+ [18] L. Dinh, J. Sohl-Dickstein, and S. Bengio. Density estimation using Real NVP. arXiv preprint arXiv:1605.08803, 2016.
228
+
229
+ [19] J. F. Bard. Practical Bilevel Optimization: Algorithms and Applications, volume 30. Springer Science & Business Media, 2013.
230
+
231
+ [20] E. Hall and R. Willett. Dynamical Models and Tracking Regret in Online Convex Programming. In International Conference on Machine Learning (ICML), pages 579-587. PMLR, 2013.
232
+
233
+ [21] C. Pinneri, S. Sawant, S. Blaes, J. Achterhold, J. Stueckler, M. Rolinek, and G. Martius. Sample-efficient Cross-Entropy Method for Real-time Planning. arXiv preprint arXiv:2008.06389, 2020.
234
+
235
+ [22] T. Wang and J. Ba. Exploring Model-Based Planning with Policy Networks. arXiv preprint arXiv:1906.08649, 2019.
236
+
237
+ [23] Y. Tang and S. Agrawal. Boosting Trust Region Policy Optimization by Normalizing Flows Policy. arXiv preprint arXiv:1809.10326, 2018.
238
+
239
+ [24] Y. Tang and S. Agrawal. Implicit Policy for Reinforcement Learning. arXiv preprint arXiv:1806.06798, 2018.
240
+
241
+ [25] B. Mazoure, T. Doan, A. Durand, J. Pineau, and R. D. Hjelm. Leveraging exploration in off-policy algorithms via normalizing flows. In Conference on Robot Learning (CoRL), pages 430-444. PMLR, 2020.
242
+
243
+ [26] T. Lai and F. Ramos. Learning to Plan Optimally with Flow-based Motion Planner. arXiv preprint arXiv:2010.11323, 2020.
244
+
245
+ [27] T. Lai, W. Zhi, T. Hermans, and F. Ramos. Parallelised Diffeomorphic Sampling-based Motion Planning. arXiv preprint arXiv:2108.11775, 2021.
246
+
247
+ [28] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Advances in Neural Information Processing Systems (NeurIPS), 32:8026- 8037, 2019.
248
+
249
+ [29] J. H. Halton. Algorithm 247: Radical-inverse quasi-random point sequence. Communications of the ACM, 7(12):701-702, 1964.
250
+
251
+ [30] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer Normalization. arXiv preprint arXiv:1607.06450, 2016.
252
+
253
+ [31] D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/8ktEdb5NHEh/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,189 @@
1
+ § LEARNING SAMPLING DISTRIBUTIONS FOR MODEL PREDICTIVE CONTROL
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Sampling-based methods have become a cornerstone of contemporary approaches to Model Predictive Control (MPC), as they make no restrictions on the differentiability of the dynamics or cost function and are straightforward to parallelize. However, their efficacy is highly dependent on the quality of the sampling distribution itself, which is often assumed to be simple, like a Gaussian. This restriction can result in samples which are far from optimal, leading to poor performance. Recent work has explored improving the performance of MPC by sampling in a learned latent space of controls. However, these methods ultimately perform all MPC parameter updates and warm-starting between time steps in the control space. This requires us to rely on a number of heuristics for generating samples and updating the distribution and may lead to sub-optimal performance. Instead, we propose to carry out all operations in the latent space, allowing us to take full advantage of the learned distribution. Specifically, we frame the learning problem as bi-level optimization and show how to train the controller with backpropagation-through-time. By using a normalizing flow parameterization of the distribution, we can leverage its tractable density to avoid requiring differentiability of the dynamics and cost function. Finally, we evaluate the proposed approach on simulated robotics tasks and demonstrate its ability to surpass the performance of prior methods and scale better with a reduced number of samples.
12
+
13
+ Keywords: Model Predictive Control, Normalizing Flows
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Sequential decision making under uncertainty is a fundamental problem in machine learning and robotics. Recently, model predictive control (MPC) has emerged as a powerful paradigm to tackle such problems on real-world robotic systems. In particular, MPC has been successfully applied to helicopter aerobatics [1], robot manipulation [2, 3, 4], humanoid robot locomotion [5], robot-assisted dressing [6], and aggressive off-road driving [7,8,9]. Sampling-based approaches to MPC are becoming particularly popular due to their simplicity and ability to handle non-differentiable dynamics and cost functions. These methods work by sampling controls from a policy distribution and rolling out an approximate system model using the sampled control sequences. They then use the resulting trajectories to compute an approximate gradient of the cost function and update the policy. The controller then samples an action from this distribution and applies it to the system, and it repeats the process from the resulting next state, creating a feedback controller. It warm-starts the optimization process using a modification of the solution at the previous time step.
18
+
19
+ An important design decision is the form of the sampling distribution, which is often simple, e.g. a Gaussian, such that we can efficiently sample and tractably update its parameters. However, this also has drawbacks: without much control over the distribution form, samples often lie in high-cost regions, hindering performance. This can be particularly problematic in complex environments with sparse costs or rewards, as a poorly parameterized distribution may hinder efficient exploration, leading the system into bad local optima. A side effect is that we often require many samples to accomplish the objective, increasing computational requirements. There have been extensions which target more complex distributions, such as Gaussian mixture models [10] and a particle method based on Stein variational gradient descent (SVGD) [11]. However, there is a large amount of structure in the environment that these methods fail to exploit. Instead, an alternative approach is to learn a sampling distribution which can leverage the environmental structure to draw more optimal samples.
20
+
21
+ Prior work on learning MPC sampling distributions generally requires differentiability of the dynamics and cost function [12, 13]. Power and Berenson [14, 15] circumvent this issue by leveraging normalizing flows $\left( \mathrm{{NFs}}\right) \left\lbrack {{16},{17},{18}}\right\rbrack$ , which have a tractable log-likelihood. This property allows them to learn flexible distributions by directly optimizing the MPC cost without requiring differentiability via the likelihood-ratio gradient. However, a limitation of their approach is that all online updates to the distribution and warm-starting between time steps occur entirely in the control space, leaving the latent space fixed. This forces them to apply heuristics to generate samples by combining those from the learned distribution with Gaussian perturbations to the current control-space mean. These restrictions prevent us from fully taking advantage of the learned distribution and potentially throw away useful information. Additionally, their approach does not allow for the incorporation of control constraints directly into the sampling distribution, which is important for real-world robots.
22
+
23
+ Instead, we propose to alter the optimization machinery to operate entirely in the latent space. As the NF latent space follows a simple distribution, it remains feasible to perform MPC updates in this learned space and update the latent distribution online. Specifically, during an episode, the parameters of the latent distribution are updated with MPC while those of the NF remain fixed. Then during training, after each episode, the parameters of the NF are updated. We can frame this setup as a bi-level optimization problem [19] and derive a method for computing an approximate gradient through the latent MPC update. This involves treating MPC as a recurrent network, where the control distribution acts as a form of memory, and unrolling the computation to train with backpropagation-through-time (BPTT). However, it is no longer clear how to warm-start between time steps because there is no clear delineation of time in the latent space. Moreover, the usual method of warm-starting, which simply shifts the current plan forward in time, may be sub-optimal. Therefore, we propose to learn a shift model, which performs all warm-starting operations in the latent space. Finally, we show how to alter the NF architecture to incorporate box constraints on the sampled controls.
24
+
25
+ Contributions: In this work, we build upon recent efforts to learn sampling distributions for MPC with NFs by moving all online parameter updates and warm-start operations into the latent space. We accomplish this by framing the learning problem as bi-level optimization and deriving an approximate gradient through the MPC update of the latent distribution in order to train the network with BPTT. Additionally, we show how to parameterize the flow architecture such that we can incorporate box constraints on the controls. Finally, we empirically evaluate our proposed approach on simulated robotics tasks, including both navigation and manipulation problems. In both cases, we demonstrate its ability to improve performance over all baselines by taking full advantage of the learned latent space. Furthermore, we find that the performance of the controllers with our learned sampling distributions scales more gracefully with a reduction in the number of samples.
26
+
27
+ § 2 SAMPLING-BASED MODEL PREDICTIVE CONTROL
28
+
29
+ We consider the problem of controlling a discrete-time stochastic dynamical system, which is in state ${x}_{t} \in {\mathbb{R}}^{N}$ at time step $t$ . Upon the application of control ${u}_{t} \in {\mathbb{R}}^{M}$ , the system incurs the instantaneous cost $c\left( {{x}_{t},{u}_{t}}\right)$ and transitions to the next state ${x}_{t + 1}$ according to the dynamics ${x}_{t + 1} \sim f\left( {{x}_{t},{u}_{t}}\right)$ . We wish to design a state-feedback policy ${u}_{t} \sim \pi \left( {\cdot \mid {x}_{t}}\right)$ such that the system achieves good performance over $T$ steps. Instead of finding a single, globally optimal policy, MPC re-optimizes a local policy at each time step by predicting the system’s behavior over a finite horizon $H < T$ using an approximate model $\widehat{f}$ . Specifically, in sampling-based MPC, we generate a sequence of controls by sampling ${\widehat{\mathbf{u}}}_{t} \sim {\pi }_{\theta }\left( \cdot \right)$ , where ${\widehat{\mathbf{u}}}_{t} \triangleq \left( {{\widehat{u}}_{t},{\widehat{u}}_{t + 1},\cdots ,{\widehat{u}}_{t + H - 1}}\right)$ and our policy is parameterized by $\theta \in \Theta$ . We rollout our model starting at our current state ${x}_{t}$ using these sampled controls to get our predicted state sequence ${\widehat{x}}_{t} \triangleq \left( {{\widehat{x}}_{t},{\widehat{x}}_{t + 1},\cdots ,{\widehat{x}}_{t + H}}\right)$ , with ${\widehat{x}}_{t} = {x}_{t}$ . The total trajectory cost is
30
+
31
+ $$
32
+ C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) = \mathop{\sum }\limits_{{h = 0}}^{{H - 1}}c\left( {{\widehat{x}}_{t + h},{\widehat{u}}_{t + h}}\right) + {c}_{\text{ term }}\left( {\widehat{x}}_{t + H}\right) , \tag{1}
33
+ $$
34
+
35
+ where ${c}_{\text{ term }}\left( \cdot \right)$ is a terminal cost function. We then define a statistic $\widehat{J}\left( {\theta ;{x}_{t}}\right)$ on the cost $C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right)$ and use the rollouts to solve ${\theta }_{t} \leftarrow \arg \mathop{\min }\limits_{{\theta \in \Theta }}\widehat{J}\left( {\theta ;{x}_{t}}\right)$ . After optimizing our parameters, we sample the control sequence ${\widehat{\mathbf{u}}}_{t} \sim {\pi }_{{\theta }_{t}}\left( \cdot \right)$ , apply the first control to the real system (i.e. ${u}_{t} = {\widehat{u}}_{t}$ ), and repeat the process. Because each parameter ${\theta }_{t}$ depends on the current state, MPC effectively yields a state-feedback policy, even though the individual distributions give us an open-loop sequence.
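+
+ As a minimal sketch of how a sampled control sequence is scored under Equation (1), the function below rolls out the approximate model; `f_hat`, `cost`, and `term_cost` are placeholders for the learned dynamics, running cost, and terminal cost, and the names are our own.
+
+ ```python
+ import torch
+
+ def trajectory_cost(f_hat, cost, term_cost, x_t, U):
+     """Roll out f_hat from the current state x_t under a sampled control
+     sequence U of shape (H, M) and return the total cost of Equation (1)."""
+     total, x = torch.tensor(0.0), x_t
+     for u in U:
+         total = total + cost(x, u)
+         x = f_hat(x, u)
+     return total + term_cost(x)
+ ```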
36
+
37
+ In this paper, we consider a popular sampling-based MPC algorithm known as Model Predictive Path Integral (MPPI) control [7, 8]. MPPI optimizes the exponential utility or risk-seeking objective:
38
+
39
+ $$
40
+ \widehat{J}\left( {\theta ;{x}_{t}}\right) = - \log {\mathbb{E}}_{{\pi }_{\theta },\widehat{f}}\left\lbrack {\exp \left( {-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }\right) }\right\rbrack , \tag{2}
41
+ $$
42
+
43
+ where $\beta > 0$ is a scaling parameter, known as the temperature. As we do not assume that the dynamics or cost function are differentiable, we compute the gradients via the likelihood-ratio derivative:
44
+
45
+ $$
46
+ \nabla \widehat{J}\left( {\theta ;{x}_{t}}\right) = - \frac{{\mathbb{E}}_{{\pi }_{\theta },\widehat{f}}\left\lbrack {{e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }{\nabla }_{\theta }\log {\pi }_{\theta }\left( {\widehat{\mathbf{u}}}_{t}\right) }\right\rbrack }{{\mathbb{E}}_{{\pi }_{\theta },\widehat{f}}\left\lbrack {e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }\right\rbrack }. \tag{3}
47
+ $$
48
+
49
+ In MPPI, the policy is assumed to be a factorized Gaussian of the form
50
+
51
+ $$
52
+ {\pi }_{\theta }\left( \widehat{\mathbf{u}}\right) = \mathop{\prod }\limits_{{h = 0}}^{{H - 1}}{\pi }_{{\theta }_{h}}\left( {\widehat{u}}_{t + h}\right) = \mathop{\prod }\limits_{{h = 0}}^{{H - 1}}\mathcal{N}\left( {{\widehat{u}}_{t + h};{\mu }_{t + h},{\sum }_{t + h}}\right) . \tag{4}
53
+ $$
54
+
55
+ Previous work by Wagener et al. [9] has shown that optimizing this objective with dynamic mirror descent (DMD) [20] and approximating with Monte Carlo estimates gives us the MPPI update rule:
56
+
57
+ $$
58
+ {\mu }_{t + h} = \left( {1 - {\gamma }_{t}^{\mu }}\right) {\widetilde{\mu }}_{t + h} + {\gamma }_{t}^{\mu }\mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{\widehat{u}}_{t + h}^{\left( i\right) },\;{w}_{i} = \frac{{e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t}^{\left( i\right) },{\widehat{\mathbf{u}}}_{t}^{\left( i\right) }}\right) }}{\mathop{\sum }\limits_{{j = 1}}^{N}{e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t}^{\left( j\right) },{\widehat{\mathbf{u}}}_{t}^{\left( j\right) }}\right) }} \tag{5}
59
+ $$
60
+
61
+ where ${\widetilde{\mu }}_{t + h}$ is the current mean for each time step and ${\gamma }_{t}^{\mu }$ is the step size. Between time steps of DMD, we get ${\widetilde{\mu }}_{t + h}$ from our previous solution ${\mu }_{t + h}$ by using a shift model ${\widetilde{\mu }}_{t + h} = \Phi \left( {\mu }_{t + h}\right)$ . This shift model aims to predict the optimal decision at the next time step given the previous solution. In the context of MPC, it allows us to warm-start the optimization problem to speed up convergence, as we can only approximately solve the optimization problem due to real-time constraints.
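+
+ A minimal sketch of one such MPPI step for the Gaussian policy, using the standard shift model (shift the plan forward and append a zero control) and a scoring routine like the `trajectory_cost` sketch above, is given below; the variable names are illustrative and the covariance is held fixed.
+
+ ```python
+ import torch
+
+ def mppi_step(mu, sigma, x_t, beta, gamma, n_samples, f_hat, cost, term_cost):
+     """One application of Equation (5); mu and sigma have shape (H, M)."""
+     U = mu + sigma * torch.randn(n_samples, *mu.shape)       # sampled control sequences
+     costs = torch.stack([trajectory_cost(f_hat, cost, term_cost, x_t, u) for u in U])
+     w = torch.softmax(-costs / beta, dim=0)                  # exponentiated-cost weights
+     mu_new = (1 - gamma) * mu + gamma * (w[:, None, None] * U).sum(dim=0)
+     u_0 = mu_new[0]                                          # control applied to the system
+     # standard warm-start: shift forward one step and append a zero control
+     mu_shifted = torch.cat([mu_new[1:], torch.zeros(1, mu.shape[1])], dim=0)
+     return mu_shifted, u_0
+ ```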
62
+
63
+ § 3 LEARNING THE SAMPLING DISTRIBUTION OF MPC
64
+
65
+ § 3.1 REPRESENTATION OF THE LEARNED DISTRIBUTION
66
+
67
+ Instead of using uninformed sampling distributions, learned distributions can potentially exploit structure in the environment to draw samples which are more likely to be collision-free and close to optimal. However, such learned distributions must be sufficiently expressive in order to better capture near-optimal, potentially multimodal, behavior. They must also be parameterized such that it is tractable to sample from and update online. If the distribution has a large number of parameters, the number of samples required to efficiently update them online may be computationally infeasible. And ideally, the form of our distribution would be such that we could find a closed-form update.
68
+
69
+ One path towards meeting these criteria is to maintain a simple latent distribution from which we can sample, and then learn a transformation of the samples which maps them to a more complex distribution. During training, we learn the parameters of this transformation, which can be conditioned on problem-specific information, such as the starting and goal configurations of the robot and obstacle placements. However, when executing the policy during an episode, the parameters of this learned transformation remain fixed, and instead, we update the parameters of the latent distribution. Concretely, we consider learning a distribution ${\pi }_{\theta ,\lambda }$ which is defined implicitly as follows:
70
+
71
+ $$
72
+ {\widehat{\mathbf{z}}}_{t} \sim {p}_{\theta }\left( \cdot \right) ,\;{\widehat{\mathbf{u}}}_{t} = {h}_{\lambda }\left( {{\widehat{\mathbf{z}}}_{t};c}\right) \tag{6}
73
+ $$
74
+
75
+ where ${\widehat{z}}_{t} \triangleq \left( {{\widehat{z}}_{t},{\widehat{z}}_{t + 1},\cdots ,{\widehat{z}}_{t + H - 1}}\right)$ , $c$ is a context variable describing the relevant information of the environment, ${p}_{\theta }$ is the latent distribution with parameters $\theta$ , and ${h}_{\lambda }$ is the learned conditional transformation with parameters $\lambda$ . Moving forward, we assume that both ${\widehat{\mathbf{z}}}_{t}$ and ${\widehat{\mathbf{u}}}_{t}$ are stacked as vectors in ${\mathbb{R}}^{MH}$ . If ${p}_{\theta }$ is a Gaussian factorized as in Equation (4) and we assume that ${h}_{\lambda }$ is invertible, we prove in Appendix A.5 that the corresponding DMD update to the latent mean is simply Equation (5), except that we replace the controls in the weighted sum with the latent samples.
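+
+ The sketch below illustrates Equation (6) together with the latent-space update just mentioned: latent samples are drawn from the factorized Gaussian, mapped through the learned transformation, and the weighted sum of Equation (5) is taken over the latent samples rather than the controls. The `flow` callable and all names are illustrative placeholders, not the authors' implementation.
+
+ ```python
+ import torch
+
+ def sample_latent_controls(mu_z, sigma_z, flow, c, n_samples):
+     """Equation (6): draw z ~ p_theta and map it to controls u = h_lambda(z; c)."""
+     z = mu_z + sigma_z * torch.randn(n_samples, mu_z.shape[-1])
+     u = flow(z, c)  # control sequences stacked as vectors in R^{M*H}
+     return z, u
+
+ def latent_dmd_update(mu_z_shifted, z, costs, beta, gamma):
+     """Equation (5) applied in the latent space: the weighted sum runs over the
+     latent samples z instead of the control samples."""
+     w = torch.softmax(-costs / beta, dim=0)
+     return (1 - gamma) * mu_z_shifted + gamma * (w.unsqueeze(-1) * z).sum(dim=0)
+ ```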
76
+
77
+ § 3.2 FORMULATING THE LEARNING PROBLEM
78
+
79
+ Learning the distribution ${\pi }_{\theta ,\lambda }$ amounts to solving a bi-level optimization problem [19], in which one optimization problem is nested in another. The lower-level optimization problem involves updating the latent distribution parameters at each time step, ${\theta }_{t}$ , by minimizing the expected cost with DMD. The upper-level optimization problem consists of learning $\lambda$ such that MPC performs well across a number of different environments. To formalize this, first consider that we have some distribution of environments $c \sim \mathcal{C}\left( \cdot \right)$ over which we wish MPC to perform well. For each environment, our system has some conditional initial state distribution ${x}_{0} \sim \rho \left( {\cdot \mid c}\right)$ . The objective we wish to minimize is then
80
+
81
+ $$
82
+ \ell \left( {\mathbf{\theta },\lambda ;c}\right) = {\mathbb{E}}_{{\mathbf{\pi }}_{\mathbf{\theta },\lambda },\rho ,f}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{{T - 1}}\widehat{J}\left( {{\theta }_{t},\lambda ;{x}_{t},c}\right) }\right\rbrack \tag{7}
83
+ $$
84
+
85
+ where $\mathbf{\theta } = \left( {{\theta }_{0},{\theta }_{1},\cdots ,{\theta }_{T - 1}}\right)$ and our cost statistic, $\widehat{J}$ , now depends on $\lambda$ and $c$ as well. This objective measures the expected performance of the intermediate plans produced by MPC along the $T$ steps of the episode. Our desired bi-level optimization problem can be formulated as:
86
+
87
+ $$
88
+ \mathop{\min }\limits_{\lambda }{\mathbb{E}}_{\mathcal{C}}\left\lbrack {\ell \left( {\mathbf{\theta }\left( \lambda \right) ,\lambda ;c}\right) }\right\rbrack \;\text{ s.t. }\mathbf{\theta }\left( \lambda \right) { \approx }_{\lambda }\underset{\mathbf{\theta }}{\arg \min }\ell \left( {\mathbf{\theta },\lambda ;c}\right) \tag{8}
89
+ $$
90
+
91
+ where ${ \approx }_{\lambda }$ indicates that we approximate the solution of the optimization problem with an iterative algorithm that may also be parameterized by $\lambda$ , as the exact minimizer is not available in closed form. Moreover, the notation $\mathbf{\theta }\left( \lambda \right)$ indicates the dependence of the lower-level solution on the upper-level parameters. In our case, we solve the lower-level problem with DMD, where we also parameterize the shift model, ${\Phi }_{\lambda }\left( {\cdot ;c}\right)$ , making it a learnable component and conditioned on $c$ .
92
+
93
+ The normal shift model in MPC simply shifts the control sequence forward one time step and appends a zero or random control at the end. However, because we are performing this update in the latent space, there is no clear delineation between time steps of the latent controls, as they are coupled according to the learned transformation. Therefore, there is no way to easily perform the equivalent shift operation in the latent space. As such, we instead learn this shift model along with the transformation. Moreover, the standard approach described above may not be optimal. By learning it, we may be able to further improve performance. This is especially true because the performance hinges greatly on the quality of the shift model since we only run one iteration of the DMD update.
94
+
95
+ § 3.3 PARAMETERIZING WITH NORMALIZING FLOWS
96
+
97
+ In order to optimize the upper-level objective in Equation (8) with respect to $\lambda$ , we need to be able to compute the density ${\pi }_{\theta ,\lambda }$ directly. Therefore, we choose to represent ${h}_{\lambda }$ with a normalizing flow (NF) $\left\lbrack {{16},{18},{17}}\right\rbrack$ , which explicitly learns the density by defining an invertible transformation that maps latent variables to observed data. Generally, we compose a series of component flows together, i.e. ${h}_{\lambda } = {h}_{{\lambda }_{K}} \circ {h}_{{\lambda }_{K - 1}} \circ \cdots \circ {h}_{{\lambda }_{1}}$ , which define a series of intermediate variables ${\widehat{\mathbf{y}}}_{0},\ldots ,{\widehat{\mathbf{y}}}_{K - 1},{\widehat{\mathbf{y}}}_{K}$ , with ${\widehat{\mathbf{y}}}_{0} = \widehat{\mathbf{z}}$ and ${\widehat{\mathbf{y}}}_{K} = \widehat{\mathbf{u}}$ . The log-likelihood of the composed flow is given by:
98
+
99
+ $$
100
+ \log {\pi }_{\theta ,\lambda }\left( {\widehat{\mathbf{u}} \mid c}\right) = \log {p}_{\theta }\left( \widehat{\mathbf{z}}\right) - \mathop{\sum }\limits_{{i = 1}}^{K}\log \left| {\det \frac{\partial {\widehat{\mathbf{y}}}_{i}}{\partial {\widehat{\mathbf{y}}}_{i - 1}}}\right| . \tag{9}
101
+ $$
102
+
103
+ In this work, we make use of the affine coupling layer proposed by Dinh et al. [18] as part of the real non-volume-preserving (RealNVP) flow. The core idea is to split the input $\widehat{\mathbf{u}}$ into two partitions $\widehat{\mathbf{u}} = \left( {{\widehat{\mathbf{u}}}_{{I}_{1}},{\widehat{\mathbf{u}}}_{{I}_{2}}}\right)$ , where ${I}_{1}$ and ${I}_{2}$ are a partition of $\left\lbrack {1,{MH}}\right\rbrack$ , and apply
104
+
105
+ $$
106
+ {\widehat{\mathbf{y}}}_{{I}_{1}} = {\widehat{\mathbf{u}}}_{{I}_{1}},\;{\widehat{\mathbf{y}}}_{{I}_{2}} = {\widehat{\mathbf{u}}}_{{I}_{2}} \odot \exp {s}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{{I}_{1}},c}\right) + {t}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{{I}_{1}},c}\right) , \tag{10}
107
+ $$
108
+
109
+ where ${s}_{\lambda }$ and ${t}_{\lambda }$ are the scale and translation terms, which are represented with arbitrary neural networks, and $\odot$ is the Hadamard product. This makes computing the log-determinant term in Equation (9) and inverting the flow fast and efficient. Now, in robotics, we often have lower and upper limits on the controls. These are usually enforced in sampling-based MPC by either clamping the control samples or passing them through a scaled sigmoid. However, instead of enforcing the constraints heuristically after sampling, we learn a constrained sampling distribution directly. Since the sigmoid function is invertible and has a tractable log-determinant (shown in Appendix A.7), we can simply append one after ${h}_{{\lambda }_{K}}$ in the NF and scale it by the control limits. This ensures that control constraints are satisfied by design and taken into account while learning the distribution.
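+
+ For completeness, the tractability of this scaled sigmoid follows directly from the change of variables. Since it acts elementwise, its Jacobian is diagonal, so mapping onto limits $\left\lbrack {\mathbf{u}}_{\min },{\mathbf{u}}_{\max }\right\rbrack$ via ${u}_{j} = {u}_{\min ,j} + \left( {{u}_{\max ,j} - {u}_{\min ,j}}\right) \sigma \left( {y}_{j}\right)$ contributes (in a short derivation of our own, consistent with the result deferred to Appendix A.7)
+
+ $$
+ \log \left| \det \frac{\partial \mathbf{u}}{\partial \mathbf{y}}\right| = \mathop{\sum }\limits_{j}\left\lbrack \log \left( {{u}_{\max ,j} - {u}_{\min ,j}}\right) + \log \sigma \left( {y}_{j}\right) + \log \left( {1 - \sigma \left( {y}_{j}\right) }\right) \right\rbrack
+ $$
+
+ to the sum in Equation (9), just like any other layer.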
110
+
111
+ § 3.4 TRAINING THE SAMPLING DISTRIBUTION
112
+
113
+ Computing gradients through the upper-level objective is not straightforward, as both the expectation and the inner terms of Equation (7) depend on $\lambda$ ; in particular, the state distribution itself depends on the NF and the latent shift model. One way around this issue is to consider a modified objective at each batch $d$ :
114
+
115
+ $$
116
+ {\ell }_{d}\left( {\mathbf{\theta },\lambda ;c}\right) = {\mathbb{E}}_{{\mathbf{\pi }}_{\mathbf{\theta },{\lambda }_{d}},\rho ,f}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{{T - 1}}\widehat{J}\left( {{\theta }_{t},\lambda ;{x}_{t},c}\right) }\right\rbrack , \tag{11}
117
+ $$
118
+
119
+ which fixes the outer expectation to be with respect to the current policy. Intuitively, this choice trains the NF to optimize the MPC cost function under the state distribution resulting from the current policy ${\pi }_{\mathbf{\theta },{\lambda }_{d}}$ . We then update the outer expectation distribution at each episode, overcoming the covariate shift problem that would otherwise arise.
120
+
121
+ Now, we only have to focus on computing the gradient ${\nabla }_{\lambda }{\left. \widehat{J}\left( {\theta }_{t}\left( \lambda \right) ,\lambda ;{x}_{t},c\right) \right| }_{\lambda = {\lambda }_{d}}$ for each time step, which can be computed similarly to Equation (3) and approximated with Monte Carlo sampling:
122
+
123
+ $$
124
+ \nabla \widehat{J}\left( {{\theta }_{t}\left( \lambda \right) ,\lambda ;{x}_{t},c}\right) \approx - \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{\nabla }_{\lambda }\log {\pi }_{{\theta }_{t}\left( \lambda \right) ,\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) } \mid c}\right) , \tag{12}
125
+ $$
126
+
127
+ where the weights ${w}_{i}$ are defined according to Equation (5). The log-likelihood is given by Equation (9), the gradient of which involves a backward pass through the network ${h}_{\lambda }$ . However, we also have to consider the dependence of the latent distribution parameters $\mathbf{\theta }\left( \lambda \right)$ on $\lambda$ . Therefore, we must backpropagate through the MPC update:
128
+
129
+ $$
130
+ {\mathbf{\mu }}_{t}\left( \lambda \right) = \left( {1 - {\gamma }_{t}^{\mu }}\right) {\widetilde{\mathbf{\mu }}}_{t}\left( \lambda \right) + {\gamma }_{t}^{\mu }\Delta {\mathbf{\mu }}_{t},\;\Delta {\mathbf{\mu }}_{t} = \frac{{\mathbb{E}}_{{\bar{\pi }}_{{\widetilde{\theta }}_{t}\left( \lambda \right) ,\lambda },\widehat{f}}\left\lbrack {{e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }{h}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{t};c}\right) }\right\rbrack }{{\mathbb{E}}_{{\bar{\pi }}_{{\widetilde{\theta }}_{t}\left( \lambda \right) ,\lambda },\widehat{f}}\left\lbrack {e}^{-\frac{1}{\beta }C\left( {{\widehat{\mathbf{x}}}_{t},{\widehat{\mathbf{u}}}_{t}}\right) }\right\rbrack } \tag{13}
131
+ $$
132
+
133
+ where the previous shifted mean ${\widetilde{\mathbf{\mu }}}_{t}\left( \lambda \right)$ is given by the learned latent shift model. To compute the gradient of Equation (13), we must approximate the gradient of $\Delta {\mathbf{\mu }}_{t}$ with respect to $\lambda$ , which we can compute as $\frac{\partial \Delta {\mathbf{\mu }}_{t}}{\partial \lambda } \approx {M}_{1} - {M}_{2}{M}_{3}$ , where we define:
134
+
135
+ $$
136
+ {M}_{1} = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}\left\lbrack {{\nabla }_{\lambda }{h}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) };c}\right) + {h}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) };c}\right) {\nabla }_{\lambda }\log {\pi }_{\widetilde{\theta }\left( \lambda \right) ,\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) } \mid c}\right) }\right\rbrack , \tag{14}
137
+ $$
138
+
139
+ $$
140
+ {M}_{2} = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{h}_{\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) };c}\right) ,\;{M}_{3} = \mathop{\sum }\limits_{{i = 1}}^{N}{w}_{i}{\nabla }_{\lambda }\log {\pi }_{\widetilde{\theta }\left( \lambda \right) ,\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) } \mid c}\right) .
141
+ $$
142
+
143
+ The derivation of this approximate gradient can be found in Appendix A.6. Note that computing the gradients ${\nabla }_{\lambda }\log {\pi }_{\widetilde{\theta }\left( \lambda \right) ,\lambda }\left( {{\widehat{\mathbf{u}}}_{t}^{\left( i\right) } \mid c}\right)$ will also require us to backpropagate through the shift model due to the dependence of $\widetilde{\theta }$ on $\lambda$ . Therefore, even when the step size is set to one, i.e. ${\gamma }_{t}^{\mu } = 1$ , we introduce a form of recurrence between time steps. Please see Appendix A.1 for additional visualizations of the computational graph generated by an episode and further descriptions of the overall algorithm.
144
+
145
+ § 4 RELATED WORK
146
+
147
+ Multiple works have considered sampling distributions beyond simple Gaussians, such as Gaussian mixture models [10] and a particle method based on Stein variational gradient descent (SVGD) [11]. Additionally, most implementations use heuristics to modify samples and squeeze out additional performance gains [4, 21]. In terms of learning the distribution, Amos and Yarats [12] learn a latent action space for their proposed differentiable cross-entropy method (CEM) controller. However, they require all components of the pipeline to be differentiable and do not consider learning a shift model. Agarwal et al. [13] learn a normalizing flow in the latent space of a variational autoencoder (VAE). Yet, they also require differentiability, use expert demonstrations to learn the latent space of the VAE, and have no means of warm-starting between time steps. Wang and Ba [22] propose to use a learned feedback policy to warm-start MPC, but still rely on Gaussian perturbations to the
148
+
149
+ <graphics>
150
+
151
+ Figure 1: (a) Success rate and average cost of successful trajectories on the PNGRID task across different numbers of samples. (b) Visualization of trajectories in the PNGRID task across multiple random seeds for a fixed environmental layout.
152
+
153
+ proposed action sequence. The authors also explore performing online planning in the space of the network's parameters, which results in a massive action space that may be hard to scale.
154
+
155
+ Power and Berenson [14, 15] also train a normalizing flow to use as the sampling distribution for MPC, but they do not learn a latent shift model and perform all operations in the control space. Moreover, they mix the latent samples with Gaussian perturbations to the current control-space mean, as they do not update the latent distribution directly. This prevents them from fully taking advantage of the learned distribution and throws away useful information which could potentially improve performance. In fact, we show in Appendix A.4 that the learned shift model contributes significantly to the performance gains. Additionally, a primary focus of their work is on how to handle out-of-distribution (OOD) environments by learning a posterior over environment context variables. We could combine their approach with ours by conditioning the learned shift model on the inferred environment context for improved generalization. Finally, normalizing flows have also been used to improve exploration in RL $\left\lbrack {{23},{24},{25}}\right\rbrack$ by providing a more flexible, and potentially multimodal, distribution. They have also been employed in sampling-based motion planning [26, 27] to provide good proposal configurations to speed up convergence.
156
+
157
+ § 5 EXPERIMENTAL RESULTS
158
+
159
+ In all experiments, we denote our proposed approach as NFMPC, the baseline MPPI implementation as MPPI, and the method by Power and Berenson [14, 15] as FlowMPPI. Details about the hyper-parameters, implementation, tasks, and training can be found in Appendix A.2. We evaluate on a fixed set of environments, which includes start states, goal locations, and obstacle placements, and run 32 rollouts for each sample amount. Our primary metrics for comparison are the success rate, defined as the percentage of times the task goal was achieved, and the average cost of trajectories which successfully completed the task.
160
+
161
+ § 5.1 PLANAR ROBOT NAVIGATION
162
+
163
+ We begin by applying NFMPC to a planar robot navigation task in which a 2D holonomic point-robot must reach a goal position while avoiding obstacles, which are arranged in a grid (PNGRID). The point-robot has double integrator dynamics with stochasticity on the acceleration commands to create a mismatch between the predictive model used by MPC and the true environment. The starting and goal locations of the robot are randomized in each episode, which lasts for 200 time steps. An episode is considered successful if the agent reaches the goal without colliding with any obstacles. We wish to explore whether NFMPC can surpass the performance of MPPI and FlowMPPI when given access to the same number of samples. Finally, we would like to evaluate how the performance of the learned controllers scales with the number of samples.
164
+
165
+ We quantitatively compare all controllers in Figure 1a and find that NFMPC consistently matches or outperforms both MPPI and FlowMPPI at each sample quantity in terms of both success rate and average trajectory cost of successful trajectories. At 1024 samples, NFMPC achieves a 29% and 17% reduction in cost over MPPI and FlowMPPI, respectively. We also find that NFMPC scales more gracefully than MPPI as the number of samples is reduced. For instance, NFMPC is able
166
+
167
+ <graphics>
168
+
169
+ Figure 2: Success rate and average cost of successful trajectories on the (a) FRANKA and (b) FRANKAOBSTACLES environments across different numbers of samples.
170
+
171
+ <graphics>
172
+
173
+ Figure 3: Visualization of a trajectory and top samples from (top) NFMPC, (middle) FlowMPPI, and (bottom) MPPI on the FRANKAOBSTACLES task.
174
+
175
+ to withstand a ${64} \times$ decrease in the number of samples (1024 to 16) while still achieving a ${100}\%$ success rate, although the average trajectory cost increases by 91%. Meanwhile, MPPI at 16 samples has a ${97}\%$ success rate and a ${26}\%$ increase in average trajectory cost over NFMPC with the same number of samples. We found that while FlowMPPI improves over MPPI at higher sample counts, it actually performs significantly worse with fewer samples. At 16 samples, FlowMPPI achieves only a $6\%$ success rate and has over a $2 \times$ worse average trajectory cost than NFMPC. To help understand what NFMPC is doing differently, we superimpose 32 different trajectories with fixed start and goal positions using each controller in Figure 1b. We find that both MPPI and FlowMPPI always select the same path through the environment. Meanwhile, NFMPC is able to discover different paths through the environment, allowing it to better react to the stochastic perturbations that knock it off the current plan and improve performance. Additional visualizations of the resulting trajectories and top samples drawn from the distributions for all controllers can be found in Appendix A.3.
176
+
177
+ § 5.2 FRANKA PANDA ARM
178
+
179
+ Next, we apply NFMPC to the FRANKA task, which involves controlling a 7 degree-of-freedom (DOF) Franka Panda robot arm and steering it towards a target goal from a fixed starting pose. The goal is randomly selected at the beginning of each episode, which lasts for 600 time steps. An episode is considered successful if the end effector reaches the target position under the time constraints. We plot our quantitative results in Figure 2a and find that NFMPC again consistently matches or outperforms MPPI and FlowMPPI at each sample amount. At 1024 samples, NFMPC achieves a 16% and 20% reduction in cost over MPPI and FlowMPPI, respectively. In fact, FlowMPPI consistently performs worse than MPPI, indicating that conditioning on just the goal location does not help much in this more complex scenario. As before, we find that NFMPC scales more gracefully with reduced samples, achieving a ${100}\%$ success rate with a ${64} \times$ decrease in the number of samples (1024 to 16) while incurring only a ${63}\%$ increase in average trajectory cost. For comparison, MPPI at 16 samples has only a ${34.38}\%$ success rate, and its average trajectory cost increases by over $4 \times$ . And at 16 samples, FlowMPPI always fails to complete the task.
180
+
181
+ To increase the difficulty of the problem, we consider the FRANKAOBSTACLES task, which adds a single pole obstacle that the arm must avoid. The obstacle and goal positions are randomized at the beginning of each episode, and the episode is successful if the end effector reaches the target while the arm avoids collisions. In addition to our normal baselines, we also compare to a controller trained only in environments without obstacles (NFMPC (No Obs)). We compare all four models in Figure 2b. It is also important to note that no controller achieves a ${100}\%$ success rate, as not every randomly generated environment is feasible. At 1024 samples, both NFMPC (Obs) and FlowMPPI achieve a success rate of ${87.50}\%$ , compared to the ${84.38}\%$ success rate of MPPI and NFMPC (No Obs). However, NFMPC (Obs) achieves a 16% reduction in cost over FlowMPPI, illustrating its potential benefits over all the baselines. We again find that both variants of NFMPC scale better with a reduced number of samples. In fact, FlowMPPI scales even worse than MPPI, as it fails in every test environment with 32 or fewer samples. Meanwhile, with a ${64} \times$ decrease in the number of samples (1024 to 16), the success rate of NFMPC (Obs) only drops by 11% to 78.12%. Next, we note that NFMPC (No Obs) at 1024 or 512 samples actually performs worse than MPPI when obstacles are present. This illustrates that the gain in performance is specific to the environments on which the controller is trained. However, this appears to hold only at larger sample counts, as NFMPC (No Obs) still achieves a 75.00% success rate at 16 samples, which is only 4% worse than NFMPC (Obs). Finally, in Appendix A.4, we present a breakdown of the individual cost terms for all models to gain insight into the learned sampling distributions. We also perform an ablation that removes the learned shift model from NFMPC to illustrate that it is a crucial component.
182
+
183
+ § 6 LIMITATIONS
184
+
185
+ A major limitation of our approach is that the learned distribution and shift model are only valid for a fixed horizon length and control dimensionality. Therefore, these components cannot be directly transferred to new robots or to alternate choices of horizon without being retrained. However, this is generally true of all prior work that learns the sampling distribution for MPC. It could potentially be remedied in future work through architectural innovations and by training across distributions of both environments and robots. Moreover, the learned sampling distribution is specific to the environmental distributions on which it was trained. As such, we find that it can perform worse than MPPI when transferred to out-of-distribution environments. Prior work by Power and Berenson $\left\lbrack {{14},{15}}\right\rbrack$ remedied this by learning a generative model of environments and using this learned distribution to perform a projection step on the conditional normalizing flow. However, we found in our evaluations that conditioning on environmental information consistently hurt performance. Therefore, we cannot directly apply this technique to our method, and further work is needed to improve generalization. Finally, while our approach performs better overall on in-distribution environments, figuring out how to successfully incorporate environment information into the NF may open the door to further improvements.
186
+
187
+ § 7 CONCLUSION
188
+
189
+ In this paper, we presented a method for learning sampling distributions for MPC with normalizing flows (NFs), which moves all online parameter updates and warm-starting operations into the latent space. We show how to train these sampling distributions by framing the problem as bi-level optimization and deriving an approximate gradient through the MPC update. Additionally, we illustrate how to incorporate box control constraints directly into the NF architecture. Through our empirical evaluations in both simulated navigation and manipulation problems, we demonstrate that our approach is able to surpass the performance of all baselines. Moreover, we find that controllers which move all operations into the latent space are able to scale more gracefully with a reduction in the number of samples. These results indicate the importance of leveraging the latent space in learned sampling distributions for MPC. Finally, because we learn all components of the controller through episodic interactions with the environment, they can potentially be trained to account for the modeling errors in the MPC controller.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/8tmKW-NG2bH/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,239 @@
1
+ # Learning Goal-Conditioned Policies Offline with Self-Supervised Reward Shaping
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Developing agents that can execute multiple skills by learning from pre-collected datasets is an important problem in robotics, where online interaction with the environment is extremely time-consuming. Moreover, manually designing reward functions for every single desired skill is prohibitive. Prior works [1] targeted these challenges by learning goal-conditioned policies from offline datasets without manually specified rewards, through hindsight relabeling [2]. These methods suffer from the issue of sparsity of rewards, and fail at long-horizon tasks. In this work, we propose a novel self-supervised learning phase on the pre-collected dataset to understand the structure and the dynamics of the model, and shape a dense reward function for learning policies offline. We evaluate our method on two continuous control tasks, and show that our model is significantly better than existing related approaches $\left\lbrack {1,2}\right\rbrack$ , especially on tasks that involve long-term planning.
12
+
13
+ Keywords: Offline Reinforcement Learning, Self-Supervised Learning, Goal-Conditioned RL
14
+
15
+ ## 1 Introduction
16
+
17
+ While the goal of realizing general autonomous agents requires mastery of a large and diverse set of skills, achieving this by focusing on each skill individually with standard reinforcement learning (RL) frameworks is prohibitive. This is primarily due to the need for manually designed reward functions and environment interactions for each skill. Unsupervised RL has opened a way for learning agents that can execute diverse behaviours without supervision (i.e., hand-crafted rewards), and then be further adapted to downstream tasks through few-shot or zero-shot generalization. However, learning policies with such methods is impractical with real robots as they require millions of interactions when trained online.
18
+
19
+ Recently, a line of study has emerged that uses pre-collected datasets of environment interactions and trains policies offline (i.e., without additional interactions with the environment). More precisely, given a dataset of reward-free trajectories and a reward function designed to solve a specific task, the agent learns offline by relabeling the transitions in the dataset with the reward function. This setting is particularly relevant in robotics, where data collection is extremely time-consuming: disentangling data collection and policy learning in this context allows for faster policy iteration. However, it would require designing one specific reward function and learning one policy for each task that we want the agent to solve.
20
+
21
+ An important question to scale robot learning is therefore to find ways of learning multi-task policies from already collected datasets. Recent works $\left\lbrack {1,3,4}\right\rbrack$ have targeted this problem from a goal-conditioned perspective: given a dataset of previously collected trajectories, the objective is to learn a goal-oriented agent that can reach any state in the dataset. The advantages of this formulation are two-fold: first, it makes skills easy to interpret, and second, it does not require any adaptation at test time. Making this framework unsupervised requires breaking free from hand-crafted rewards, as proposed by Chebotar et al. [1], who learn goal-conditioned policies offline through hindsight relabeling [2]. However, their approach is subject to the pitfall of learning from sparse rewards, and can be inefficient in long-horizon tasks.
22
+
23
+ In this work, we present a self-supervised reward shaping method that enables building an offline dataset with dense rewards. To this end, we develop a self-supervised learning phase on the pre-collected dataset that aims at learning the structure and dynamics of the environment before training the policy. During this phase, we: (i) train a Reachability Network [5] to estimate the local distance in the state space $\mathcal{S}$ , then (ii) extract a set of representative states that covers $\mathcal{S}$ , and finally (iii) build a graph on this set to approximate global distance in $\mathcal{S}$ . We then use the graph to train the goal-conditioned policy offline in two ways: to compute rewards through shortest-path distance, and to create transitions of intermediate difficulty on the path to the goal.
24
+
25
+ We evaluate our method on complex continuous control tasks, and compare it to previous state-of-the-art offline $\left\lbrack {1,2}\right\rbrack$ approaches. We show that our graph-based reward method learns good goal-conditioned policies by leveraging transitions from a dataset of past experience, without any additional interaction with the environment or manually designed rewards. Moreover, we show that, contrary to prior work [1] that uses datasets collected with a policy trained with supervised rewards, our method allows for learning goal-conditioned policies even from datasets of poor quality, e.g., containing trajectories sampled with a random policy. Our work is thus the first to enable learning goal-conditioned policies from offline datasets without any supervision, as it does not require any hand-crafted reward function at any stage: data collection, policy training, and evaluation.
26
+
27
+ ## 2 Related Work
28
+
29
+ Goal-conditioned RL In its original formulation, goal-conditioned reinforcement learning was tackled by several methods $\left\lbrack {6,7,2,8}\right\rbrack$ . The policy learning process is supervised in these works: the set of evaluation goals is available at train time as well as a shaped reward function that guides the agent to the goal. Several works propose solutions for generating goals automatically when training goal-conditioned policies, including self-play [9, 10, 11], and adversarial student-teacher policies [12]. In a recent line of research, some works [13, 14, 15, 16, 17, 18, 19, 20] focused on learning goal-conditioned policies in an unsupervised fashion. The objective is to train general agents that can reach any goal state in the environment without any supervision (reward, goal-reaching function) at train time. In particular, Mendonca et al. [19] trains a model-based agent that learns to discover novel goals with an explorer model, and reach them with an achiever policy via imagined rollouts.
30
+
31
+ Offline RL The data collection technique is an important aspect when studying the training of policies from pre-collected datasets. In this context, the first works assumed access to policies trained with task-specific rewards [21, 22]. More recently, methods proposed to leverage unsupervised exploration to collect datasets for offline RL [23, 24]. In particular, Yarats et al. [23] create a dataset of pre-collected trajectories, ExoRL, on the DeepMind Control Suite [25] generated without any hand-crafted rewards. Similarly to URLB [26], ExoRL benchmarks a number of exploration algorithms (Pathak et al. [27], Eysenbach et al. [28], Pathak et al. [29], Yarats et al. [30]), and evaluates the performance of a policy trained on the corresponding offline datasets relabeled with task-specific rewards.
32
+
33
+ Multi-task Offline RL Recent works proposed to learn multiple tasks from pre-collected datasets, starting with methods [31] that generate goals to improve the offline data collection process in a self-supervised way. This connection has also been studied in the supervised setting [3, 32] and to learn hierarchical policies [4]. In a setting closely related to our work, Actionable Models [1] considers the problem of learning goal-conditioned policies from offline datasets without interacting with the environment, and with no task-specific rewards. They employ goal-conditioned Q-learning with hindsight relabeling. In contrast to this work, which relies on learning from sparse rewards, we propose to leverage a self-supervised training stage to shape dense rewards.
34
+
35
+ ![01963fc6-ec09-796d-ab3b-30ebfdaf1729_2_362_208_1066_398_0.jpg](images/01963fc6-ec09-796d-ab3b-30ebfdaf1729_2_362_208_1066_398_0.jpg)
36
+
37
+ Figure 1: Visualization of our dense reward shaping method. (a) shows how training labels are generated for training the RNet: given a state ${s}_{i}$ , positive pairs are sampled in the same trajectory within a threshold ${\tau }_{\text{reach }}$ , and the rest of the trajectory forms negative pairs. (b) presents an overview of the graph-building algorithm. Given a transition $\left( {{s}_{i},{s}_{i + 1}}\right) \in \mathcal{D}$ , we add ${s}_{i}$ as a node if it is distant enough from existing nodes in the graph. Moreover, we add an edge in the graph between the corresponding nearest neighbors of ${s}_{i}$ and ${s}_{i + 1}$ .
38
+
39
+ ## 3 Preliminaries
40
+
41
+ Let $\mathcal{E} = \left( {\mathcal{S},\mathcal{A}, P,{p}_{0},\gamma , T}\right)$ define a reward-free Markov decision process (MDP), where $\mathcal{S}$ and $\mathcal{A}$ are state and action spaces respectively, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow {\mathbb{R}}_{ + }$ is a state-transition probability function, ${p}_{0} : \mathcal{S} \rightarrow {\mathbb{R}}_{ + }$ is an initial state distribution, $\gamma$ is a discount factor, and $T$ is the task horizon. In the goal-conditioned setting, the objective is to learn a goal-conditioned policy $\pi : \mathcal{S} \times \mathcal{G} \rightarrow \mathcal{A}$ that maximizes the expectation of the cumulative return over the goal distribution, where $\mathcal{G}$ denotes the goal space. Here, we make the common assumption that states and goals are defined in the same form, i.e., $\mathcal{G} \subset \mathcal{S}$ .
42
+
43
+ We assume that we have access to a dataset $\mathcal{D}$ of pre-collected episodes generated by using any data collection algorithm in $\mathcal{E}$ . Each episode is stored in $\mathcal{D}$ as a series of $\left( {s, a,{s}^{\prime }}\right)$ tuples. In the general offline formulation introduced by Yarats et al. [23], the dataset $\mathcal{D}$ can be relabeled by evaluating any reward function $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ at each tuple in $\mathcal{D}$ , and adding the resulting tuple $\left( {s, a, r\left( {s, a}\right) ,{s}^{\prime }}\right)$ to the relabeled dataset ${\mathcal{D}}_{r}$ . We can extend this protocol to the goal-oriented setting by considering a goal distribution ${p}_{\mathcal{G}}$ in the goal space, and any goal-conditioned reward function $r : \mathcal{S} \times \mathcal{A} \times \mathcal{G} \rightarrow \mathbb{R}$ . Given a tuple $\left( {s, a,{s}^{\prime }}\right)$ in $\mathcal{D}$ , we relabel it by sampling a goal $g \sim {p}_{\mathcal{G}}$ , computing $r\left( {s, a, g}\right)$ and adding the resulting tuple $\left( {s, a, g, r\left( {s, a, g}\right) ,{s}^{\prime }}\right)$ to the relabeled dataset ${\mathcal{D}}_{r,{p}_{\mathcal{G}}}$ . In the unsupervised framework, we assume that the evaluation goal space is unknown, and that we do not have access to any pre-defined reward function, nor any distance function in the state space.
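As a concrete illustration of this relabeling protocol, the sketch below turns reward-free tuples into goal-conditioned ones; the dataset layout, goal sampler, and reward function are placeholders supplied by the practitioner, not components fixed by the paper.

```python
def relabel_dataset(dataset, sample_goal, reward_fn):
    """Relabel reward-free (s, a, s') tuples into goal-conditioned tuples.

    dataset     : iterable of (s, a, s_next) tuples
    sample_goal : callable returning a goal g ~ p_G
    reward_fn   : callable r(s, a, g) -> float
    """
    relabeled = []
    for s, a, s_next in dataset:
        g = sample_goal()
        relabeled.append((s, a, g, reward_fn(s, a, g), s_next))
    return relabeled
```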
44
+
45
+ Once the relabeled dataset ${\mathcal{D}}_{r,{p}_{G}}$ is generated, we can learn a goal-conditioned policy by executing any offline RL algorithm. The algorithm runs completely offline, by sampling tuples from ${\mathcal{D}}_{r,{p}_{G}}$ and without any interaction with the environment. The goal-conditioned policy is then evaluated online in $\mathcal{E}$ on a set of fixed evaluation goals that is not known during training.
46
+
47
+ ## 4 Self-supervised Reward Shaping
48
+
49
+ We now describe our self-supervised reward shaping method for learning a goal-conditioned agent offline on a pre-collected dataset $\mathcal{D}$ . Our method comprises three stages that we will describe below. In the first stage, we train a Reachability Network (RNet) [5] on the trajectories in $\mathcal{D}$ to predict whether two states are reachable from one another. The second stage consists in building a directed graph $\mathcal{M}$ whose nodes are a subset of states in $\mathcal{D}$ , and edges connect reachable states. We employ the RNet as a criterion to avoid adding similar states to $\mathcal{M}$ so that its nodes cover the states in $\mathcal{D}$ uniformly. Subsequently, we determine the reach of all the nodes based on the transitions in $\mathcal{D}$ and the RNet, and connect them with directed edges. The final stage is training the goal-conditioned policy on transitions and goals sampled from $\mathcal{D}$ . It is trained with dense rewards computed as the sum of a global (based on the graph distance in $\mathcal{M}$ ) and local (based on the RNet) distance terms. The important aspect of our method is that the whole training only uses trajectories from the pre-collected dataset $\mathcal{D}$ without running a single action in the environment. We now describe each component in more detail.
50
+
51
+ ### 4.1 Reachability network
52
+
53
+ In order to learn a good local distance between states in $\mathcal{D}$ , we adopt an asymmetric version of the Reachability Network (RNet) [5]. The general idea of RNet is to approximate the distance between states in the environment by the average number of steps it takes for a random policy to go from one state to another. We adapted the original formulation with two modifications: first, we use exploration trajectories from $\mathcal{D}$ instead of random trajectories, and second, we leverage the temporal direction because a state can be reachable from another without the converse being true. Let $\left( {{s}_{1}^{a},\ldots ,{s}_{T}^{a}}\right)$ denote a trajectory in $\mathcal{D}$ , where $a$ is a trajectory index. We define a reachability label ${y}_{ij}^{ab}$ for each pair of observations $\left( {{s}_{i}^{a},{s}_{j}^{b}}\right)$ by
54
+
55
+ $$
56
+ {y}_{ij}^{ab} = \left\{ {\begin{array}{ll} 1 & \text{ if }a = b\text{ and }0 \leq j - i \leq {\tau }_{\text{reach }}, \\ 0 & \text{ otherwise,} \end{array}\;}\right. \text{ for }1 \leq i, j \leq T, \tag{1}
57
+ $$
58
+
59
+ where the reachability threshold ${\tau }_{\text{reach }}$ is a hyperparameter. The reachability label is equal to 1 iff the states are in the same trajectory and the number of steps from ${s}_{i}^{a}$ to ${s}_{j}^{b}$ is below ${\tau }_{\text{reach }}$ . Note that ${y}_{ij}^{ab} \neq {y}_{ji}^{ab}$ . We train a siamese neural network $R$ , the RNet, to predict the reachability label ${y}_{ij}^{ab}$ from a pair of observations $\left( {{s}_{i}^{a},{s}_{j}^{b}}\right)$ in $\mathcal{D}$ . The RNet consists of an embedding network $g$ , and a fully-connected network $f$ to compare the embeddings, i.e.,
60
+
61
+ $$
62
+ R\left( {{s}_{i}^{a},{s}_{j}^{b}}\right) = \sigma \left\lbrack {f\left( {g\left( {s}_{i}^{a}\right) , g\left( {s}_{j}^{b}\right) }\right) }\right\rbrack , \tag{2}
63
+ $$
64
+
65
+ where $\sigma$ is a sigmoid function. A higher $R$ value indicates that ${s}_{j}^{b}$ is easily reachable from ${s}_{i}^{a}$ by a random walk, so the two states can be considered close in the environment. More precisely, $R$ takes values in (0,1) and ${s}^{\prime }$ is reachable from $s$ if $R\left( {s,{s}^{\prime }}\right) \geq {0.5}$ . RNet is trained in a self-supervised fashion, as the ground-truth labels needed to train the network are generated automatically.
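A minimal sketch of the siamese RNet and of the self-supervised label generation in Equation (1) is given below; the state dimensionality, embedding size, and MLP widths are illustrative assumptions rather than the architecture used by the authors.

```python
import torch
import torch.nn as nn

class RNet(nn.Module):
    """Siamese reachability network: R(s_i, s_j) = sigmoid(f(g(s_i), g(s_j)))."""

    def __init__(self, state_dim, embed_dim=64):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                               nn.Linear(128, embed_dim))        # embedding network g
        self.f = nn.Sequential(nn.Linear(2 * embed_dim, 128), nn.ReLU(),
                               nn.Linear(128, 1))                # comparator network f

    def forward(self, s_i, s_j):
        z = torch.cat([self.g(s_i), self.g(s_j)], dim=-1)
        return torch.sigmoid(self.f(z)).squeeze(-1)

def reachability_label(i, j, same_trajectory, tau_reach):
    """Label y_ij from Eq. (1): 1 iff both states come from the same trajectory
    and 0 <= j - i <= tau_reach, else 0."""
    return 1 if same_trajectory and 0 <= j - i <= tau_reach else 0
```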
66
+
67
+ ### 4.2 Directed graph
68
+
69
+ In the next phase, we use trajectories in $\mathcal{D}$ to build a directed graph $\mathcal{M}$ to capture high-level dynamics of the environment. We want the nodes of $\mathcal{M}$ to evenly represent the states in $\mathcal{D}$ . This is achieved by filtering the states in $\mathcal{D}$ : a state is added to $\mathcal{M}$ only if it is distant enough from all the other nodes in $\mathcal{M}$ . More precisely, a state $s \in \mathcal{D}$ is added to $\mathcal{M}$ if and only if
70
+
71
+ $$
72
+ R\left( {s, n}\right) < {0.5}\text{and}R\left( {n, s}\right) < {0.5}\text{, for all}n \in \mathcal{M}\text{.} \tag{3}
73
+ $$
74
+
75
+ Note that we require novelty in both directions. This filtering avoids redundancy by preventing similar states from being added to the memory. It also has a balancing effect because it limits the number of states that can be added from a given area, even if the agent visits it many times in $\mathcal{D}$ .
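The node-filtering rule of Equation (3) might be implemented as in the sketch below; `rnet` is assumed to be a trained reachability network returning values in (0, 1), and graph nodes are kept as a plain list of states.

```python
def maybe_add_node(node_states, s, rnet, thresh=0.5):
    """Add s as a new node only if it is not reachable, in either direction,
    from any existing node (Eq. 3)."""
    for n in node_states:
        if rnet(s, n) >= thresh or rnet(n, s) >= thresh:
            return False  # s is too similar to an existing node
    node_states.append(s)
    return True
```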
76
+
77
+ Once the nodes are selected, we connect pairs that are reachable from one another. To this end, we employ trajectories in $\mathcal{D}$ because they contain actual feasible transitions. Given a transition ${s}_{i} \rightarrow {s}_{j}$ in $\mathcal{D}$ , we add an edge ${n}_{i} \rightarrow {n}_{j}$ if ${s}_{i}$ can be reached from node ${n}_{i}$ and node ${n}_{j}$ can be reached from ${s}_{j}$ . This way, we have a chain ${n}_{i} \rightarrow {s}_{i} \rightarrow {s}_{j} \rightarrow {n}_{j}$ and can assume ${n}_{j}$ is reachable from ${n}_{i}$ . Concretely, we select node ${n}_{i}$ to be the incoming nearest neighbor $\left( {\mathrm{{NN}}}_{\mathrm{{in}}}\right)$ to ${s}_{i}$ , and ${n}_{j}$ to be the outgoing nearest neighbor $\left( {\mathrm{{NN}}}_{\text{out }}\right)$ from ${s}_{j}$ , i.e.,
78
+
79
+ $$
80
+ {n}_{i} = {\mathrm{{NN}}}_{\text{in }}\left( {s}_{i}\right) = \mathop{\operatorname{argmax}}\limits_{{n \in \mathcal{M}}}R\left( {n,{s}_{i}}\right) ,\;{n}_{j} = {\mathrm{{NN}}}_{\text{out }}\left( {s}_{j}\right) = \mathop{\operatorname{argmax}}\limits_{{n \in \mathcal{M}}}R\left( {{s}_{j}, n}\right) . \tag{4}
81
+ $$
82
+
83
+ By performing this operation over all the transitions in $\mathcal{D}$ , we turn $\mathcal{M}$ into a directed graph where edges represent reachability from one node to another.
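The edge-building step could then look like the following sketch; `nn_in` and `nn_out` implement Equation (4) as an argmax of the RNet over graph nodes, and using a networkx DiGraph over node indices is an assumed implementation choice rather than the authors' data structure.

```python
import networkx as nx

def nn_in(s, node_states, rnet):
    """Index of the node most likely able to reach s: argmax_n R(n, s) (Eq. 4)."""
    return max(range(len(node_states)), key=lambda k: rnet(node_states[k], s))

def nn_out(s, node_states, rnet):
    """Index of the node most reachable from s: argmax_n R(s, n) (Eq. 4)."""
    return max(range(len(node_states)), key=lambda k: rnet(s, node_states[k]))

def build_graph(node_states, transitions, rnet):
    """Add one directed edge per observed transition (s_i -> s_j) in the dataset."""
    graph = nx.DiGraph()
    graph.add_nodes_from(range(len(node_states)))
    for s_i, s_j in transitions:
        i = nn_in(s_i, node_states, rnet)    # node that can reach s_i
        j = nn_out(s_j, node_states, rnet)   # node reachable from s_j
        graph.add_edge(i, j)                 # chain n_i -> s_i -> s_j -> n_j
    return graph
```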
84
+
85
+ ### 4.3 Distance function for policy training
86
+
87
+ RNet predicts the reachability between ${s}_{i}$ and ${s}_{j}$ so we can directly use it as a distance metric
88
+
89
+ $$
90
+ {d}_{l}\left( {{s}_{i},{s}_{j}}\right) = 1 - R\left( {{s}_{i},{s}_{j}}\right) ,\;\forall {s}_{i},{s}_{j} \in \mathcal{S}. \tag{5}
91
+ $$
92
+
93
+ However, this reachability metric is confined within a certain threshold, so there is no guarantee that the RNet predictions will have good global properties.
94
+
95
+ In contrast, the directed graph $\mathcal{M}$ captures high-level global dynamics of the environment. We can easily derive a distance function ${d}_{\mathcal{M}}\left( {{n}_{i},{n}_{j}}\right)$ between any pair of nodes in $\mathcal{M}$ by computing the length of the shortest path in this graph, provided the graph is connected. In practice, we can use a trick to connect the graph if necessary, by adding an edge between the pair of nodes from different connected components with maximum RNet value. Moreover, we can extend this distance ${d}_{\mathcal{M}}$ to a global distance function ${d}_{g}$ in the state space $\mathcal{S}$ by finding, for any pair ${s}_{i}$ and ${s}_{j}$ in $\mathcal{S}$ their corresponding nearest neighbors in the proper direction. More precisely,
96
+
97
+ $$
98
+ {d}_{g}\left( {{s}_{i},{s}_{j}}\right) = {d}_{\mathcal{M}}\left( {{\mathrm{{NN}}}_{\text{out }}\left( {s}_{i}\right) ,{\mathrm{{NN}}}_{\text{in }}\left( {s}_{j}\right) }\right) ,\;\forall {s}_{i},{s}_{j} \in \mathcal{S}. \tag{6}
99
+ $$
100
+
101
+ The distance ${d}_{g}$ between two states in the state space becomes the length of the shortest path between their respective closest nodes in the graph. This process propagates the good local properties of RNet to get a well-shaped distance function for states that are further away. Since ${d}_{g}$ captures global distances while ${d}_{l}$ captures local fine-grained distance, we use their combination as a final distance function: $\forall {s}_{i},{s}_{j} \in \mathcal{S},\;d\left( {{s}_{i},{s}_{j}}\right) = {d}_{g}\left( {{s}_{i},{s}_{j}}\right) + {d}_{l}\left( {{s}_{i},{s}_{j}}\right)$ .
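Putting the two terms together, the sketch below evaluates $d = d_g + d_l$ with a shortest-path query on the directed graph; it reuses the `nn_in`/`nn_out` helpers sketched earlier, and it assumes any disconnected components have already been bridged as described in the text.

```python
import networkx as nx

def global_distance(s_i, s_j, graph, node_states, rnet):
    """d_g(s_i, s_j): shortest-path length from NN_out(s_i) to NN_in(s_j) (Eq. 6)."""
    return nx.shortest_path_length(graph,
                                   nn_out(s_i, node_states, rnet),
                                   nn_in(s_j, node_states, rnet))

def distance(s_i, s_j, graph, node_states, rnet):
    """Combined reward-shaping distance d = d_g + d_l."""
    d_l = 1.0 - rnet(s_i, s_j)                                  # local term (Eq. 5)
    d_g = global_distance(s_i, s_j, graph, node_states, rnet)   # global term (Eq. 6)
    return d_g + d_l
```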
102
+
103
+ ### 4.4 Policy training
104
+
105
+ The last phase of our method is training the goal-conditioned policy offline. Here, we create an offline replay buffer $\mathcal{B}$ that is filled with relabeled data. We randomly sample a transition $\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ from $\mathcal{D}$ as well as a goal state $g$ and relabel the transition with reward ${r}_{t} = - d\left( {{s}_{t + 1}, g}\right)$ . We then push the relabeled transition $\left( {{s}_{t},{a}_{t}, g,{r}_{t},{s}_{t + 1}}\right)$ to $\mathcal{B}$ . In order to create a curriculum that artificially guides the agent towards the goal, we experimented with two different transition augmentation techniques described below.
106
+
107
+ Sub-goal augmentation. Let $\left( {{s}_{t},{a}_{t}, g,{r}_{t},{s}_{t + 1}}\right)$ denote a relabeled transition and $\left( {{n}_{0},\ldots ,{n}_{P - 1}}\right)$ the shortest path in the graph $\mathcal{M}$ between ${n}_{0} = {\mathrm{{NN}}}_{\text{out }}\left( {s}_{t}\right)$ and ${n}_{P - 1} = {\mathrm{{NN}}}_{\text{in }}\left( g\right)$ . The augmentation technique consists of adding to the replay buffer every transition $\left( {{s}_{t},{a}_{t},{n}_{i},{r}_{t}^{i},{s}_{t + 1}}\right)$ for all $i \in \{ 0,\ldots ,P - 1\}$ , where ${r}_{t}^{i} = - d\left( {{s}_{t + 1},{n}_{i}}\right)$ . In other words, given a transition $\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ and a goal $g$ from $\mathcal{D}$ , we push to the replay buffer a set of relabeled transitions with all goals on the shortest path from ${s}_{t}$ to $g$ (and their corresponding rewards).
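A sketch of sub-goal augmentation under the same assumed data structures follows; it reuses the `nn_in`, `nn_out`, and `distance` helpers from the earlier sketches, and relabels the transition with every node on the shortest path as an intermediate goal.

```python
import networkx as nx

def subgoal_augment(transition, goal, graph, node_states, rnet):
    """Relabel one transition with each node on the shortest path to the goal,
    using reward -d(s_{t+1}, subgoal)."""
    s_t, a_t, s_next = transition
    path = nx.shortest_path(graph,
                            nn_out(s_t, node_states, rnet),
                            nn_in(goal, node_states, rnet))
    augmented = []
    for n in path:
        subgoal = node_states[n]
        r = -distance(s_next, subgoal, graph, node_states, rnet)  # d from Sec. 4.3
        augmented.append((s_t, a_t, subgoal, r, s_next))
    return augmented
```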
108
+
109
+ Edge augmentation. Similarly to the subgoal augmentation technique, we consider a relabeled transition $\left( {{s}_{t},{a}_{t}, g,{r}_{t},{s}_{t + 1}}\right)$ and the shortest path $\left( {{n}_{0},\ldots ,{n}_{P - 1}}\right)$ between the corresponding nearest neighbors of ${s}_{t}$ and $g$ . This time, we keep the same goal $g$ for every augmented transition, but for every edge $\left( {{n}_{i - 1},{n}_{i}}\right) , i \in \{ 1,\ldots ,P - 1\}$ , we add the relabeled transition $\left( {{s}_{t}^{i},{a}_{t}^{i}, g,{r}_{t}^{i},{s}_{t + 1}^{i}}\right)$ to $\mathcal{B}$ where $\left( {{s}_{t}^{i},{a}_{t}^{i},{s}_{t + 1}^{i}}\right) \in \mathcal{D},{\mathrm{{NN}}}_{\text{out }}\left( {s}_{t}^{i}\right) = {n}_{i - 1},{\mathrm{{NN}}}_{\text{in }}\left( {s}_{t + 1}^{i}\right) = {n}_{i}$ and ${r}_{t}^{i} = - d\left( {{s}_{t}^{i}, g}\right)$ . Note that the existence of such a transition in $\mathcal{D}$ is guaranteed by construction: an edge is added to the graph from one node to another iff there exists a transition in $\mathcal{D}$ whose corresponding nearest neighbors are these two nodes (in the same order).
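Edge augmentation can be sketched similarly; here `edge_to_transition` is an assumed lookup, built while constructing the graph, that maps each edge to one dataset transition whose nearest neighbors are its endpoints (such a transition exists by construction), and `distance` is again the combined distance from the earlier sketch.

```python
def edge_augment(goal, path, edge_to_transition, graph, node_states, rnet):
    """For every edge (n_{i-1}, n_i) on the shortest path, replay one stored
    transition associated with that edge, relabeled with the original goal."""
    augmented = []
    for k in range(1, len(path)):
        s, a, s_next = edge_to_transition[(path[k - 1], path[k])]
        r = -distance(s, goal, graph, node_states, rnet)  # r_t^i = -d(s_t^i, g)
        augmented.append((s, a, goal, r, s_next))
    return augmented
```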
110
+
111
+ Once the replay buffer $\mathcal{B}$ is filled, the goal-conditioned policy can be trained using any off-policy algorithm. In our implementation, we chose Soft Actor-Critic [33].
112
+
113
+ ## 5 Experiments
114
+
115
+ ### 5.1 Environments & data collection
116
+
117
+ We perform experiments on two continuous control tasks with state-based inputs.
118
+
119
+ ![01963fc6-ec09-796d-ab3b-30ebfdaf1729_5_554_213_683_284_0.jpg](images/01963fc6-ec09-796d-ab3b-30ebfdaf1729_5_554_213_683_284_0.jpg)
120
+
121
+ Figure 2: (a) UMaze environment, and (b) Comparison of the performance of the goal-conditioned policy trained with RNet and Graph-based rewards on the UMaze
122
+
123
+ UMaze [34]. The first environment, shown in Figure 2a, is a two-dimensional U-shaped maze with a continuous action space and a fixed initial position. We generate the training data for this environment by deploying a random policy with a randomized start position in the maze. We collect ${10}\mathrm{k}$ trajectories of length $1\mathrm{k}$ . We evaluate the goal-conditioned agent trained offline on this task by giving it a goal sampled at random in the environment and evaluating the final Euclidean distance to the goal.
124
+
125
+ RoboYoga Walker [19]. Introduced by Mendonca et al. [19], the challenging RoboYoga benchmark is based on the Walker domain of the DeepMind Control Suite [25], and consists of 12 goals that correspond to body poses inspired by yoga (e.g., lying down, raising one leg, or balancing). We consider the state-based version of the task, and use the task-agnostic dataset from Yarats et al. [23] generated with an unsupervised exploration policy. It contains ${10}\mathrm{k}$ trajectories of length $1\mathrm{k}$ obtained by deploying the "proto" [30] algorithm in the Walker domain. Success is assessed by the pose of the humanoid at the end of the episode.
126
+
127
+ ### 5.2 Ablation & design choices
128
+
129
+ In this section, we explain the design choices of our graph-based reward shaping method. We first show that the graph structure is necessary for long-term planning. Then, we explain the importance of the directedness of the graph on tasks with asymmetric behaviours. Finally, we show the impact of the transition augmentation techniques when relabeling data for the goal-conditioned policy.
130
+
131
+ Necessity of graph-based rewards. An important component of our method is the construction of a graph $\mathcal{M}$ that allows for computing a distance with good global properties in the state space. In order to empirically validate this hypothesis, we performed a comparison between the goal-conditioned policy trained with RNet rewards (i.e., by using the distance ${d}_{l}$ from equation (5)) and the one trained with both distance terms as reward. We perform this experiment on the UMaze environment, and show results in Figure 2b. We see that the model trained with graph rewards outperforms the one trained with RNet rewards overall, particularly for faraway goals (i.e., rooms 3 & 4). We also notice that the model trained with RNet rewards is slightly better for goals that are close to the initial position, which highlights the fact that RNet is good for estimating local distances. Figure 3 shows a qualitative visualization of the RNet and graph-based rewards. We see that the RNet distance does not properly capture global distance, as it shows low values between states in the first and fourth rooms.
132
+
133
+ Importance of graph directedness. In a second experiment, we investigate the importance of the asymmetry of the RNet and the directedness of the graph. To do so, we implement an undirected version of our method where the RNet is symmetric and the graph is undirected. All other components of our method remain the same. We first look at some qualitative visualizations of the shortest path in the undirected and directed graphs in the RoboYoga task, as shown in Figure 4. In the undirected case, the humanoid defies the laws of gravity and will be encouraged to get on its head from the back, which might be extremely difficult, or even infeasible. In the directed case, the shortest path
134
+
135
+ ![01963fc6-ec09-796d-ab3b-30ebfdaf1729_6_550_206_707_290_0.jpg](images/01963fc6-ec09-796d-ab3b-30ebfdaf1729_6_550_206_707_290_0.jpg)
136
+
137
+ Figure 3: Heatmap of rewards computed with the RNet (a) and graph (b) distances.
138
+
139
+ ![01963fc6-ec09-796d-ab3b-30ebfdaf1729_6_309_575_1182_270_0.jpg](images/01963fc6-ec09-796d-ab3b-30ebfdaf1729_6_309_575_1182_270_0.jpg)
140
+
141
+ Figure 4: Shortest Path visualization on the RoboYoga Walker task for undirected (top) and directed (bottom) graphs.
142
+
143
+ encourages the agent to first get back on its legs and then lean forward. In this example, gravity makes the dynamics of the environment asymmetric and not fully reversible, which justifies the directed formulation described in our method.
144
+
145
+ Transition sampling strategy. As a final ablation study, we study the utility of the transition augmentation techniques described in subsection 4.4. We run experiments with the four possible variants of our method: (i) without any augmentation, (ii) with edge augmentation only, (iii) with subgoal augmentation only, and (iv) with both augmentations. We execute this experiment on the RoboYoga task, and show results in Figure 6b. We see that both of the augmentation techniques improve the performance of the goal-conditioned policy, with subgoal augmentation showing the greater improvement. Moreover, we note that combining both augmentations improves the performance further. For the remainder of the experiments, we will always use both augmentation techniques.
146
+
147
+ ### 5.3 Comparison to prior work
148
+
149
+ Baselines. We compare our method to prior works on unsupervised goal-conditioned policy learning. We perform an apples-to-apples comparison by implementing the baselines using the same learning framework as our method, and simply changing the reward relabeling process. We compare with the following baselines:
150
+
151
+ - Hindsight Experience Replay [HER] [2] A re-implementation of the standard unsupervised RL technique, adapted to the offline setting. More precisely, we relabel sub-trajectories from $\mathcal{D}$ with a sparse reward, which is equal to 1 only for the final transition of the sub-trajectory, and 0 everywhere else. Following Chebotar et al. [1], we also label sub-trajectories with goals sampled at random in $\mathcal{D}$ and zero reward.
152
+
153
+ - HER [2] with random negative action A variant of HER where, for a transition in $\mathcal{D}$ , we sample an action uniformly at random in the action space and label it with zero reward. This helps mitigate the problem of over-estimation of the Q-values for unseen actions mentioned in Chebotar et al. [1].
154
+
155
+ - Actionable Models [1] A method based on goal-conditioned Q-learning with hindsight relabeling. We re-implemented the goal relabeling procedure that uses the Q-value at the
156
+
157
+ ![01963fc6-ec09-796d-ab3b-30ebfdaf1729_7_318_197_1161_800_0.jpg](images/01963fc6-ec09-796d-ab3b-30ebfdaf1729_7_318_197_1161_800_0.jpg)
158
+
159
+ Figure 6: Performance on the RoboYoga Walker task
160
+
161
+ final state of sub-trajectories in $\mathcal{D}$ to enable goal chaining, as well as the negative action sampling trick.
162
+
163
+ Comparison on UMaze. We compare our method to the baselines on the UMaze task, and show results in Figure 5. We see that our model outperforms all baselines overall, and shows greater improvements on goals that are far from the initial position. Interestingly, we note that the Actionable model is able to reach goals in the first room only, and is unable to navigate towards goals from the second room. This confirms the idea that sparse rewards make it difficult for the policy to learn long-horizon tasks.
164
+
165
+ Comparison on RoboYoga Walker. In a second experiment, we compare our method to baselines on the RoboYoga task. These results are shown in Figure 6a. Here again, our method outperforms prior work by a large margin, and Actionable Models does not show any significant improvement. The results broken down by goal are shown in the supplementary material. Overall, the empirical results suggest that our dense reward shaping method allows for faster and more robust offline goal-conditioned policy training.
166
+
167
+ ## 6 Conclusion: Summary and Limitations
168
+
169
+ We proposed a method for learning multi-task policies from pre-generated datasets in an offline and unsupervised fashion, i.e., without requiring any additional interaction with the environment, nor manually designed rewards. Our method leverages a self-supervised stage that aims at learning the dynamics of the environment from the offline dataset, and that allows for shaping a dense reward function. The shaped reward function shows great improvement compared to prior works based on hindsight relabeling, especially on long-horizon tasks where dense rewards are crucial for good policy learning. We presented one form of self-supervised learning on the offline dataset in this work. This choice needs further evaluation by comparing different self-supervision strategies and evaluating their impact on the policy learning task. This constitutes an exciting future prospect.
170
+
171
+ References
172
+
173
+ [1] Y. Chebotar, K. Hausman, Y. Lu, T. Xiao, D. Kalashnikov, J. Varley, A. Irpan, B. Eysenbach, R. C. Julian, C. Finn, et al. Actionable models: Unsupervised offline reinforcement learning of robotic skills. In International Conference on Machine Learning, pages 1518-1528. PMLR, 2021.
174
+
175
+ [2] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. Advances in neural information processing systems, 30, 2017.
176
+
177
+ [3] R. Yang, Y. Lu, W. Li, H. Sun, M. Fang, Y. Du, X. Li, L. Han, and C. Zhang. Rethinking goal-conditioned supervised learning and its connection to offline rl. arXiv preprint arXiv:2202.04478, 2022.
178
+
179
+ [4] J. Li, C. Tang, M. Tomizuka, and W. Zhan. Hierarchical planning through goal-conditioned offline reinforcement learning. arXiv preprint arXiv:2205.11790, 2022.
180
+
181
+ [5] N. Savinov, A. Raichuk, D. Vincent, R. Marinier, M. Pollefeys, T. Lillicrap, and S. Gelly. Episodic curiosity through reachability. In International Conference on Learning Representations, 2018.
182
+
183
+ [6] L. P. Kaelbling. Learning to achieve goals. In IJCAI, volume 2, pages 1094-8. Citeseer, 1993.
184
+
185
+ [7] T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In International conference on machine learning, pages 1312-1320. PMLR, 2015.
186
+
187
+ [8] S. Nasiriany, V. Pong, S. Lin, and S. Levine. Planning with goal-conditioned policies. Advances in Neural Information Processing Systems, 32, 2019.
188
+
189
+ [9] S. Sukhbaatar, Z. Lin, I. Kostrikov, G. Synnaeve, A. Szlam, and R. Fergus. Intrinsic motivation and automatic curricula via asymmetric self-play. In International Conference on Learning Representations, 2018.
190
+
191
+ [10] S. Sukhbaatar, E. Denton, A. Szlam, and R. Fergus. Learning goal embeddings via self-play for hierarchical reinforcement learning. arXiv preprint arXiv:1811.09083, 2018.
192
+
193
+ [11] O. OpenAI, M. Plappert, R. Sampedro, T. Xu, I. Akkaya, V. Kosaraju, P. Welinder, R. D'Sa, A. Petron, H. P. de Oliveira Pinto, et al. Asymmetric self-play for automatic goal discovery in robotic manipulation. 2020.
194
+
195
+ [12] A. Campero, R. Raileanu, H. Kuttler, J. B. Tenenbaum, T. Rocktäschel, and E. Grefenstette. Learning with amigo: Adversarially motivated intrinsic goals. In International Conference on Learning Representations, 2020.
196
+
197
+ [13] D. Warde-Farley, T. Van de Wiele, T. Kulkarni, C. Ionescu, S. Hansen, and V. Mnih. Unsupervised control through non-parametric discriminative rewards. In International Conference on Learning Representations, 2018.
198
+
199
+ [14] A. V. Nair, V. Pong, M. Dalal, S. Bahl, S. Lin, and S. Levine. Visual reinforcement learning with imagined goals. Advances in Neural Information Processing Systems, 31:9191-9200, 2018.
200
+
201
+ [15] A. Ecoffet, J. Huizinga, J. Lehman, K. O. Stanley, and J. Clune. Go-explore: a new approach for hard-exploration problems. arXiv preprint arXiv:1901.10995, 2019.
202
+
203
+ [16] V. Pong, M. Dalal, S. Lin, A. Nair, S. Bahl, and S. Levine. Skew-fit: State-covering self-supervised reinforcement learning. In International Conference on Machine Learning, pages 7783-7792. PMLR, 2020.
204
+
205
+ [17] S. Venkattaramanujam, E. Crawford, T. Doan, and D. Precup. Self-supervised learning of distance functions for goal-conditioned reinforcement learning. arXiv preprint arXiv:1907.02998, 2019.
206
+
207
+ [18] K. Hartikainen, X. Geng, T. Haarnoja, and S. Levine. Dynamical distance learning for semi-supervised and unsupervised skill discovery. In International Conference on Learning Representations, 2019.
208
+
209
+ [19] R. Mendonca, O. Rybkin, K. Daniilidis, D. Hafner, and D. Pathak. Discovering and achieving goals via world models. Advances in Neural Information Processing Systems, 34, 2021.
210
+
211
+ [20] L. Mezghani, P. Bojanowski, K. Alahari, and S. Sukhbaatar. Walk the random walk: Learning to discover and reach goals without supervision. In ICLR Workshop on Agent Learning in Open-Endedness, 2022.
212
+
213
+ [21] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
214
+
215
+ [22] C. Gulcehre, Z. Wang, A. Novikov, T. Le Paine, S. G. Colmenarejo, K. Zolna, R. Agarwal, J. Merel, D. Mankowitz, C. Paduraru, et al. Rl unplugged: Benchmarks for offline reinforcement learning. 2020.
216
+
217
+ [23] D. Yarats, D. Brandfonbrener, H. Liu, M. Laskin, P. Abbeel, A. Lazaric, and L. Pinto. Don't change the algorithm, change the data: Exploratory data for offline reinforcement learning. In ICLR 2022 Workshop on Generalizable Policy Learning in Physical World, 2022.
218
+
219
+ [24] N. Lambert, M. Wulfmeier, W. Whitney, A. Byravan, M. Bloesch, V. Dasagi, T. Hertweck, and M. Riedmiller. The challenges of exploration for offline reinforcement learning. arXiv preprint arXiv:2201.11861, 2022.
220
+
221
+ [25] Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. d. L. Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018.
222
+
223
+ [26] M. Laskin, D. Yarats, H. Liu, K. Lee, A. Zhan, K. Lu, C. Cang, L. Pinto, and P. Abbeel. Urlb: Unsupervised reinforcement learning benchmark. arXiv preprint arXiv:2110.15191, 2021.
224
+
225
+ [27] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell. Curiosity-driven exploration by self-supervised prediction. In International conference on machine learning, pages 2778-2787. PMLR, 2017.
226
+
227
+ [28] B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is all you need: Learning skills without a reward function. In International Conference on Learning Representations, 2018.
228
+
229
+ [29] D. Pathak, D. Gandhi, and A. Gupta. Self-supervised exploration via disagreement. In International conference on machine learning, pages 5062-5071. PMLR, 2019.
230
+
231
+ [30] D. Yarats, R. Fergus, A. Lazaric, and L. Pinto. Reinforcement learning with prototypical representations. In International Conference on Machine Learning, pages 11920-11931. PMLR, 2021.
232
+
233
+ [31] S. Endrawis, G. Leibovich, G. Jacob, G. Novik, and A. Tamar. Efficient self-supervised data collection for offline robot learning. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4650-4656. IEEE, 2021.
234
+
235
+ [32] Y. J. Ma, J. Yan, D. Jayaraman, and O. Bastani. How far i'll go: Offline goal-conditioned reinforcement learning via $f$ -advantage regression. arXiv preprint arXiv:2206.03023,2022.
236
+
237
+ [33] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International conference on machine learning, pages 1861-1870. PMLR, 2018.
238
+
239
+ [34] Y. Kanagawa. https://github.com/kngwyu/mujoco-maze.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/8tmKW-NG2bH/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,169 @@
1
+ § LEARNING GOAL-CONDITIONED POLICIES OFFLINE WITH SELF-SUPERVISED REWARD SHAPING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Developing agents that can execute multiple skills by learning from pre-collected datasets is an important problem in robotics, where online interaction with the environment is extremely time-consuming. Moreover, manually designing reward functions for every single desired skill is prohibitive. Prior works [1] targeted these challenges by learning goal-conditioned policies from offline datasets without manually specified rewards, through hindsight relabeling [2]. These methods suffer from the issue of sparsity of rewards, and fail at long-horizon tasks. In this work, we propose a novel self-supervised learning phase on the pre-collected dataset to understand the structure and the dynamics of the model, and shape a dense reward function for learning policies offline. We evaluate our method on two continuous control tasks, and show that our model is significantly better than existing related approaches $\left\lbrack {1,2}\right\rbrack$ , especially on tasks that involve long-term planning.
12
+
13
+ Keywords: Offline Reinforcement Learning, Self-Supervised Learning, Goal-Conditioned RL
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ While the goal of realizing general autonomous agents requires mastery of a large and diverse set of skills, achieving this by focusing on each skill individually with standard reinforcement learning (RL) frameworks is prohibitive. This is primarily due to the need for manually designed reward functions and environment interactions for each skill. Unsupervised RL has opened a way for learning agents that can execute diverse behaviours without supervision (i.e., hand-crafted rewards), and then be further adapted to downstream tasks through few-shot or zero-shot generalization. However, learning policies with such methods is impractical with real robots as they require millions of interactions when trained online.
18
+
19
+ Recently, a line of study has emerged that uses pre-collected datasets of environment interactions and trains policies offline (i.e., without additional interactions with the environment). More precisely, given a dataset of reward-free trajectories and a reward function designed to solve a specific task, the agent learns offline by relabeling the transitions in the dataset with the reward function. This setting is particularly relevant in robotics, where data collection is extremely time-consuming: disentangling data collection and policy learning in this context allows for faster policy iteration. However, it would require designing one specific reward function and learning one policy for each task that we want the agent to solve.
20
+
21
+ An important question to scale robot learning is therefore to find ways of learning multi-task policies from already collected datasets. Recent works $\left\lbrack {1,3,4}\right\rbrack$ have targeted this problem from a goal-conditioned perspective: given a dataset of previously collected trajectories, the objective is to learn a goal-oriented agent that can reach any state in the dataset. The advantages of this formulation are two-fold: first, it makes skills easy to interpret, and second, it does not require any adaptation at test time. Making this framework unsupervised requires breaking free from hand-crafted rewards, as proposed by Chebotar et al. [1], who learn goal-conditioned policies offline through hindsight relabeling [2]. However, their approach is subject to the pitfall of learning from sparse rewards, and can be inefficient in long-horizon tasks.
22
+
23
+ In this work, we present a self-supervised reward shaping method that enables building an offline dataset with dense rewards. To this end, we develop a self-supervised learning phase on the pre-collected dataset that aims at learning the structure and dynamics of the environment before training the policy. During this phase, we: (i) train a Reachability Network [5] to estimate the local distance in the state space $\mathcal{S}$ , then (ii) extract a set of representative states that covers $\mathcal{S}$ , and finally (iii) build a graph on this set to approximate global distance in $\mathcal{S}$ . We then use the graph to train the goal-conditioned policy offline in two ways: to compute rewards through shortest-path distance, and to create transitions of intermediate difficulty on the path to the goal.
24
+
25
+ We evaluate our method on complex continuous control tasks, and compare it to previous state-of-the-art offline $\left\lbrack {1,2}\right\rbrack$ approaches. We show that our graph-based reward method learns good goal-conditioned policies by leveraging transitions from a dataset of past experience with neither any additional interactions with the environment nor manually designed rewards. Moreover, we show that, contrary to prior work [1] that uses datasets collected with a policy trained with supervised rewards, our method allows for learning goal-conditioned policies even from datasets of poor quality, e.g. containing trajectories sampled with a random policy. Our work is thus the first to enable learning goal-conditioned policies from offline datasets without any supervision, as it does not require any hand-crafted reward function at any stage: data collection, policy training and evaluation.
26
+
27
+ § 2 RELATED WORK
28
+
29
+ Goal-conditioned RL In its original formulation, goal-conditioned reinforcement learning was tackled by several methods $\left\lbrack {6,7,2,8}\right\rbrack$ . The policy learning process is supervised in these works: the set of evaluation goals is available at train time as well as a shaped reward function that guides the agent to the goal. Several works propose solutions for generating goals automatically when training goal-conditioned policies, including self-play [9, 10, 11], and adversarial student-teacher policies [12]. In a recent line of research, some works [13, 14, 15, 16, 17, 18, 19, 20] focused on learning goal-conditioned policies in an unsupervised fashion. The objective is to train general agents that can reach any goal state in the environment without any supervision (reward, goal-reaching function) at train time. In particular, Mendonca et al. [19] trains a model-based agent that learns to discover novel goals with an explorer model, and reach them with an achiever policy via imagined rollouts.
30
+
31
+ Offline RL The data collection technique is an important aspect when studying the training of policies from pre-collected datasets. In this context, the first works assumed access to policies trained with task-specific rewards [21, 22]. More recently, methods proposed to leverage unsupervised exploration to collect datasets for offline RL [23, 24]. In particular, Yarats et al. [23] create a dataset of pre-collected trajectories, ExoRL, on the DeepMind Control Suite [25] generated without any hand-crafted rewards. Similarly to URLB [26], ExoRL benchmarks a number of exploration algorithms (Pathak et al. [27], Eysenbach et al. [28], Pathak et al. [29], Yarats et al. [30]), and evaluates the performance of a policy trained on the corresponding offline datasets relabeled with task-specific rewards.
32
+
33
+ Multi-task Offline RL Recent works proposed to learn multiple tasks from pre-collected datasets, starting with methods [31] that generate goals to improve the offline data collection process in a self-supervised way. This connection has also been studied in the supervised setting [3, 32] and to learn hierarchical policies [4]. In a setting closely related to our work, Actionable Models [1] considers the problem of learning goal-conditioned policies from offline datasets without interacting with the environment, and with no task-specific rewards. They employ goal-conditioned Q-learning with hindsight relabeling. In contrast to this work, which relies on learning from sparse rewards, we propose to leverage a self-supervised training stage to shape dense rewards.
34
+
35
+ <graphics>
36
+
37
+ Figure 1: Visualization of our dense reward shaping method. (a) shows how training labels are generated for training the RNet: given a state ${s}_{i}$ , positive pairs are sampled in the same trajectory within a threshold ${\tau }_{\text{ reach }}$ , and the rest of the trajectory forms negative pairs. (b) presents an overview of the graph building algorithm. Given a transition $\left( {{s}_{i},{s}_{i + 1}}\right) \in \mathcal{D}$ , we add ${s}_{i}$ as node if it is distant enough from existing nodes in the graph. Moreover, we add an edge in the graph between the corresponding nearest neighbors of ${s}_{i}$ and ${s}_{i + 1}$ .
38
+
39
+ § 3 PRELIMINARIES
40
+
41
+ Let $\mathcal{E} = \left( {\mathcal{S},\mathcal{A},P,{p}_{0},\gamma ,T}\right)$ define a reward-free Markov decision process (MDP), where $\mathcal{S}$ and $\mathcal{A}$ are state and action spaces respectively, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow {\mathbb{R}}_{ + }$ is a state-transition probability function, ${p}_{0} : \mathcal{S} \rightarrow {\mathbb{R}}_{ + }$ is an initial state distribution, $\gamma$ is a discount factor, and $T$ is the task horizon. In the goal-conditioned setting, the objective is to learn a goal-conditioned policy $\pi : \mathcal{S} \times \mathcal{G} \rightarrow \mathcal{A}$ that maximizes the expectation of the cumulative return over the goal distribution, where $\mathcal{G}$ denotes the goal space. Here, we make the common assumption that states and goals are defined in the same form, i.e., $\mathcal{G} \subset \mathcal{S}$ .
42
+
43
+ We assume that we have access to a dataset $\mathcal{D}$ of pre-collected episodes generated by using any data collection algorithm in $\mathcal{E}$ . Each episode is stored in $\mathcal{D}$ as a series of $\left( {s,a,{s}^{\prime }}\right)$ tuples. In the general offline formulation introduced by Yarats et al. [23], the dataset $\mathcal{D}$ can be relabeled by evaluating any reward function $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ at each tuple in $\mathcal{D}$ , and adding the resulting tuple $\left( {s,a,r\left( {s,a}\right) ,{s}^{\prime }}\right)$ in the relabeled dataset ${\mathcal{D}}_{r}$ . We can extend this protocol to the goal-oriented setting by considering a goal distribution ${p}_{\mathcal{G}}$ in the goal space, and any goal-conditioned reward function $r : \mathcal{S} \times \mathcal{A} \times \mathcal{G} \rightarrow \mathbb{R}$ . Given a tuple $\left( {s,a,{s}^{\prime }}\right)$ in $\mathcal{D}$ , we relabel it by sampling a goal $g \sim {p}_{\mathcal{G}}$ , computing $r\left( {s,a,g}\right)$ and adding the resulting tuple $\left( {s,a,g,r\left( {s,a,g}\right) ,{s}^{\prime }}\right)$ in the relabeled dataset ${\mathcal{D}}_{r,{pg}}$ . In the unsupervised framework, we assume that the evaluation goal space is unknown, and that we do not have access to any pre-defined reward function, nor any distance function in the state space.
44
+
45
+ Once the relabeled dataset ${\mathcal{D}}_{r,{p}_{G}}$ is generated, we can learn a goal-conditioned policy by executing any offline RL algorithm. The algorithm runs completely offline, by sampling tuples from ${\mathcal{D}}_{r,{p}_{G}}$ and without any interaction with the environment. The goal-conditioned policy is then evaluated online in $\mathcal{E}$ on a set of fixed evaluation goals that is not known during training.
46
+
47
+ § 4 SELF-SUPERVISED REWARD SHAPING
48
+
49
+ We now describe our self-supervised reward shaping method for learning a goal-conditioned agent offline on a pre-collected dataset $\mathcal{D}$ . Our method comprises three stages that we will describe below. In the first stage, we train a Reachability Network (RNet) [5] on the trajectories in $\mathcal{D}$ to predict whether two states are reachable from one another. The second stage consists in building a directed graph $\mathcal{M}$ whose nodes are a subset of states in $\mathcal{D}$ , and edges connect reachable states. We employ the RNet as a criterion to avoid adding similar states to $\mathcal{M}$ so that its nodes cover the states in $\mathcal{D}$ uniformly. Subsequently, we determine the reach of all the nodes based on the transitions in $\mathcal{D}$ and the RNet, and connect them with directed edges. The final stage is training the goal-conditioned policy on transitions and goals sampled from $\mathcal{D}$ . It is trained with dense rewards computed as the sum of a global (based on the graph distance in $\mathcal{M}$ ) and local (based on the RNet) distance terms. The important aspect of our method is that the whole training only uses trajectories from the pre-collected dataset $\mathcal{D}$ without running a single action in the environment. We now describe each component in more detail.
50
+
51
+ § 4.1 REACHABILITY NETWORK
52
+
53
+ In order to learn a good local distance between states in $\mathcal{D}$ , we adopt an asymmetric version of the Reachability Network (RNet) [5]. The general idea of RNet is to approximate the distance between states in the environment by the average number of steps it takes for a random policy to go from one state to another. We adapt the original formulation with two modifications: first, we use exploration trajectories from $\mathcal{D}$ instead of random trajectories; second, we leverage the temporal direction, because a state can be reachable from another without the converse being true. Let $\left( {{s}_{1}^{a},\ldots ,{s}_{T}^{a}}\right)$ denote a trajectory in $\mathcal{D}$ , where $a$ is a trajectory index. We define a reachability label ${y}_{ij}^{ab}$ for each pair of observations $\left( {{s}_{i}^{a},{s}_{j}^{b}}\right)$ by
54
+
55
+ $$
56
+ {y}_{ij}^{ab} = \left\{ {\begin{array}{ll} 1 & \text{ if }a = b\text{ and }0 \leq j - i \leq {\tau }_{\text{ reach }}, \\ 0 & \text{ otherwise, } \end{array}\;}\right. \text{ for }1 \leq i,j \leq T, \tag{1}
57
+ $$
58
+
59
+ where the reachability threshold ${\tau }_{\text{ reach }}$ is a hyperparameter. The reachability label is equal to 1 iff the states are in the same trajectory and the number of steps from ${s}_{i}^{a}$ to ${s}_{j}^{b}$ is below ${\tau }_{\text{ reach }}$ . Note that ${y}_{ij}^{ab} \neq {y}_{ji}^{ab}$ . We train a siamese neural network $R$ , the RNet, to predict the reachability label ${y}_{ij}^{ab}$ from a pair of observations $\left( {{s}_{i}^{a},{s}_{j}^{b}}\right)$ in $\mathcal{D}$ . The RNet consists of an embedding network $g$ , and a fully-connected network $f$ to compare the embeddings, i.e.,
60
+
61
+ $$
62
+ R\left( {{s}_{i}^{a},{s}_{j}^{b}}\right) = \sigma \left\lbrack {f\left( {g\left( {s}_{i}^{a}\right) ,g\left( {s}_{j}^{b}\right) }\right) }\right\rbrack , \tag{2}
63
+ $$
64
+
65
+ where $\sigma$ is a sigmoid function. A higher $R$ value indicates that the two states are easily reachable from one another by a random walk, so they can be considered close in the environment. More precisely, $R$ takes values in $(0,1)$ , and ${s}^{\prime }$ is considered reachable from $s$ if $R\left( {s,{s}^{\prime }}\right) \geq {0.5}$ . The RNet is trained in a self-supervised fashion, as the ground-truth labels needed to train the network are generated automatically.
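For concreteness, the label generation of Eq. (1) and the asymmetric siamese architecture of Eq. (2) could look as follows; this is a minimal PyTorch sketch rather than the authors' implementation, and the layer sizes, `state_dim`, and `tau_reach` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RNet(nn.Module):
    """Asymmetric siamese reachability network: R(s_i, s_j) = sigmoid(f(g(s_i), g(s_j)))."""
    def __init__(self, state_dim, emb_dim=64):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))
        # f compares the two embeddings; it is not symmetric in its arguments,
        # so R(s, s') and R(s', s) can differ.
        self.f = nn.Sequential(nn.Linear(2 * emb_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, s_i, s_j):
        z = torch.cat([self.g(s_i), self.g(s_j)], dim=-1)
        return torch.sigmoid(self.f(z)).squeeze(-1)

def reachability_label(traj_a, i, traj_b, j, tau_reach):
    """Eq. (1): 1 iff both states belong to the same trajectory and 0 <= j - i <= tau_reach."""
    return float(traj_a == traj_b and 0 <= j - i <= tau_reach)

def rnet_loss(rnet, s_i, s_j, labels):
    """Binary cross-entropy on a batch of state pairs sampled from the offline dataset D."""
    return nn.functional.binary_cross_entropy(rnet(s_i, s_j), labels)
```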
66
+
67
+ § 4.2 DIRECTED GRAPH
68
+
69
+ In the next phase, we use trajectories in $\mathcal{D}$ to build a directed graph $\mathcal{M}$ to capture high-level dynamics of the environment. We want the nodes of $\mathcal{M}$ to evenly represent the states in $\mathcal{D}$ . This is achieved by filtering the states in $\mathcal{D}$ : a state is added to $\mathcal{M}$ only if it is distant enough from all the other nodes in $\mathcal{M}$ . More precisely, a state $s \in \mathcal{D}$ is added to $\mathcal{M}$ if and only if
70
+
71
+ $$
72
+ R\left( {s,n}\right) < {0.5}\text{ and }R\left( {n,s}\right) < {0.5}\text{ , for all }n \in \mathcal{M}\text{ . } \tag{3}
73
+ $$
74
+
75
+ Note that we require novelty in both directions. This filtering avoids redundancy by preventing similar states from being added to the memory. It also has a balancing effect, because it limits the number of states that can be added from a given area even if the agent visits that area many times in $\mathcal{D}$ .
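A minimal sketch of this filtering pass, assuming `rnet(s, s')` is any callable returning $R(s, s')$ in $(0,1)$ (for instance the network above); the 0.5 threshold follows the reachability definition.

```python
def build_nodes(states, rnet, threshold=0.5):
    """Keep a state as a graph node only if it is not reachable, in either direction,
    from any node already selected (cf. Eq. (3))."""
    nodes = []
    for s in states:
        if all(rnet(s, n) < threshold and rnet(n, s) < threshold for n in nodes):
            nodes.append(s)
    return nodes
```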
76
+
77
+ Once the nodes are selected, we connect pairs that are reachable from one to another. To this end, we employ trajectories in $\mathcal{D}$ because they contain actual feasible transitions. Given a transition ${s}_{i} \rightarrow {s}_{j}$ in $\mathcal{D}$ , we add edge ${n}_{i} \rightarrow {n}_{j}$ if ${s}_{i}$ can be reached from node ${n}_{i}$ and node ${n}_{j}$ can be reached from ${s}_{j}$ . This way, we have a chain ${n}_{i} \rightarrow {s}_{i} \rightarrow {s}_{j} \rightarrow {n}_{j}$ and can assume ${n}_{j}$ is reachable from ${n}_{i}$ . Concretely, we select node ${n}_{i}$ to be the incoming nearest neighbor $\left( {\mathrm{{NN}}}_{\mathrm{{in}}}\right)$ to ${s}_{i}$ , and ${n}_{j}$ to be the outgoing nearest neighbor $\left( {\mathrm{{NN}}}_{\text{ out }}\right)$ from ${s}_{j}$ , i.e.,
78
+
79
+ $$
80
+ {n}_{i} = {\mathrm{{NN}}}_{\text{ in }}\left( {s}_{i}\right) = \mathop{\operatorname{argmax}}\limits_{{n \in \mathcal{M}}}R\left( {n,{s}_{i}}\right) ,\;{n}_{j} = {\mathrm{{NN}}}_{\text{ out }}\left( {s}_{j}\right) = \mathop{\operatorname{argmax}}\limits_{{n \in \mathcal{M}}}R\left( {{s}_{j},n}\right) . \tag{4}
81
+ $$
82
+
83
+ By performing this operation over all the transitions in $\mathcal{D}$ , we turn $\mathcal{M}$ into a directed graph whose edges represent reachability from one node to another.
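The edge-construction rule of Eq. (4) can be sketched as follows; the use of `networkx` and the list-based node representation are assumptions made for illustration only.

```python
import networkx as nx

def nn_in(s, nodes, rnet):
    """argmax_n R(n, s): the node from which s is most reachable."""
    return max(range(len(nodes)), key=lambda k: rnet(nodes[k], s))

def nn_out(s, nodes, rnet):
    """argmax_n R(s, n): the node most reachable from s."""
    return max(range(len(nodes)), key=lambda k: rnet(s, nodes[k]))

def build_graph(transitions, nodes, rnet):
    """transitions: iterable of consecutive state pairs (s_i, s_j) taken from D.
    For each of them, add the directed edge NN_in(s_i) -> NN_out(s_j)."""
    graph = nx.DiGraph()
    graph.add_nodes_from(range(len(nodes)))
    for s_i, s_j in transitions:
        graph.add_edge(nn_in(s_i, nodes, rnet), nn_out(s_j, nodes, rnet))
    return graph
```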
84
+
85
+ § 4.3 DISTANCE FUNCTION FOR POLICY TRAINING
86
+
87
+ The RNet predicts the reachability between ${s}_{i}$ and ${s}_{j}$ , so we can directly use it as a local distance metric
88
+
89
+ $$
90
+ {d}_{l}\left( {{s}_{i},{s}_{j}}\right) = 1 - R\left( {{s}_{i},{s}_{j}}\right) ,\;\forall {s}_{i},{s}_{j} \in \mathcal{S}. \tag{5}
91
+ $$
92
+
93
+ However, this reachability metric is only trained on pairs within the reachability threshold, so there is no guarantee that the RNet predictions have good global properties.
94
+
95
+ In contrast, the directed graph $\mathcal{M}$ captures the high-level global dynamics of the environment. We can easily derive a distance function ${d}_{\mathcal{M}}\left( {{n}_{i},{n}_{j}}\right)$ between any pair of nodes in $\mathcal{M}$ by computing the length of the shortest path in this graph, provided the graph is connected. In practice, we can use a trick to connect the graph if necessary, by adding an edge between the pair of nodes from different connected components with maximum RNet value. Moreover, we can extend this distance ${d}_{\mathcal{M}}$ to a global distance function ${d}_{g}$ in the state space $\mathcal{S}$ by finding, for any pair ${s}_{i}$ and ${s}_{j}$ in $\mathcal{S}$ , their corresponding nearest neighbors in the proper direction. More precisely,
96
+
97
+ $$
98
+ {d}_{g}\left( {{s}_{i},{s}_{j}}\right) = {d}_{\mathcal{M}}\left( {{\mathrm{{NN}}}_{\text{ out }}\left( {s}_{i}\right) ,{\mathrm{{NN}}}_{\text{ in }}\left( {s}_{j}\right) }\right) ,\;\forall {s}_{i},{s}_{j} \in \mathcal{S}. \tag{6}
99
+ $$
100
+
101
+ The distance ${d}_{g}$ between two states in the state space becomes the length of the shortest path between their respective closest nodes in the graph. This process propagates the good local properties of RNet to get a well-shaped distance function for states that are further away. Since ${d}_{g}$ captures global distances while ${d}_{l}$ captures local fine-grained distance, we use their combination as a final distance function: $\forall {s}_{i},{s}_{j} \in \mathcal{S},\;d\left( {{s}_{i},{s}_{j}}\right) = {d}_{g}\left( {{s}_{i},{s}_{j}}\right) + {d}_{l}\left( {{s}_{i},{s}_{j}}\right)$ .
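A sketch of the combined distance under the same assumptions as above (callable `rnet`, node list, `networkx` graph); it assumes the graph has been made connected with the trick mentioned earlier, so that a shortest path always exists.

```python
import networkx as nx

def combined_distance(s_i, s_j, graph, nodes, rnet):
    """d(s_i, s_j) = d_g + d_l, where d_g is the shortest-path length between
    NN_out(s_i) and NN_in(s_j) in the directed graph (Eq. (6)) and
    d_l = 1 - R(s_i, s_j) is the local RNet term (Eq. (5))."""
    n_out = max(range(len(nodes)), key=lambda k: rnet(s_i, nodes[k]))  # NN_out(s_i)
    n_in = max(range(len(nodes)), key=lambda k: rnet(nodes[k], s_j))   # NN_in(s_j)
    d_g = nx.shortest_path_length(graph, source=n_out, target=n_in)
    d_l = 1.0 - rnet(s_i, s_j)
    return d_g + d_l
```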
102
+
103
+ § 4.4 POLICY TRAINING
104
+
105
+ The last phase of our method is training the goal-conditioned policy offline. Here, we create an offline replay buffer $\mathcal{B}$ that is filled with relabeled data. We randomly sample a transition $\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ from $\mathcal{D}$ as well as a goal state $g$ , and relabel the transition with reward ${r}_{t} = - d\left( {{s}_{t + 1},g}\right)$ . We then push the relabeled transition $\left( {{s}_{t},{a}_{t},g,{r}_{t},{s}_{t + 1}}\right)$ to $\mathcal{B}$ . In order to create a curriculum that artificially guides the agent towards the goal, we experimented with the two transition augmentation techniques described below.
106
+
107
+ Sub-goal augmentation. Let $\left( {{s}_{t},{a}_{t},g,{r}_{t},{s}_{t + 1}}\right)$ denote a relabeled transition and $\left( {{n}_{0},\ldots ,{n}_{P - 1}}\right)$ the shortest path in the graph $\mathcal{M}$ between ${n}_{0} = {\mathrm{{NN}}}_{\text{ out }}\left( {s}_{t}\right)$ and ${n}_{P - 1} = {\mathrm{{NN}}}_{\text{ in }}\left( g\right)$ . The augmentation technique consists of adding to the replay buffer every transition $\left( {{s}_{t},{a}_{t},{n}_{i},{r}_{t}^{i},{s}_{t + 1}}\right)$ for all $i \in \{ 0,\ldots ,P - 1\}$ , where ${r}_{t}^{i} = - d\left( {{s}_{t + 1},{n}_{i}}\right)$ . In other words, given a transition $\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ and a goal $g$ from $\mathcal{D}$ , we push to the replay buffer a set of relabeled transitions with all goals on the shortest path from ${s}_{t}$ to $g$ (and their corresponding rewards).
108
+
109
+ Edge augmentation. Similarly to the sub-goal augmentation technique, we consider a relabeled transition $\left( {{s}_{t},{a}_{t},g,{r}_{t},{s}_{t + 1}}\right)$ and the shortest path $\left( {{n}_{0},\ldots ,{n}_{P - 1}}\right)$ between the corresponding nearest neighbors of ${s}_{t}$ and $g$ . This time, we keep the same goal $g$ for every augmented transition, but for every edge $\left( {{n}_{i - 1},{n}_{i}}\right) ,i \in \{ 1,\ldots ,P - 1\}$ , we add the relabeled transition $\left( {{s}_{t}^{i},{a}_{t}^{i},g,{r}_{t}^{i},{s}_{t + 1}^{i}}\right)$ to $\mathcal{B}$ , where $\left( {{s}_{t}^{i},{a}_{t}^{i},{s}_{t + 1}^{i}}\right) \in \mathcal{D},{\mathrm{{NN}}}_{\text{ out }}\left( {s}_{t}^{i}\right) = {n}_{i - 1},{\mathrm{{NN}}}_{\text{ in }}\left( {s}_{t + 1}^{i}\right) = {n}_{i}$ and ${r}_{t}^{i} = - d\left( {{s}_{t}^{i},g}\right)$ . Note that the existence of such a transition in $\mathcal{D}$ is guaranteed by construction: an edge is added to the graph from one node to another iff there exists a transition in $\mathcal{D}$ whose corresponding nearest neighbors are these two nodes (in the same order).
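A sketch of the relabeling step with sub-goal augmentation (the edge variant is analogous); the helper callables `distance`, `nn_out`, and `nn_in` are assumed to implement the combined distance and nearest-neighbor look-ups defined above.

```python
import networkx as nx

def relabel_with_subgoals(transition, goal, graph, nodes, distance, nn_out, nn_in):
    """Return the relabeled tuples to push into the offline replay buffer B.

    transition: (s_t, a_t, s_tp1) sampled from D; goal: a goal state sampled from D.
    Every node on the shortest path from NN_out(s_t) to NN_in(goal) is also used
    as a relabeled goal, with its corresponding reward -d(s_tp1, node)."""
    s_t, a_t, s_tp1 = transition
    relabeled = [(s_t, a_t, goal, -distance(s_tp1, goal), s_tp1)]
    path = nx.shortest_path(graph, source=nn_out(s_t), target=nn_in(goal))
    for n in path:
        relabeled.append((s_t, a_t, nodes[n], -distance(s_tp1, nodes[n]), s_tp1))
    return relabeled
```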
110
+
111
+ Once the replay buffer $\mathcal{B}$ is filled, the goal-conditioned policy can be trained using any off-policy algorithm. In our implementation, we chose Soft Actor-Critic [33].
112
+
113
+ § 5 EXPERIMENTS
114
+
115
+ § 5.1 ENVIRONMENTS & DATA COLLECTION
116
+
117
+ We perform experiments on two continuous control tasks with state-based inputs.
118
+
119
+ < graphics > (a) the UMaze environment; (b) success rate per room (Rooms 1-4 and Average) for graph-based vs. RNet-based rewards.
120
+
121
+ Figure 2: (a) UMaze environment, and (b) comparison of the performance of the goal-conditioned policy trained with RNet-based and graph-based rewards on the UMaze.
122
+
123
+ UMaze [34]. The first environment, shown in Figure 2a, is a two-dimensional U-shaped maze with a continuous action space and a fixed initial position. We generate the training data for this environment by deploying a random policy with randomized start positions in the maze. We collect ${10}\mathrm{k}$ trajectories of length $1\mathrm{k}$ . We evaluate the goal-conditioned agent trained offline on this task by giving the agent a goal sampled at random in the environment and measuring the final Euclidean distance to the goal.
124
+
125
+ RoboYoga Walker [19]. Introduced by Mendonca et al. [19], the challenging RoboYoga benchmark is based on the Walker domain of the DeepMind Control Suite [25], and consists of 12 goals that correspond to body poses inspired by yoga (e.g., lying down, raising one leg, or balancing). We consider the state-based version of the task, and use the task-agnostic dataset from Yarats et al. [23] generated with an unsupervised exploration policy. It contains ${10}\mathrm{k}$ trajectories of length $1\mathrm{k}$ obtained by deploying the "proto" [30] algorithm in the Walker domain. Success is assessed based on the pose of the humanoid at the end of the episode.
126
+
127
+ § 5.2 ABLATION & DESIGN CHOICES
128
+
129
+ In this section, we explain the design choices of our graph-based reward shaping method. We first show that the graph structure is necessary for long-term planning. Then, we explain the importance of the directedness of the graph on tasks with asymmetric behaviours. Finally, we show the impact of the transition augmentation techniques used when relabeling data for the goal-conditioned policy.
130
+
131
+ Necessity of graph-based rewards. An important component of our method is the construction of a graph $\mathcal{M}$ that allows for computing a distance with good global properties in the state space. In order to empirically validate this hypothesis, we compared the goal-conditioned policy trained with RNet rewards (i.e., using the distance ${d}_{l}$ from equation (5)) against the one trained with both distance terms as reward. We perform this experiment on the UMaze environment, and show results in Figure 2b. We see that the model trained with graph rewards outperforms the one trained with RNet rewards overall, particularly for faraway goals (i.e., rooms 3 & 4). We also notice that the model trained with RNet rewards is slightly better for goals that are close to the initial position, which highlights the fact that RNet is good at estimating local distances. Figure 3 shows a qualitative visualization of the RNet and graph-based rewards. We see that the RNet distance does not properly capture global distance, as it shows low values between states in the first and fourth rooms.
132
+
133
+ Importance of graph directedness. In a second experiment, we investigate the importance of the asymmetry of the RNet and the directedness of the graph. To do so, we implement an undirected version of our method where the RNet is symmetric and the graph is undirected. All other components of our method remain the same. We first look at qualitative visualizations of the shortest path in the undirected and directed graphs on the RoboYoga task, as shown in Figure 4. In the undirected case, the humanoid defies the laws of gravity and is encouraged to get on its head from the back, which might be extremely difficult, or even infeasible. In the directed case, the shortest path
134
+
135
+ (a) RNet distance (b) Graph distance
136
+
137
+ Figure 3: Heatmaps of rewards computed with the RNet (a) and graph (b) distances.
138
+
139
+ < g r a p h i c s >
140
+
141
+ Figure 4: Shortest Path visualization on the RoboYoga Walker task for undirected (top) and directed (bottom) graphs.
142
+
143
+ encourages the agent to first get back on its legs, and then lean forward. In this example, gravity makes the dynamics of the environment asymmetric and not fully reversible, which justifies the directed formulation used in our method.
144
+
145
+ Transition sampling strategy. As a final ablation study, we study the utility of the transition augmentation techniques described in subsection 4.4. We run experiments with the four possible variants of our method: (i) without any augmentation, (ii) with edge augmentation only, (iii) with sub-goal augmentation only, and (iv) with both augmentations. We run this experiment on the RoboYoga task, and show results in Figure 6b. We see that both augmentation techniques improve the performance of the goal-conditioned policy, with sub-goal augmentation showing the greater improvement. Moreover, we note that combining both augmentations improves the performance further. For the remainder of the experiments, we always use both augmentation techniques.
146
+
147
+ § 5.3 COMPARISON TO PRIOR WORK
148
+
149
+ Baselines. We compare our method to prior work on unsupervised goal-conditioned policy learning. We perform an apples-to-apples comparison by implementing the baselines using the same learning framework as our method, and only changing the reward relabeling process. We compare with the following baselines:
150
+
151
+ * Hindsight Experience Replay [HER] [2] A re-implementation of the standard unsupervised RL technique, adapted to the offline setting. More precisely, we relabel sub-trajectories from $\mathcal{D}$ with a sparse reward, which is equal to 1 only for the final transition of the sub-trajectory, and 0 everywhere else. Following Chebotar et al. [1], we also label sub-trajectories with goals sampled at random in $\mathcal{D}$ and zero reward.
152
+
153
+ * HER [2] with random negative action A variant of HER where, for a transition in $\mathcal{D}$ , we sample an action uniformly at random in the action space and label it with zero reward. This helps mitigate the over-estimation of Q-values for unseen actions mentioned in Chebotar et al. [1].
154
+
155
+ * Actionable Models [1] A method based on goal-conditioned Q-learning with hindsight relabeling. We re-implemented the goal relabeling procedure that uses the Q-value at the
156
+
157
+ < graphics > Figure 5: Performance on the UMaze task: success rate per room (Rooms 1-4 and Average) over training epochs. Figure 6 panels: (a) comparison to baselines (Ours, HER, HER + random negative action, Actionable Models); (b) impact of transition augmentation (Subgoal + Edge, Subgoal Only, Edge Only, None); both report success rate over training epochs.
158
+
159
+ Figure 6: Performance on the RoboYoga Walker task: (a) comparison to baselines, and (b) impact of transition augmentation.
160
+
161
+ final state of sub-trajectories in $\mathcal{D}$ to enable goal chaining, as well as the negative action sampling trick.
162
+
163
+ Comparison on UMaze. We compare our method to the baselines on the UMaze task, and show results in Figure 5. We see that our model outperforms all baselines overall, and shows greater improvements on goals that are far from the initial position. Interestingly, we note that the Actionable Models baseline is able to reach goals in the first room only, and is unable to navigate towards goals in the subsequent rooms. This confirms the idea that sparse rewards make it difficult for the policy to learn long-horizon tasks.
164
+
165
+ Comparison on RoboYoga Walker. In a second experiment, we compare our method to the baselines on the RoboYoga task. These results are shown in Figure 6a. Here again, our method outperforms prior work by a large margin, and Actionable Models does not make any significant progress. The results broken down by goal are shown in the supplementary material. Overall, the empirical results suggest that our dense reward shaping method allows for faster and more robust offline goal-conditioned policy training.
166
+
167
+ § 6 CONCLUSION: SUMMARY AND LIMITATIONS
168
+
169
+ We proposed a method for learning multi-task policies from pre-generated datasets in an offline and unsupervised fashion, i.e., without requiring any additional interaction with the environment or manually designed rewards. Our method leverages a self-supervised stage that learns the dynamics of the environment from the offline dataset and allows for shaping a dense reward function. The shaped reward function yields large improvements over prior work based on hindsight relabeling, especially on long-horizon tasks where dense rewards are crucial for good policy learning. We presented one form of self-supervised learning on the offline dataset in this work. This choice needs further evaluation, by comparing different self-supervision strategies and measuring their impact on the policy learning task, which constitutes an exciting direction for future work.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/A5l7wE2uqtM/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,249 @@
1
+ # BusyBot: Learning to Interact, Reason, and Plan in a BusyBoard Environment
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We introduce BusyBoard, a toy-inspired robot learning environment that leverages a diverse set of articulated objects and inter-object functional relations to provide rich visual feedback for robot interactions. Based on this environment, we introduce a learning framework, BusyBot, which allows an agent to jointly acquire three fundamental capabilities (interaction, reasoning, and planning) in an integrated and self-supervised manner. With the rich sensory feedback provided by BusyBoard, BusyBot first learns a policy to efficiently interact with the environment; then with data collected using the policy, BusyBot reasons the inter-object functional relations through a causal discovery network; and finally by combining the learned interaction policy and relation reasoning skill, the agent is able to perform goal-conditioned manipulation tasks. We evaluate BusyBot in both simulated and real-world environments, and validate its generalizability to unseen objects and relations. Code and simulation will be publicly available.
12
+
13
+ Keywords: Manipulation, Learning Environment, Reasoning
14
+
15
+ ## 1 Introduction
16
+
17
+ Learning through physical interactions plays a critical role in human cognitive development $\left\lbrack {1,2,3}\right\rbrack$ . For instance, a well-designed toy like the "busyboard" (Fig. 1a) can provide an effective learning environment for children to develop fundamental manipulation and reasoning skills: the rich and amplified sensory feedback encourages children to actively explore and interact; and the observed inter-object functional relations (e.g., a switch turns on a light) facilitate the development of reasoning and task solving skills.
18
+
19
+ In this paper, we aim to provide a similar learning environment for embodied artificial agents, the BusyBoard environment, where agents learn to discover the underlying relations of objects through informative interactions and plan for goal-conditioned tasks. While simple at first glance, this relational environment provides an integrated tool for learning and evaluating three critical capabilities of an embodied intelligent system:
20
+
21
+ - Interact: The ability to infer action affordances from visual observations - knowing where and how to manipulate an object to effectively change its state. Learning this skill through visual feedback is particularly hard for small-displacement objects (e.g., switches), whose appearance changes can be subtle even under effective actions.
22
+
23
+ - Reason: The ability to reason about inter-object functional relations (e.g., pressing a button turns on a light). In particular, the agent should learn to infer the relations by observing and predicting future states of the environment, without using the ground-truth relations as supervision.
24
+
25
+ - Plan: The ability to use the learned manipulation and reasoning skills in goal-conditioned planning tasks, in other words, generating a sequence of actions to transform the environment from a random initial state to a given goal state.
26
+
27
+ To learn these skills from the environment, we propose the BusyBot framework that acquires the above three capabilities through self-supervised interactions. To acquire the manipulation skill, the
28
+
29
+ ![01963f57-fb9a-76ea-983a-71e2d5245324_1_335_201_1128_281_0.jpg](images/01963f57-fb9a-76ea-983a-71e2d5245324_1_335_201_1128_281_0.jpg)
30
+
31
+ Figure 1: BusyBoard Environments inspired by toys for children, an integrated tool for learning and evaluating a robot's capabilities in interaction, reasoning, and planning.
32
+
33
+ algorithm learns a visual affordance model that infers effective action candidates through visual feedback. To reason about inter-object functional relations, the algorithm infers a functional scene graph and predicts the future states through a causal discovery network. Finally, to accomplish goal-conditioned manipulation tasks, the algorithm combines the learned action affordances, inter-object relationship, and dynamics to plan its actions with a model predictive control (MPC) framework. In summary, our contributions are two-fold:
34
+
35
+ - We introduce a new learning environment for embodied agents, BusyBoard, which features a diverse set of articulated objects with typical- and small- displacement joints, and rich inter-object functional relations.
36
+
37
+ - We propose BusyBot, an integrated learning framework which allows an embodied agent to acquire fundamental interaction, reasoning, and planning skills through self-supervised interactions and visual feedback.
38
+
39
+ Our experiments demonstrate that by taking advantage of the amplified effects (provided by Busy-Board), the agent is able to acquire an effective manipulation policy for small-displacement articulated objects (e.g., switches). Furthermore, by explicitly inferring the underlying inter-object functional relationship using a causal discovery network, the agent is able to predict the future states of the environment under different actions, and generalizes well to unseen environments, including real-world environment with physical robot interactions.
40
+
41
+ ## 2 Related Work
42
+
43
+ Simulation environments for robot learning. Simulation environments are crucial for advances in robot learning. However, most of the existing simulated environments are developed for specific tasks or capabilities, such as navigation [4,5,6,7], manipulation [8,9,10,11,12], causal reasoning in 2D $\left\lbrack {{13},{14}}\right\rbrack$ , or high-level task planning [15]. Inspired by human toys, our BusyBoard is an integrated environment that is compact and relevant to real-world applications, where an embodied agent can jointly learn three critical capabilities: interaction, reasoning, and planning.
44
+
45
+ Learning interaction policy. The ability to interact with a diverse set of objects is critical for many robotics tasks. Different methods have been proposed to learn interaction policies through human demonstrations $\left\lbrack {{16},{17},{18}}\right\rbrack$ or self-guided explorations $\left\lbrack {{19},{20},{21},{22}}\right\rbrack$ . However, most prior works have ignored a common but challenging class of objects: small-displacement objects. When interacting with these objects (e.g., switches), the effectiveness of an action often cannot be observed from the object's own visual appearance, which introduces significant challenges for learning. In this work, we address this challenge by taking advantage of BusyBoard, which amplifies action effects through responder objects and enables learning by enriching the supervision signal.
46
+
47
+ Inferring inter-object functional relations. Perceiving and understanding objects individually is often not sufficient for many real-world applications that involve environments with multiple objects. Objects are usually related, and understanding inter-object physical [23, 24, 25] or functional relations is a crucial skill for efficient planning. In this work, we focus on uncovering the inter-object functional relationship, as defined by Li et al. [26]. One common approach to this problem is to induce changes through interventions and iteratively construct a functional scene graph [27]. More recently, Graph Neural Networks (GNNs) have been shown to be promising for extracting the underlying structural causal model (SCM) and predicting future dynamics from motions [25]. In our work, we further demonstrate that GNNs are able to infer inter-object functional relations from changes in visual appearance. We also show that the inferred inter-object functional relationship and scene dynamics can assist action planning for downstream goal-conditioned manipulation tasks.
48
+
49
+ ![01963f57-fb9a-76ea-983a-71e2d5245324_2_307_197_1174_410_0.jpg](images/01963f57-fb9a-76ea-983a-71e2d5245324_2_307_197_1174_410_0.jpg)
50
+
51
+ Figure 2: BusyBoard Environment is procedurally generated using articulated objects (a, b) with randomly sampled inter-object functional relation pairs between trigger and responder objects (c). (d) shows example boards and the underlying functional relations.
52
+
53
+ ## 3 The BusyBoard Environment
54
+
55
+ To learn from a diverse set of environments, we designed a data generation pipeline to generate a large collection of simulated BusyBoards (illustrated in Fig. 2), which provides the following advantages:
56
+
57
+ - Amplified effects. In addition to providing visual feedback on the interacted object (e.g., appearance change on a button after being pressed), BusyBoard also amplifies the effect of an action with responder objects (e.g., a light turns on after the button is pressed). This is especially useful for learning manipulation policies for objects with small displacements upon interaction, for which state changes are often hard to observe.
58
+
59
+ - Compact. Unlike learning from room-scale environments [26], learning from BusyBoard does not require the agent to navigate in a large space to observe all the state changes. Thus BusyBoard is able to provide denser rewards for learning manipulation and reasoning before developing skills for navigation, which is analogous to the human cognition development process.
60
+
61
+ - Relevant. In contrast to game-like environments (e.g., Atari), our environment contains everyday objects and functional relations that are analogous to those in the real world, making the learned manipulation and reasoning skills relevant and potentially transferable to the real world.
62
+
63
+ Trigger and responder objects. The BusyBoard is procedurally generated using object URDF models. For trigger objects, we select switch instances from the Partnet-Mobility dataset [12], including small-displacement objects (with small displacements upon interaction), multi-direction objects (containing one movable link that can be pushed in multiple directions), and multi-link objects (containing multiple movable links). We use other object categories (e.g., lamp, door, tracktoy) as responder objects, which can be either single-stage or multi-stage. A single-stage responder has only two possible states defined by either appearance (e.g., lamp on or off) or joint state (e.g., door open or closed). A multi-stage responder has multiple possible states (e.g., multiple light colors of a lamp responder or multiple joint positions of a tracktoy responder).
64
+
65
+ Relations. One instance of BusyBoard may contain 2-3 triggers and 5-7 responders with randomly generated relations. Effects are triggered by changes in the joint positions of the trigger objects. We use Pybullet to simulate object movement under robot interactions. We introduce three types of inter-object functional relations (shown in Fig. 2):
66
+
67
+ - One-to-one: one trigger controls one single-stage responder.
68
+
69
+ - One-to-many effects: one trigger controls multiple effects on one responder. The trigger is a multi-direction object and the responder is a multi-stage object.
70
+
71
+ - One-to-many objects: one trigger controls multiple responders. The trigger is a multi-link object (a switch with multiple buttons), and each link controls one single-stage responder.
72
+
73
+ ![01963f57-fb9a-76ea-983a-71e2d5245324_3_312_207_1167_283_0.jpg](images/01963f57-fb9a-76ea-983a-71e2d5245324_3_312_207_1167_283_0.jpg)
74
+
75
+ Figure 3: BusyBot Overview. [Interaction] infers a sequence of actions to efficiently interact with a given scene from visual input. [Reasoning] infers a functional scene graph (i.e., inference network) and predicts future states (i.e., dynamic network). [Planning] uses the manipulation policy network (learned from multiple boards), inference and dynamics network (extracted from the specific board) to plan actions for the target state.
76
+
77
+ In goal-conditioned tasks, for both "one-to-many effects" and "one-to-many objects" relations, the algorithm needs to not only know the trigger object to interact with, but also the direction and position of the action to execute. In this paper, we use "one-to-many" to refer to the super-set of both categories. We exclude many-to-one relations to eliminate possible ambiguities.
78
+
79
+ ## 4 The BusyBot Framework
80
+
81
+ The goal of BusyBot is to learn how to meaningfully interact with the BusyBoard environment (§4.1), infer the inter-object functional relations and dynamics through these interactions (§4.2), and eventually perform goal-conditioned manipulation using the learned interaction policy, relations, and dynamics (§4.3). We discuss each module in detail below.
82
+
83
+ ### 4.1 Interact: Learning to Interact with Amplified Effects
84
+
85
+ The goal of the interaction policy $\pi$ is to generate a sequence of actions to interact with articulated objects (i.e., triggers). The interaction policy should be able to: (a) choose the right interaction position, (b) select a proper action direction based on the object's articulation structure (i.e., move up-and-down or left-and-right), and (c) interact with different objects to explore novel states of the board.
86
+
87
+ Action Inference. The task is formulated as: given a top-down depth image ${o}_{t} \in {\mathbb{R}}^{W \times H}$ , the agent with policy $\pi$ generates an action ${a}_{t}$ at each step: $\pi \left( {o}_{t}\right) \rightarrow {a}_{t}$ . The action is represented in $\mathrm{{SE}}\left( 3\right)$ space, parameterized by an end-effector (i.e., a suction-based gripper) position and a moving direction ${a}_{t} = \left( {{a}_{t}^{\text{pos }},{a}_{t}^{\text{dir }}}\right)$ , where ${a}_{t}^{\text{pos }} \in {\mathbb{R}}^{3}$ is the $3\mathrm{D}$ coordinate and ${a}_{t}^{\text{dir }} \in {\mathbb{R}}^{3},\left( {\begin{Vmatrix}{a}_{t}^{\text{dir }}\end{Vmatrix} = 1}\right)$ is a unit vector in 3D indicating the moving direction of the end-effector. The moving distance is incrementally assigned until reaching a pre-defined limit.
88
+
89
+ The position network takes the depth image as input and outputs per-pixel position affordance $P \in {\left\lbrack 0,1\right\rbrack }^{W \times H}$ , which indicates the likelihood of an effective interact position. The direction inference network takes in the depth observation and the selected action position (represented as a 2-D Gaussian distribution centered around the corresponding pixel location of the 3-D action position) and outputs a score for each direction candidate $r\left( {a}_{t}^{\text{dir }}\right) \in \left\lbrack {0,1}\right\rbrack$ . We uniformly sample 18 directions in $\mathrm{{SO}}\left( 3\right)$ as the direction candidates. Since it is often hard to identify the state of small-displacement objects from visual observations, the agent executes both the selected direction and its opposite direction.
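A minimal sketch of how an action could be selected from the two affordance outputs described above; the network interfaces, the Gaussian width, and the candidate set of 18 unit directions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=5.0):
    """2-D Gaussian centered at the selected pixel, used as input to the direction network."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))

def select_action(depth, position_net, direction_net, dir_candidates):
    """depth: (H, W) top-down depth image.
    position_net(depth) -> (H, W) per-pixel position affordance in [0, 1].
    direction_net(depth, heatmap) -> one score per direction candidate.
    dir_candidates: (18, 3) array of unit direction vectors."""
    P = position_net(depth)
    i, j = np.unravel_index(np.argmax(P), P.shape)   # most affordable pixel
    heatmap = gaussian_heatmap(P.shape, center=(i, j))
    scores = direction_net(depth, heatmap)
    a_dir = dir_candidates[int(np.argmax(scores))]
    # Small-displacement objects give little visual feedback about their state,
    # so both the selected direction and its opposite are executed.
    return (i, j), a_dir, -a_dir
```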
90
+
91
+ Supervision. For each executed action, the reward is computed from the image difference:
92
+
93
+ $$
94
+ {r}_{\mathrm{{img}}}\left( {a}_{t}^{\mathrm{{dir}}}\right) = \left\{ {\begin{array}{ll} 1 & \text{ if }\mathop{\sum }\limits_{{i = 1}}^{H}\mathop{\sum }\limits_{{j = 1}}^{W}I\left( {{o}_{ij},{o}_{ij}^{\prime }}\right) > \delta \\ 0 & \text{ if }\mathop{\sum }\limits_{{i = 1}}^{H}\mathop{\sum }\limits_{{j = 1}}^{W}I\left( {{o}_{ij},{o}_{ij}^{\prime }}\right) \leq \delta \end{array}\;I\left( {{o}_{ij},{o}_{ij}^{\prime }}\right) = \left\{ \begin{array}{ll} 1 & \text{ if }{o}_{ij} \neq {o}_{ij}^{\prime } \\ 0 & \text{ if }{o}_{ij} = {o}_{ij}^{\prime } \end{array}\right. }\right. \tag{1}
95
+ $$
96
+
97
+ where $o$ and ${o}^{\prime }$ denote RGB image observations before and after the action execution, and $\delta$ is a threshold that specifies the minimum number of changed pixels for an action to be considered effective. We use a binary cross-entropy (BCE) loss between the inferred action score $r\left( {a}_{t}^{\text{dir }}\right) \in \left\lbrack {0,1}\right\rbrack$ and the ground-truth reward computed from image observations ${r}_{\text{img }}\left( {a}_{t}^{\text{dir }}\right) \in \left\lbrack {0,1}\right\rbrack$ .
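The reward of Eq. (1) amounts to counting changed pixels between the two observations; a sketch, where the value of delta and the exact pixel comparison are assumptions.

```python
import numpy as np

def image_difference_reward(obs_before, obs_after, delta=50):
    """Return 1 if the action visibly changed the scene, i.e. if the number of pixels
    that differ between the (H, W, 3) RGB observations exceeds the threshold delta."""
    changed = np.any(obs_before != obs_after, axis=-1)  # (H, W) boolean change mask
    return float(changed.sum() > delta)
```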
98
+
99
+ Exploration. At the early stage of training the position and direction inference networks, we use the epsilon-greedy method to encourage random exploration. Additionally, in order to prevent the model from only selecting the position that has the highest affordance score, we apply the Upper Confidence Bound (UCB) bandit algorithm to the inferred position affordance to encourage exploration of all object positions in the environment. Given the position affordance $P$ , the updated position affordance is ${P}^{\prime }\left( {i, j}\right) = P\left( {i, j}\right) + c\sqrt{\frac{\ln \left( t\right) }{N}}$ , where $c = {0.5}$ , $t$ is the number of steps, and $N$ is the number of times the pixel $\left( {i, j}\right)$ falls in the $M \times M$ $\left( {M = {10}}\right)$ window centered around a previously selected pixel.
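The UCB bonus can be sketched as follows; the visit counting over M x M windows follows the description above, while the handling of unvisited pixels (clipping N to at least 1) is an assumption.

```python
import numpy as np

def ucb_affordance(P, selected_pixels, t, c=0.5, M=10):
    """Add an upper-confidence exploration bonus to the (H, W) position affordance map P.
    selected_pixels: pixels chosen at previous steps; t: current step count."""
    N = np.zeros_like(P, dtype=float)
    half = M // 2
    for (i, j) in selected_pixels:
        # every pixel inside the M x M window around a past selection gets one count
        N[max(0, i - half):i + half, max(0, j - half):j + half] += 1.0
    bonus = c * np.sqrt(np.log(max(t, 2)) / np.maximum(N, 1.0))
    return P + bonus
```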
100
+
101
+ ### 4.2 Reason: Learning to Discover Inter-object Relations by Predicting the Future
102
+
103
+ To meaningfully interact within a multi-object environment, the robot not only needs to manipulate individual objects, but also needs to learn the inter-object functional relations. The reasoning module takes in RGB image sequences of the agent's interactions (§4.1), infers the inter-object relations, and predicts future dynamics, which would guide goal-conditioned planning in the next section (§4.3). To accomplish this goal, we adopt and modify the V-CDN model [25].
104
+
105
+ The interaction dataset is collected using the learned interaction policy: at each step, positions with affordance score above a threshold are grouped into clusters using the K-means clustering algorithm ( $k = 7$ , the maximum possible number of movable links on all busyboards), and the positions with the highest score in each cluster are selected as position candidates. Conditioned on each position candidate, the direction with the highest affordance score is selected to form a final action candidate, from which an action is randomly chosen to execute. For each board environment, RGB image observations of 30 interaction steps are generated and split into two subsequences: the first $T$ steps for inferring the functional scene graph, and the rest for predicting future dynamics. In addition, to prevent the model from overfitting on board appearances, we ensure that every 20 board environments share the same initial visual appearance but have different functional relations.
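The candidate extraction used for data collection can be sketched with scikit-learn's K-means as below; the affordance threshold is an assumed hyperparameter, while k = 7 follows the description above.

```python
import numpy as np
from sklearn.cluster import KMeans

def position_candidates(P, threshold=0.7, k=7):
    """P: (H, W) position affordance map. Cluster high-affordance pixels into at most k
    groups and keep the highest-scoring pixel of each cluster as a position candidate."""
    ys, xs = np.nonzero(P > threshold)
    pts = np.stack([ys, xs], axis=1)
    if len(pts) == 0:
        return []
    k = min(k, len(pts))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pts)
    candidates = []
    for c in range(k):
        members = pts[labels == c]
        best = members[np.argmax([P[i, j] for i, j in members])]
        candidates.append((int(best[0]), int(best[1])))
    return candidates
```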
106
+
107
+ The inference network is implemented with three Graph Neural Networks (GNNs) to extract functional relations as a scene graph. Each object ${O}_{i}$ corresponds to a node $i$ in the graph, with a node input ${n}_{i}^{1 : T}$ that combines the object's 256-dimensional visual features (extracted at the centers of the objects' bounding boxes using the 10th layer of ResNet-50) and the object's 3D position. The first GNN learns spatial node and edge embeddings at each step, which are concatenated with 256-dimensional embeddings of the executed actions ${a}_{i}^{1 : T}$ learned from an MLP layer. The combined embeddings are then aggregated over the temporal dimension using a 1-D convolution network and input to the second GNN, which predicts a probabilistic distribution over edge types ${e}^{d} = {\left\{ {e}_{ij}^{d} \mid {e}_{ij}^{d} \in {\mathbb{R}}^{2}\right\} }_{i, j = 1}^{N}$ (index 0 indicates no relation and index 1 indicates a relation). Conditioned on the edge types, the third GNN predicts 32-dimensional edge embeddings ${e}^{h} = {\left\{ {e}_{ij}^{h} \mid {e}_{ij}^{h} \in {\mathbb{R}}^{32}\right\} }_{i, j = 1}^{N}$ which store the history dynamics associated with each edge.
108
+
109
+ The dynamics network is a Graph Recurrent Network (GRN) that predicts the next state ${n}^{t + 1}$ given the current observation ${n}^{t}$ , the executed action ${a}^{t}$ , and the edges $E = \left\{ {{e}^{d},{e}^{h}}\right\}$ from the inferred functional scene graph. The inference and dynamics networks are jointly trained with the objective of minimizing the mean squared error (MSE) between predicted and ground-truth object features.
110
+
111
+ $$
112
+ L = \mathop{\min }\limits_{{\phi ,\psi }}\mathop{\sum }\limits_{t}{MSE}\left( {{n}^{t + 1},{f}_{\psi }^{D}\left( {{n}^{t},{a}^{t},{f}_{\phi }^{I}\left( {{n}^{1 : T},{a}^{1 : T}}\right) }\right) }\right) \tag{2}
113
+ $$
114
+
115
+ where ${f}_{\phi }^{I}$ is the inference model parameterized by $\phi$ , and ${f}_{\psi }^{D}$ is the dynamics model parameterized by $\psi$ .
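The joint objective of Eq. (2) can be written as a simple rollout loss; `inference_net` and `dynamics_net` stand in for the GNN and GRN modules described above, whose architectures are not reproduced here.

```python
import torch

def reasoning_loss(inference_net, dynamics_net, nodes, actions, T):
    """nodes: (T_total, N, D) object features; actions: (T_total, A) action embeddings.
    The first T steps are used to infer the functional scene graph; the remaining
    steps supervise one-step future predictions, as in Eq. (2)."""
    edges = inference_net(nodes[:T], actions[:T])  # edge types e^d and edge embeddings e^h
    loss = torch.zeros(())
    for t in range(T, nodes.shape[0] - 1):
        pred_next = dynamics_net(nodes[t], actions[t], edges)
        loss = loss + torch.nn.functional.mse_loss(pred_next, nodes[t + 1])
    return loss
```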
116
+
117
+ ### 4.3 Plan: Goal-conditioned Manipulation with Relation Predictive Agent
118
+
119
+ Finally, we apply BusyBot on goal-conditioned manipulation tasks. Given an initial and target state image of a board, the task is to infer 1) which object(s) to manipulate; 2) what action(s) to execute in order to successfully reach the target state.
120
+
121
+ Using the data collection method discussed in the reasoning module, the agent infers an action candidate set and generates an interaction sequence of 30 images, which is input to the inference network to obtain the functional scene graph. We then consider three options to plan for goal-conditioned tasks: 1) Relation agent: at each step, identify a responder that needs to be changed, and find the corresponding trigger based on the functional scene graph. This method is similar to the idea of Li et al. [26]. However, this agent may have trouble handling one-to-many relations. To solve this issue, we propose 2) Predictive agent, which uses the dynamics network from the reasoning module and chooses the action that minimizes the L2 distance between the predicted next state and the target state. However, the predictive agent may have difficulty generalizing to novel object instances, due to the difficulty of predicting unseen dynamics. 3) Our final method, BusyBot, combines the relation and predictive agents: action candidates are first filtered based on the functional scene graph and then selected based on future dynamics predictions. More discussions are provided in Sec. 5.
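A sketch of this combined planner for a single planning step; every helper interface here (scene-graph lookup, dynamics predictor, goal-mismatch check) is an assumption consistent with the description above rather than the exact implementation.

```python
import numpy as np

def plan_step(state, goal, candidates, scene_graph, dynamics, mismatched):
    """candidates: list of (trigger_id, action) pairs; scene_graph[trigger][responder] is 1
    if the trigger controls the responder; dynamics(state, action) predicts the next object
    features; mismatched(state, goal) returns ids of responders that still differ from the goal."""
    wrong = set(mismatched(state, goal))
    # Relation filter: keep actions whose trigger controls at least one mismatched responder.
    filtered = [c for c in candidates if any(scene_graph[c[0]][r] for r in wrong)] or candidates
    # Predictive ranking: choose the action whose predicted outcome is closest to the goal.
    scores = [np.linalg.norm(dynamics(state, a) - goal) for (_, a) in filtered]
    return filtered[int(np.argmin(scores))]
```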
122
+
123
+ ![01963f57-fb9a-76ea-983a-71e2d5245324_5_316_200_1162_302_0.jpg](images/01963f57-fb9a-76ea-983a-71e2d5245324_5_316_200_1162_302_0.jpg)
124
+
125
+ Figure 4: Qualitative Results. (a) Action affordances; (b) interaction steps and corresponding reasoning results; (c) more reasoning results. $\rightarrow$ : inferred inter-object functional relations; $\rightarrow$ : ground truth.
126
+
127
+ ## 5 Evaluation
128
+
129
+ We evaluate BusyBot with both simulated (Fig. 4) and real-world busyboards (Fig. 5). In simulation experiments, we set up the following environments: a) Training Board: for training the interaction and reasoning modules; b) Novel Config: testing boards constructed with training object instances, but in new configurations, which include inter-object functional relations, position and orientation of objects, and board color and texture; c) Novel Object: both object instances and board configurations are novel. In total, we generate 10,000 training boards, 2,000 boards with novel configurations, and 2,000 boards with novel object instances. Each board has 30 interaction images, of which 23 are reserved for relation inference and the rest for future predictions. As for objects on the board, we use 41 switches, 10 doors, 5 lamps, and 2 tracktoy objects, split into training / testing with ratios $32/9$ , $5/5$ , $3/2$ , $1/1$ . The setup for real-world evaluation is described in Sec. 5.4.
130
+
131
+ ### 5.1 Interaction Module Evaluation
132
+
133
+ To evaluate the interaction policy network, we compute the average precision and recall of the inferred actions for the boards, where precision $= \#$ successful actions $/\#$ total proposed actions, and recall $= \#$ successfully interacted objects $/\#$ total interactable objects. We compare with the following methods:
134
+
135
+ - Oracle (joint state supervision): Interaction policy supervised on joint states. This is considered as the oracle because changes in joint states are directly obtained from simulation.
136
+
137
+ - w/o responder: Interaction policy supervised on visual feedback but no responder effects.
+
+ - w/o exploration: An ablated version of BusyBot without using UCB for exploration.
138
+
139
+ Results and Analysis. Tab. 1 summarizes the results tested on boards with novel configurations and objects. We can see that [w/o responder] achieves poor performance, since the visual feedback of small-displacement objects alone is insufficient for learning. In contrast, by taking into account the responder effects, [BusyBot] is able to obtain informative rewards and achieves performance comparable to the oracle. This validates our hypothesis that triggered responder effects can be used to amplify the visual feedback and assist the model in learning good interaction policies. We also performed an ablation study on exploration and observe that without encouraging exploration of new objects, the recall of [w/o exploration] is more than 45% lower than that of [BusyBot].
140
+
141
+ <table><tr><td rowspan="2"/><td colspan="2">Novel Config</td><td colspan="2">NovelObject</td></tr><tr><td>Prec</td><td>Recall</td><td>Prec</td><td>Recall</td></tr><tr><td>Oracle</td><td>91.8</td><td>82.4</td><td>79.5</td><td>90.2</td></tr><tr><td>w/o responder</td><td>0.71</td><td>0.65</td><td>0.24</td><td>0.49</td></tr><tr><td>w/o exploration</td><td>94.2</td><td>33.7</td><td>81.9</td><td>38.6</td></tr><tr><td>BusyBot</td><td>90.1</td><td>80.1</td><td>82.6</td><td>84.8</td></tr></table>
142
+
143
+ Table 1: Performance of Interaction Policy
144
+
145
+ ### 5.2 Relation Reasoning Module Evaluation
146
+
147
+ The reasoning module is evaluated by the following metrics: 1) Relation inference accuracy, measured by the precision (Edge-P) and recall (Edge-R) of the inferred functional relation pairs. 2) Future state prediction accuracy (Pred-A), measured by the percentage of correct future state predictions. We compare the following alternative methods:
148
+
149
+ ![01963f57-fb9a-76ea-983a-71e2d5245324_6_315_199_1159_353_0.jpg](images/01963f57-fb9a-76ea-983a-71e2d5245324_6_315_199_1159_353_0.jpg)
150
+
151
+ Figure 5: Real-world Busyboard. We test the trained model on a real-world busyboard with robot interactions (a). We also manually modify the underlying inter-object functional relations and show that the algorithm discovers the relations (c) through interactions (b). More results are included in supp.
152
+
153
+ - w/o inference: An ablated version of the model without the inference network. The dynamics network takes in all history interaction data and directly predicts the next state.
154
+
155
+ - Bad interact: An ablated version of our method, where the inputs to the reasoning network are data collected under an inferior interaction policy (w/o exploration).
156
+
157
+ Results and Analysis. Compared to [w/o inference], we see that without inferring the inter-object relations, the model overfits on the training data and generalizes poorly to boards with novel configurations and objects. We also observe that with [Bad interact], the reasoning model is not able to uncover the relations accurately or make correct future state predictions. In comparison, our model generalizes well to novel board configurations and achieves performance comparable to that on the training boards. This demonstrates that a good interaction policy helps the agent uncover the correct inter-object functional relations, which in turn helps the agent understand scene dynamics. For boards with novel object instances, even though the future state prediction accuracy drops by around ${40}\%$ compared to seen instances (which is expected, since the object features are never seen by the dynamics model), the relation inference accuracy is still comparable to that on boards with seen object instances. The performance on novel boards verifies that the model's ability to infer inter-object functional relationships transfers to new scenes and new objects.
158
+
159
+ <table><tr><td rowspan="2"/><td colspan="3">Training Board</td><td colspan="3">Novel Config</td><td colspan="3">Novel Object</td></tr><tr><td>Edge-P</td><td>Edge-R</td><td>Pred-A</td><td>Edge-P</td><td>Edge-R</td><td>Pred-A</td><td>Edge-P</td><td>Edge-R</td><td>Pred-A</td></tr><tr><td>w/o inference</td><td>-</td><td>-</td><td>79.2</td><td>-</td><td>-</td><td>36.2</td><td>-</td><td>-</td><td>7.04</td></tr><tr><td>w/o exploration</td><td>55.6</td><td>51.0</td><td>89.6</td><td>74.6</td><td>3.10</td><td>14.3</td><td>75.1</td><td>0.96</td><td>11.6</td></tr><tr><td>BusyBot</td><td>95.8</td><td>100</td><td>88.1</td><td>95.5</td><td>99.7</td><td>73.8</td><td>85.0</td><td>99.5</td><td>31.0</td></tr></table>
160
+
161
+ Table 2: Performance of Reasoning Module. For BusyBot, while the future state prediction accuracy (Pred-A) decreases for unseen board appearances (novel config, novel object), the reasoning module is still able to reliably infer the inter-object functional relations (Edge-P, Edge-R) in novel scenarios.
162
+
163
+ ### 5.3 Goal-Conditioned Manipulation Evaluation
164
+
165
+ We generate 50 one-to-one tasks and 50 one-to-many tasks for each type of board (training, novel config, novel object). One-to-one tasks contain only two-state triggers, and thus only require the algorithm to identify the correct trigger (similar task studied in IFRexplore [26]). One-to-many tasks contain both multi-direction and multi-link triggers that require the agent to not only identify the correct trigger, but also infer the correct action to manipulate the trigger (e.g., the correct button position or pushing direction).
166
+
167
+ ![01963f57-fb9a-76ea-983a-71e2d5245324_6_756_1572_718_314_0.jpg](images/01963f57-fb9a-76ea-983a-71e2d5245324_6_756_1572_718_314_0.jpg)
168
+
169
+ Figure 6: Goal-conditioned manipulation. Compared to the predictive agent, the relation agent generalizes better to novel objects, while struggling to handle one-to-many relations. Our method, BusyBot, combines the advantages of both agents.
170
+
171
+ Metrics & Baselines. We measure object-level success rate on both one-to-one and one-to-many manipulation tasks for each type of board. The success rate is defined at object level at the end of an interaction sequence with a maximum of 8 steps. Success rate $= \#$ affectable responders in goal state / # total affectable responders. We compare the three agents as discussed in the method section.
172
+
173
+ Results and Analysis. All agents achieve good performance on one-to-one tasks. This means that both the relations and the dynamics learned by the reasoning module can generalize to novel board configurations and objects. The predictive agent achieves better performance on one-to-many tasks with seen object instances by leveraging future predictions to select the correct action to apply to the trigger object. In contrast, the relation agent can only identify the trigger object but not the exact action (e.g., which link to interact with or which direction to push). On the other hand, the relation agent performs slightly better than the predictive agent on all one-to-one tasks and on boards with novel objects, where the dynamics model sometimes fails to predict the correct next state. This shows that inter-object functional relations generalize to scenarios in which future predictions are not reliable enough to assist planning. Our method combines the advantages of both the relation and predictive agents and achieves the highest performance on all tasks.
174
+
175
+ <table><tr><td rowspan="2"/><td colspan="2">Training</td><td colspan="2">NovelConfig</td><td colspan="2">NovelObject</td></tr><tr><td>1-to-1</td><td>1-to-m</td><td>1-to-1</td><td>1-to-m</td><td>1-to-1</td><td>1-to-m</td></tr><tr><td>Relation</td><td>98.3</td><td>61.1</td><td>93.7</td><td>60.0</td><td>92.0</td><td>62.8</td></tr><tr><td>Predictive</td><td>97.7</td><td>67.5</td><td>91.0</td><td>67.0</td><td>89.0</td><td>58.2</td></tr><tr><td>BusyBot</td><td>98.3</td><td>71.0</td><td>93.7</td><td>69.4</td><td>92.3</td><td>64.9</td></tr></table>
176
+
177
+ Table 3: Goal-conditioned Manipulation Result
178
+
179
+ ### 5.4 Real-world Experiments
180
+
181
+ Setup. We test the trained model on a busyboard in the real world with robot interactions (Fig. 5). The board consists of 3 interactable trigger objects (switches) and 3 responder objects (LEDs). Objects outside the effective region are ignored. We manually modified the underlying inter-object functional relations of the board by rewiring the objects. We test with 6 different configurations, including 4 one-to-one and 2 one-to-many configurations. For each configuration, the robot interacts with the board for 30 steps and the rollout is grouped into 6 overlapping, consecutive sub-sequences, each of length 25. In total, we generate a real-world dataset of 36 sequences with 108 inter-object functional relation pairs for evaluating the reasoning module.
182
+
183
+ Results. Fig. 5 (c) shows example results, where the algorithm is able to refine inter-object functional relations (remove additional edges) through interactions. The precision and recall of inferred relations are $\mathbf{{93.9}\% }$ and $\mathbf{{100}\% }$ , respectively. All inter-object functional relations can be discovered by our model, with only a few additional pairs predicted. The result shows that the relation reasoning ability of the model is transferable to real-world scenarios.
184
+
185
+ ### 5.5 Limitation and Future Work
186
+
187
+ While the BusyBoard environment is inspired by toys, it still lacks some of the diversity and complexity of real-world toys. For example, real-world toys are often designed with multi-sensory feedback, including both sound and tactile feedback, while our environment focuses on visual effects only.
188
+
189
+ BusyBot's interaction policy assumes that trigger objects are small and can reach all states through single-step actions. However, it does not accommodate large-displacement objects that might require a sequence of actions [20]. The relation reasoning module of BusyBot assumes objects are detected. This assumption is easy to satisfy when the environment is simple and objects are well-separated. However, it may not hold when the environment gets more complicated and cluttered.
190
+
191
+ ## 6 Conclusion
192
+
193
+ We propose a toy-inspired relational environment, BusyBoard, and a learning framework, BusyBot, for embodied AI agents to acquire interaction, reasoning, and planning abilities. Our experiments demonstrate that the rich sensory feedback and amplified effects in BusyBoard help the agent learn a policy to efficiently interact with the environment; using the data collected under this interaction policy, inter-object functional relations can be inferred and used for predicting future states; and by combining the ability to interact and reason, the agent is able to perform goal-conditioned manipulation tasks. We verify the effectiveness and generalizability of our method in both simulation and real-world setups.
194
+
195
+ References
196
+
197
+ [1] R. Held and A. Hein. Movement-produced stimulation in the development of visually guided behavior. Journal of comparative and physiological psychology, 56, 1963. doi:10.1037/ h0040546.
198
+
199
+ [2] G. E. Roberson, M. T. Wallace, and J. A. Schirillo. The sensorimotor contingency of multisensory localization correlates with the conscious percept of spatial unity. Behavioral and Brain Sciences, 24(5), 2001. doi:10.1017/S0140525X0154011X.
200
+
201
+ [3] R. Baillargeon. Infants' physical world. Current Directions in Psychological Science, 13(3), 2004. doi:10.1111/j.0963-7214.2004.00281.x.
202
+
203
+ [4] D. Perille, A. Truong, X. Xiao, and P. Stone. Benchmarking metric ground navigation. In 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pages 116-121. IEEE, 2020.
204
+
205
+ [5] N. Tsoi, M. Hussein, O. Fugikawa, J. D. Zhao, and M. Vázquez. An approach to deploy interactive robotic simulators on the web for hri experiments: Results in social robot navigation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021.
206
+
207
+ [6] Manolis Savva*, Abhishek Kadian*, Oleksandr Maksymets*, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh, and D. Batra. Habitat: A Platform for Embodied AI Research. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
208
+
209
+ [7] C. Li, F. Xia, R. Martín-Martín, M. Lingelbach, S. Srivastava, B. Shen, K. E. Vainio, C. Gokmen, G. Dharan, T. Jain, A. Kurenkov, K. Liu, H. Gweon, J. Wu, L. Fei-Fei, and S. Savarese. igibson 2.0: Object-centric simulation for robot learning of everyday household tasks. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id= 2uGN5jNJROR.
210
+
211
+ [8] T. Yu, D. Quillen, Z. He, R. Julian, A. Narayan, H. Shively, A. Bellathur, K. Hausman, C. Finn, and S. Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning, 2021.
212
+
213
+ [9] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019-3026, 2020.
214
+
215
+ [10] O. Ahmed, F. Träuble, A. Goyal, A. Neitz, Y. Bengio, B. Schölkopf, M. Wüthrich, and S. Bauer. Causalworld: A robotic manipulation benchmark for causal structure and transfer learning, 2020.
216
+
217
+ [11] Y. Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín. robosuite: A modular simulation framework and benchmark for robot learning. In arXiv preprint arXiv:2009.12293, 2020.
218
+
219
+ [12] F. Xiang, Y. Qin, K. Mo, Y. Xia, H. Zhu, F. Liu, M. Liu, H. Jiang, Y. Yuan, H. Wang, L. Yi, A. X. Chang, L. J. Guibas, and H. Su. SAPIEN: A simulated part-based interactive environment. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
220
+
221
+ [13] A. Bakhtin, L. van der Maaten, J. Johnson, L. Gustafson, and R. Girshick. Phyre: A new benchmark for physical reasoning, 2019.
222
+
223
+ [14] A. Jain, A. Szot, and J. Lim. Generalization to new actions in reinforcement learning. In H. D. III and A. Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4661-4672. PMLR, 13-18 Jul 2020. URL https://proceedings.mlr.press/v119/jain20b.html.
224
+
225
+ [15] E. Kolve, R. Mottaghi, W. Han, E. VanderBilt, L. Weihs, A. Herrasti, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi. Ai2-thor: An interactive 3d environment for visual ai, 2019.
226
+
227
+ [16] S. Brahmbhatt, C. Ham, C. C. Kemp, and J. Hays. Contactdb: Analyzing and predicting grasp contact via thermal imaging, 2019.
228
+
229
+ [17] T. Nagarajan, C. Feichtenhofer, and K. Grauman. Grounded human-object interaction hotspots from video. In ICCV, 2019.
230
+
231
+ [18] T. Nagarajan, Y. Li, C. Feichtenhofer, and K. Grauman. Ego-topo: Environment affordances from egocentric video, 2020.
232
+
233
+ [19] K. Mo, L. Guibas, M. Mukadam, A. Gupta, and S. Tulsiani. Where2act: From pixels to actions for articulated 3d objects. In ICCV, 2021.
234
+
235
+ [20] Z. Xu, Z. He, and S. Song. Umpnet: Universal manipulation policy network for articulated objects. IEEE Robotics and Automation Letters, 2022.
236
+
237
+ [21] B. Eisner, H. Zhang, and D. Held. Flowbot3d: Learning 3d articulation flow to manipulate articulated objects. RSS, 2022.
238
+
239
+ [22] S. Y. Gadre, K. Ehsani, and S. Song. Act the part: Learning interaction strategies for articulated object part discovery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15752-15761, 2021.
240
+
241
+ [23] P. Battaglia, R. Pascanu, M. Lai, D. Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. Advances in neural information processing systems, 29, 2016.
242
+
243
+ [24] C. Mitash, A. Boularias, and K. Bekris. Physics-based scene-level reasoning for object pose estimation in clutter. The International Journal of Robotics Research, page 0278364919846551, 2019.
244
+
245
+ [25] Y. Li, A. Torralba, A. Anandkumar, D. Fox, and A. Garg. Causal discovery in physical systems from videos. Advances in Neural Information Processing Systems, 33, 2020.
246
+
247
+ [26] Q. Li, K. Mo, Y. Yang, H. Zhao, and L. Guibas. IFR-Explore: Learning inter-object functional relationships in 3D indoor scenes. In International Conference on Learning Representations (ICLR), 2022.
248
+
249
+ [27] S. Nair, Y. Zhu, S. Savarese, and L. Fei-Fei. Causal induction from visual observations for goal directed tasks, 2019.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/A5l7wE2uqtM/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,244 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § BUSYBOT: LEARNING TO INTERACT, REASON, AND PLAN IN A BUSYBOARD ENVIRONMENT
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We introduce BusyBoard, a toy-inspired robot learning environment that leverages a diverse set of articulated objects and inter-object functional relations to provide rich visual feedback for robot interactions. Based on this environment, we introduce a learning framework, BusyBot, which allows an agent to jointly acquire three fundamental capabilities (interaction, reasoning, and planning) in an integrated and self-supervised manner. With the rich sensory feedback provided by BusyBoard, BusyBot first learns a policy to efficiently interact with the environment; then with data collected using the policy, BusyBot reasons the inter-object functional relations through a causal discovery network; and finally by combining the learned interaction policy and relation reasoning skill, the agent is able to perform goal-conditioned manipulation tasks. We evaluate BusyBot in both simulated and real-world environments, and validate its generalizability to unseen objects and relations. Code and simulation will be publicly available.
12
+
13
+ Keywords: Manipulation, Learning Environment, Reasoning
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Learning through physical interactions plays a critical role in human cognitive development $\left\lbrack {1,2,3}\right\rbrack$ . For instance, a well-designed toy like the "busyboard" (Fig. 1a) can provide an effective learning environment for children to develop fundamental manipulation and reasoning skills: the rich and amplified sensory feedback encourages children to actively explore and interact; and the observed inter-object functional relations (e.g., a switch turns on a light) facilitate the development of reasoning and task solving skills.
18
+
19
+ In this paper, we aim to provide a similar learning environment for embodied artificial agents, the BusyBoard environment, where agents learn to discover the underlying relations of objects through informative interactions and plan for goal-conditioned tasks. While simple at first glance, this relational environment provides an integrated tool for learning and evaluating three critical capabilities of an embodied intelligent system:
20
+
21
+ * Interact: The ability to infer action affordances from visual observations - knowing where and how to manipulate an object to effectively change its state. Learning this skill through visual feedback is particularly hard for small-displacement objects (e.g., switches), whose appearance changes can be subtle even under effective actions.
22
+
23
+ * Reason: The ability to reason about inter-object functional relations (e.g., pressing a button turns on a light). In particular, the agent should learn to infer the relations by observing and predicting future states of the environment, without using the ground-truth relations as supervision.
24
+
25
+ * Plan: The ability to use the learned manipulation and reasoning skills in goal-conditioned planning tasks, in other words, generating a sequence of actions to transform the environment from a random initial state to a given goal state.
26
+
27
+ To learn these skills from the environment, we propose the BusyBot framework that acquires the above three capabilities through self-supervised interactions. To acquire the manipulation skill, the
28
+
29
+ < g r a p h i c s >
30
+
31
+ Figure 1: BusyBoard Environments inspired by toys for children, an integrated tool for learning and evaluating a robot's capabilities in interaction, reasoning, and planning.
32
+
33
+ algorithm learns a visual affordance model that infers effective action candidates through visual feedback. To reason about inter-object functional relations, the algorithm infers a functional scene graph and predicts the future states through a causal discovery network. Finally, to accomplish goal-conditioned manipulation tasks, the algorithm combines the learned action affordances, inter-object relationship, and dynamics to plan its actions with a model predictive control (MPC) framework. In summary, our contributions are two-fold:
34
+
35
+ * We introduce a new learning environment for embodied agents, BusyBoard, which features a diverse set of articulated objects with typical- and small- displacement joints, and rich inter-object functional relations.
36
+
37
+ * We propose BusyBot, an integrated learning framework which allows an embodied agent to acquire fundamental interaction, reasoning, and planning skills through self-supervised interactions and visual feedback.
38
+
39
+ Our experiments demonstrate that by taking advantage of the amplified effects (provided by Busy-Board), the agent is able to acquire an effective manipulation policy for small-displacement articulated objects (e.g., switches). Furthermore, by explicitly inferring the underlying inter-object functional relationship using a causal discovery network, the agent is able to predict the future states of the environment under different actions, and generalizes well to unseen environments, including real-world environment with physical robot interactions.
40
+
41
+ § 2 RELATED WORK
42
+
43
+ Simulation environments for robot learning. Simulation environments are crucial for advances in robot learning. However, most of the existing simulated environments are developed for specific tasks or capabilities, such as navigation [4,5,6,7], manipulation [8,9,10,11,12], causal reasoning in 2D $\left\lbrack {{13},{14}}\right\rbrack$ , or high-level task planning [15]. Inspired by human toys, our BusyBoard is an integrated environment that is compact and relevant to real-world applications, where an embodied agent can jointly learn three critical capabilities: interaction, reasoning, and planning.
44
+
45
+ Learning interaction policy. The ability to interact with a diverse set of objects is critical for many robotics tasks. Different methods have been proposed to learn interaction polices through human demonstrations $\left\lbrack {{16},{17},{18}}\right\rbrack$ or self-guided explorations $\left\lbrack {{19},{20},{21},{22}}\right\rbrack$ . However, most prior works have been ignoring a set of common but challenging objects: small-displacement objects. When interacting with these objects (e.g., switches), the effectiveness of an action often cannot be observed from the object's own visual appearance, which introduces significant challenges for learning. In this work, we address this challenge by taking advantage of BusyBoard, which amplifies action effects through responder objects and enables learning by enriching the supervision signal.
46
+
47
+ Inferring inter-object functional relations. Perceiving and understanding objects individually is often not sufficient for a lot of real-world applications that involve environments with multiple objects. Objects are usually related, and understanding inter-object physical [23, 24, 25] or functional relations is a crucial skill for efficient planning. In this work, we will focus on uncovering the inter-object functional relationship, as defined by Li et al. [26]. One common approach to solve this problem is to induce changes through interventions and iteratively construct a functional scene graph [27]. More recently, Graph Neural Networks (GNNs) have been demonstrated to be promising for extracting the underlying structural causal model (SCM) and predicting future dynamics from motions [25]. In our work, we further demonstrate that GNNs are able to infer inter-object functional relations from changes in visual appearances. We also show that the inferred inter-object functional relationship and scene dynamics can assist action planning for downstream goal-conditioned manipulation tasks.
48
+
49
+ < g r a p h i c s >
50
+
51
+ Figure 2: BusyBoard Environment is procedurally generated using articulated objects (a, b) with randomly sampled inter-object functional relation pairs between trigger and responder objects (c). (d) shows example boards and the underlying functional relations.
52
+
53
+ § 3 THE BUSYBOARD ENVIRONMENT
54
+
55
+ To learn from a diverse set of environments, we designed a data generation pipeline to generate a large collection of simulated BusyBoards (illustrated in Fig. 2), which provide the following advantages:
56
+
57
+ * Amplified effects. In addition to providing visual feedback on the interacted object (e.g., appearance change on a button after being pressed), BusyBoard also amplifies the effect of an action with responder objects (e.g., a light turns on after the button is pressed). This is especially useful for learning manipulation policies for objects with small displacements upon interaction, for which state changes are often hard to observe.
58
+
59
+ * Compact. Unlike learning from room-scale environments [26], learning from BusyBoard does not require the agent to navigate in a large space to observe all the state changes. Thus BusyBoard is able to provide denser rewards for learning manipulation and reasoning before developing skills for navigation, which is analogous to the human cognition development process.
60
+
61
+ * Relevant. In contrast to game-like environments (e.g., Atari), our environment contains everyday objects and functional relations that are analogous to those in the real world, making the learned manipulation and reasoning skills relevant and potentially transferable to the real world.
62
+
63
+ Trigger and responder objects. The BusyBoard is procedurally generated using object URDF models. For trigger objects, we select switch instances from the Partnet-Mobility dataset [12], including small-displacement objects (with small displacements upon interaction), multi-direction objects (containing one movable link that can be pushed in multiple directions), and multi-link objects (containing multiple movable links). We use other object categories (e.g., lamp, door, tracktoy) as responder objects, which can be either single-stage or multi-stage. A single-stage responder has only two possible states defined by either appearance (e.g., lamp on or off) or joint state (e.g., door open or closed). A multi-stage responder has multiple possible states (e.g., multiple light colors of a lamp responder or multiple joint positions of a tracktoy responder).
64
+
65
+ Relations. One instance of BusyBoard may contain 2-3 triggers and 5-7 responders with randomly generated relations. Effects are triggered by changes in the joint positions of the trigger objects. We use Pybullet to simulate object movement under robot interactions. We introduce three types of inter-object functional relations (shown in Fig. 2):
66
+
67
+ * One-to-one: one trigger controls one single-stage responder.
68
+
69
+ * One-to-many effects: one trigger controls multiple effects on one responder. The trigger is a multi-direction object and the responder is a multi-stage object.
70
+
71
+ * One-to-many objects: one trigger controls multiple responders. The trigger is a multi-link object (a switch with multiple buttons), and each link controls one single-stage responder.
72
+
73
+ < g r a p h i c s >
74
+
75
+ Figure 3: BusyBot Overview. [Interaction] infers a sequence of actions to efficiently interact with a given scene from visual input. [Reasoning] infers a functional scene graph (i.e., inference network) and predicts future states (i.e., dynamic network). [Planning] uses the manipulation policy network (learned from multiple boards), inference and dynamics network (extracted from the specific board) to plan actions for the target state.
76
+
77
+ In goal-conditioned tasks, for both "one-to-many effects" and "one-to-many objects" relations, the algorithm needs to not only know the trigger object to interact with, but also the direction and position of the action to execute. In this paper, we use "one-to-many" to refer to the super-set of both categories. We exclude many-to-one relations to eliminate possible ambiguities.
78
+
79
+ § 4 THE BUSYBOT FRAMEWORK
80
+
81
+ The goal of BusyBot is to learn how to meaningfully interact with the BusyBoard environment (§4.1), infer the inter-object functional relations and dynamics through these interactions (§4.2), and eventually perform goal-conditioned manipulation using the learned interaction policy, relations, and dynamics (§4.3). We will discuss each module in detail below.
82
+
83
+ § 4.1 INTERACT: LEARNING TO INTERACT WITH AMPLIFIED EFFECTS
84
+
85
+ The goal of the interaction policy $\pi$ is to generate a sequence of actions to interact with articulated objects (i.e., triggers). The interaction policy should be able to: (a) choose the right interaction position, (b) select a proper action direction based on the object's articulation structure (i.e., move up-and-down or left-and-right), and (c) interact with different objects to explore novel states of the board.
86
+
87
+ Action Inference. The task is formulated as: given a top-down depth image ${o}_{t} \in {\mathbb{R}}^{W \times H}$ , the agent with policy $\pi$ generates an action ${a}_{t}$ at each step: $\pi \left( {o}_{t}\right) \rightarrow {a}_{t}$ . The action is represented in $\mathrm{{SE}}\left( 3\right)$ space, parameterized by an end-effector (i.e., a suction-based gripper) position and a moving direction ${a}_{t} = \left( {{a}_{t}^{\text{ pos }},{a}_{t}^{\text{ dir }}}\right)$ , where ${a}_{t}^{\text{ pos }} \in {\mathbb{R}}^{3}$ is the $3\mathrm{D}$ coordinate and ${a}_{t}^{\text{ dir }} \in {\mathbb{R}}^{3},\left( {\begin{Vmatrix}{a}_{t}^{\text{ dir }}\end{Vmatrix} = 1}\right)$ is a unit vector in 3D indicating the moving direction of the end-effector. The moving distance is incrementally assigned until reaching a pre-defined limit.
88
+
89
+ The position network takes the depth image as input and outputs per-pixel position affordance $P \in {\left\lbrack 0,1\right\rbrack }^{W \times H}$ , which indicates the likelihood of an effective interact position. The direction inference network takes in the depth observation and the selected action position (represented as a 2-D Gaussian distribution centered around the corresponding pixel location of the 3-D action position) and outputs a score for each direction candidate $r\left( {a}_{t}^{\text{ dir }}\right) \in \left\lbrack {0,1}\right\rbrack$ . We uniformly sample 18 directions in $\mathrm{{SO}}\left( 3\right)$ as the direction candidates. Since it is often hard to identify the state of small-displacement objects from visual observations, the agent executes both the selected direction and its opposite direction.
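The following minimal sketch illustrates how the two inference steps described above could be combined at test time. It is not the authors' implementation: the `position_net` / `direction_net` interfaces, the heatmap width, and the way direction candidates are drawn are assumptions made for illustration.

```python
import numpy as np
import torch
import torch.nn as nn


def gaussian_heatmap(h, w, center, sigma=5.0):
    """Encode a selected pixel position as a 2-D Gaussian heatmap."""
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2)).astype(np.float32)


def sample_direction_candidates(n=18):
    """Sample n unit vectors as candidate moving directions."""
    v = np.random.randn(n, 3)
    return v / np.linalg.norm(v, axis=1, keepdims=True)


@torch.no_grad()
def infer_action(position_net: nn.Module, direction_net: nn.Module, depth: np.ndarray):
    """depth: (H, W) top-down depth image -> (interact pixel, best moving direction)."""
    h, w = depth.shape
    depth_t = torch.from_numpy(depth).float()[None, None]        # (1, 1, H, W)
    affordance = position_net(depth_t).sigmoid()[0, 0]           # (H, W) scores in [0, 1]
    pix = np.unravel_index(int(affordance.argmax()), (h, w))     # highest-affordance pixel
    pos_map = torch.from_numpy(gaussian_heatmap(h, w, pix))[None, None]
    dirs = sample_direction_candidates()
    scores = [direction_net(torch.cat([depth_t, pos_map], 1),    # assumed network interface
                            torch.from_numpy(d).float()[None]).sigmoid().item()
              for d in dirs]
    return pix, dirs[int(np.argmax(scores))]
```

Since the agent executes both the selected direction and its opposite, an actual controller would attempt `best_dir` and `-best_dir` in sequence.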
90
+
91
+ Supervision. For each executed action the reward is computed by image difference:
92
+
93
+ $$
94
+ {r}_{\mathrm{img}}\left( {a}_{t}^{\mathrm{dir}}\right) = \begin{cases} 1 & \text{if } \sum_{i = 1}^{H}\sum_{j = 1}^{W} I\left( o_{ij}, o_{ij}^{\prime}\right) > \delta \\ 0 & \text{otherwise,} \end{cases} \qquad I\left( o_{ij}, o_{ij}^{\prime}\right) = \begin{cases} 1 & \text{if } o_{ij} \neq o_{ij}^{\prime} \\ 0 & \text{if } o_{ij} = o_{ij}^{\prime} \end{cases} \tag{1}
95
+ $$
96
+
97
+ where $o$ and ${o}^{\prime }$ denote RGB image observations before and after the action execution, and $\delta$ is a threshold that specifies the minimum number of changed pixels. We use a binary cross-entropy (BCE) loss between the inferred action score $r\left( {a}_{t}^{\text{ dir }}\right) \in \left\lbrack {0,1}\right\rbrack$ and the ground-truth reward ${r}_{\text{ img }}\left( {a}_{t}^{\text{ dir }}\right) \in \{ 0,1\}$ computed from the image observations.
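A minimal sketch of the image-difference reward in Eq. 1 and the BCE supervision described above; the threshold `delta` and the assumption that the network outputs a sigmoid score are placeholders.

```python
import numpy as np
import torch
import torch.nn.functional as F


def image_diff_reward(obs_before: np.ndarray, obs_after: np.ndarray, delta: int = 500) -> float:
    """Return 1.0 if more than `delta` pixels changed between the two RGB observations."""
    changed = np.any(obs_before != obs_after, axis=-1)   # (H, W): True where any channel differs
    return float(changed.sum() > delta)


def direction_loss(score: torch.Tensor, obs_before: np.ndarray, obs_after: np.ndarray,
                   delta: int = 500) -> torch.Tensor:
    """BCE between the inferred action score r(a_dir) in [0, 1] and the image-difference reward."""
    target = torch.tensor([image_diff_reward(obs_before, obs_after, delta)])
    return F.binary_cross_entropy(score.view(1), target)
```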
98
+
99
+ Exploration. At the early stage of training the position and direction inference networks, we use the epsilon-greedy method to encourage random exploration. Additionally, in order to prevent the model from only selecting the position with the highest affordance score, we apply the Upper Confidence Bound (UCB) bandit algorithm to the inferred position affordance to encourage exploration of all object positions in the environment. Given the position affordance $P$ , the updated position affordance is ${P}^{\prime }\left( {i, j}\right) = P\left( {i, j}\right) + c\sqrt{\frac{\ln \left( t\right) }{N}}$ , where $c = 0.5$ , $t$ is the number of steps, and $N$ is the number of times the pixel $\left( {i, j}\right)$ falls in the $M \times M$ ( $M = 10$ ) window centered around each previously selected pixel.
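A minimal sketch of the UCB-style exploration bonus described above; the visit-count bookkeeping (and treating never-visited pixels as visited once) is an implementation assumption.

```python
import numpy as np


def ucb_affordance(P: np.ndarray, selected_pixels, t: int, c: float = 0.5, M: int = 10) -> np.ndarray:
    """Add an exploration bonus to a position affordance map P of shape (H, W).

    selected_pixels: list of (i, j) pixels chosen in previous steps.  N(i, j) counts how
    often (i, j) fell inside the M x M window around a previously selected pixel.
    """
    N = np.zeros_like(P)
    half = M // 2
    for (ci, cj) in selected_pixels:
        N[max(0, ci - half):ci + half + 1, max(0, cj - half):cj + half + 1] += 1
    bonus = c * np.sqrt(np.log(max(t, 2)) / np.maximum(N, 1.0))
    return P + bonus
```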
100
+
101
+ § 4.2 REASON: LEARNING TO DISCOVER INTER-OBJECT RELATIONS BY PREDICTING THE FUTURE
102
+
103
+ To meaningfully interact within a multi-object environment, the robot not only needs to manipulate individual objects, but also needs to learn the inter-object functional relations. The reasoning module takes in RGB image sequences of the agent's interactions (§4.1), infers the inter-object relations, and predicts future dynamics, which would guide goal-conditioned planning in the next section (§4.3). To accomplish this goal, we adopt and modify the V-CDN model [25].
104
+
105
+ The interaction dataset is collected using learned interaction policy: at each step, positions with affordance score above a threshold are grouped into clusters using the K-means clustering algorithm ( $k = 7$ , the maximum possible number of movable links on all busyboards), and positions with the highest score in each cluster are selected as position candidates. Conditioned on each position candidate, directions with the highest affordance score will be selected to form the final action candidate, from which an action will be randomly chosen to execute. For each board environment, RGB image observations of 30 interaction steps are generated and split into two subsequences: the first $T$ steps for inferring the functional scene graph, and the rest for predicting future dynamics. In addition, to prevent the model from overfitting on board appearances, we ensure that every 20 board environments share the same initial visual appearance but with different functional relations.
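A minimal sketch of the candidate-selection step described above, using scikit-learn's KMeans; the affordance threshold and the data layout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans


def select_position_candidates(affordance: np.ndarray, threshold: float = 0.5, k: int = 7):
    """Cluster high-affordance pixels and keep the highest-scoring pixel per cluster."""
    ys, xs = np.where(affordance > threshold)
    if len(ys) == 0:
        return []
    pts = np.stack([ys, xs], axis=1).astype(float)
    k = min(k, len(pts))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pts)
    candidates = []
    for c in range(k):
        cluster = pts[labels == c].astype(int)
        scores = affordance[cluster[:, 0], cluster[:, 1]]
        candidates.append(tuple(cluster[int(np.argmax(scores))]))
    return candidates   # one (row, col) interact position per cluster
```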
106
+
107
+ The inference network is implemented with three Graph Neural Networks (GNNs) to extract functional relations as a scene graph. Each object ${O}_{i}$ corresponds to a node $i$ in the graph, with a node input ${n}_{i}^{1 : T}$ that combines the object's 256-dimensional visual features (extracted at the centers of the objects' bounding boxes using the 10th layer of ResNet-50) and the object's 3D position. The first GNN learns spatial node and edge embeddings at each step, which are concatenated with 256-dimensional embeddings of the executed actions ${a}_{i}^{1 : T}$ learned from an MLP layer. The combined embeddings are then aggregated over the temporal dimension using a 1-D convolution network and input to the second GNN, which predicts a probability distribution over edge types ${e}^{d} = {\left\{ {e}_{ij}^{d} \mid {e}_{ij}^{d} \in {\mathbb{R}}^{2}\right\} }_{i,j = 1}^{N}$ (index 0 indicates no relation and index 1 indicates a relation). Conditioned on the edge types, the third GNN predicts 32-dimensional edge embeddings ${e}^{h} = {\left\{ {e}_{ij}^{h} \mid {e}_{ij}^{h} \in {\mathbb{R}}^{32}\right\} }_{i,j = 1}^{N}$ which store the history dynamics associated with each edge.
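A rough sketch of how per-object node inputs could be assembled from visual features and 3-D positions as described above. The intermediate ResNet-50 stage used here is only a stand-in for the paper's "10th layer" features, and all interfaces are assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision

# Intermediate ResNet-50 feature map (an approximation of the paper's 10th-layer features).
backbone = torchvision.models.resnet50(weights=None)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:6])   # conv1 .. layer2


@torch.no_grad()
def object_node_features(image: torch.Tensor, centers_uv: torch.Tensor, centers_xyz: torch.Tensor):
    """image: (1, 3, H, W); centers_uv: (N, 2) pixel coords (row, col); centers_xyz: (N, 3).

    Returns (N, C + 3) node inputs: a visual feature sampled at each object's bounding-box
    center, concatenated with the object's 3-D position.
    """
    _, _, H, W = image.shape
    fmap = feature_extractor(image)                                        # (1, C, H/8, W/8)
    # grid_sample expects normalized (x, y) coordinates in [-1, 1].
    grid = torch.stack([centers_uv[:, 1] / (W - 1), centers_uv[:, 0] / (H - 1)], dim=-1)
    grid = (grid * 2 - 1).view(1, -1, 1, 2).float()
    feats = F.grid_sample(fmap, grid, align_corners=True)                  # (1, C, N, 1)
    return torch.cat([feats[0, :, :, 0].t(), centers_xyz], dim=-1)
```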
108
+
109
+ The dynamics network is a Graph Recurrent Network (GRN) that predicts the next state ${n}^{t + 1}$ given the current observation ${n}^{t}$ , the executed action ${a}^{t}$ , and the edges $E = \left\{ {{e}^{d},{e}^{h}}\right\}$ from the inferred functional scene graph. The inference and dynamics network are jointly trained on the objective to minimize the mean squared error (MSE) between predicted and ground-truth object features.
110
+
111
+ $$
112
+ L = \mathop{\min }\limits_{{\phi ,\psi }}\mathop{\sum }\limits_{t}{MSE}\left( {{n}^{t + 1},{f}_{\psi }^{D}\left( {{n}^{t},{a}^{t},{f}_{\phi }^{I}\left( {{n}^{1 : T},{a}^{1 : T}}\right) }\right) }\right) \tag{2}
113
+ $$
114
+
115
+ where ${f}_{\phi }^{I}$ is the inference model parameterized by $\phi$ , and ${f}_{\psi }^{D}$ is the dynamics model parameterized by $\psi$ .
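A minimal sketch of the joint training objective in Eq. 2; `f_inference` and `f_dynamics` are placeholder callables standing in for the inference and dynamics networks.

```python
import torch.nn.functional as F


def reasoning_loss(f_inference, f_dynamics, nodes, actions, T: int):
    """nodes: (L, N, D) per-object features over L steps; actions: (L, A) executed actions.

    The first T steps are used to infer the functional scene graph; the remaining steps
    supervise one-step future predictions with an MSE loss (Eq. 2).
    """
    edges = f_inference(nodes[:T], actions[:T])            # inferred edge types + embeddings
    loss = 0.0
    for t in range(T, nodes.shape[0] - 1):
        pred_next = f_dynamics(nodes[t], actions[t], edges)
        loss = loss + F.mse_loss(pred_next, nodes[t + 1])
    return loss
```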
116
+
117
+ § 4.3 PLAN: GOAL-CONDITIONED MANIPULATION WITH RELATION PREDICTIVE AGENT
118
+
119
+ Finally, we apply BusyBot on goal-conditioned manipulation tasks. Given an initial and target state image of a board, the task is to infer 1) which object(s) to manipulate; 2) what action(s) to execute in order to successfully reach the target state.
120
+
121
+ Using the data collection method discussed in the reasoning module, the agent infers an action candidate set and generates an interaction sequence of 30 images, which are input to the inference network to obtain the functional scene graph. Then we consider three options to plan for goal-conditioned tasks: 1) Relation agent: at each step, identify a responder that needs to be changed, and find the corresponding trigger based on the functional scene graph. This method is similar to the idea of Li et al. [26]; however, the agent might have trouble handling one-to-many relations. To solve this issue, we propose 2) Predictive agent, which uses the dynamics network from the reasoning module and chooses the action that minimizes the L2 distance between the predicted next state and the target state. However, the predictive agent may have difficulty generalizing to novel object instances, due to the difficulty of predicting unseen dynamics. 3) Our final method, BusyBot, combines the relation and predictive agents: action candidates are first filtered based on the functional scene graph and then selected based on future dynamics predictions. More discussions are provided in Sec. 5.
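A minimal sketch of the combined relation-plus-predictive selection described above, reduced to a greedy one-step choice; the `.trigger` attribute, the scene-graph lookup, and the dynamics rollout are placeholder assumptions (the paper frames planning within an MPC loop).

```python
import numpy as np


def busybot_plan_step(candidates, scene_graph, dynamics, state, goal, responders_to_change):
    """One planning step of the combined agent.

    candidates: list of actions, each with a `.trigger` id (assumed representation);
    scene_graph: dict mapping trigger id -> set of responder ids it controls;
    dynamics(state, action) -> predicted next state as np.ndarray; goal: np.ndarray.
    """
    # Relation filtering: keep actions whose trigger controls a responder that must change.
    relevant = [a for a in candidates
                if scene_graph.get(a.trigger, set()) & responders_to_change]
    if not relevant:                       # fall back to all candidates if the graph is empty
        relevant = candidates
    # Predictive selection: pick the action whose predicted next state is closest to the goal.
    dists = [np.linalg.norm(dynamics(state, a) - goal) for a in relevant]
    return relevant[int(np.argmin(dists))]
```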
122
+
123
+ < g r a p h i c s >
124
+
125
+ Figure 4: Qualitative Results, (a) action affordances (b) interaction steps and corresponding reasoning results (c) More reasoning results $\rightarrow$ : inferred inter-object functional relations. $\rightarrow$ : ground truth.
126
+
127
+ § 5 EVALUATION
128
+
129
+ We evaluate BusyBot with both simulated (Fig. 4) and real-world busyboards (Fig. 5). In simulation experiments, we set up the following environments: a) Training Board: for training the interaction and reasoning modules; b) Novel Config: testing boards constructed with training object instances but in new configurations, including inter-object functional relations, position and orientation of objects, and board color and texture; c) Novel Object: both object instances and board configurations are novel. In total, we generate 10,000 training boards, 2,000 boards with novel configurations, and 2,000 boards with novel object instances. Each board has 30 interaction images, where 23 images are reserved for relation inference and the rest for future predictions. As for objects on the board, we use 41 switches, 10 doors, 5 lamps, and 2 tracktoy objects, split into training / testing with ratios 32/9, 5/5, 3/2, 1/1. The setup for real-world evaluation is described in Sec. 5.4.
130
+
131
+ § 5.1 INTERACTION MODULE EVALUATION
132
+
133
+ To evaluate the interaction policy network, we compute the average precision and recall of the inferred actions for the boards, where precision = # successful actions / # total proposed actions, and recall = # successfully interacted objects / # total interactable objects. We compare with the following methods:
134
+
135
+ * Oracle (joint state supervision): Interaction policy supervised on joint states. This is considered as the oracle because changes in joint states are directly obtained from simulation.
136
+
137
+ * w/o responder: Interaction policy supervised on visual feedback but without responder effects.
+
+ * w/o exploration: An ablated version of BusyBot without using UCB for exploration.
138
+
139
+ Results and Analysis. Tab. 1 summarizes the results tested on boards with novel configurations and objects. We can see that [w/o responder] achieves poor performance since the visual feedback of small-displacement objects alone is insufficient for learning. In contrast, by taking into account the responder effects, [BusyBot] is able to get informative rewards and achieves comparable performance with the oracle. This validates our hypothesis that triggered responder effects can be used to amplify the visual feedback and assist the model in learning good interaction policies. We also did an ablation study on exploration and observe that without encouraging exploration of new objects, the recall of [w/o exploration] dropped by more than 45% compared to that of [BusyBot].
140
+
141
+ | Method | Novel Config (Prec / Recall) | Novel Object (Prec / Recall) |
+ | --- | --- | --- |
+ | Oracle | 91.8 / 82.4 | 79.5 / 90.2 |
+ | w/o responder | 0.71 / 0.65 | 0.24 / 0.49 |
+ | w/o exploration | 94.2 / 33.7 | 81.9 / 38.6 |
+ | BusyBot | 90.1 / 80.1 | 82.6 / 84.8 |
161
+
162
+ Table 1: Performance of Interaction Policy
163
+
164
+ § 5.2 RELATION REASONING MODULE EVALUATION
165
+
166
+ The reasoning module is evaluated by the following metrics: 1) Relation inference accuracy, measured by the precision (Edge-P) and recall (Edge-R) of the inferred functional relation pairs. 2) Future state prediction accuracy (Pred-A), measured by the percentage of correct future state predictions. We compare the following alternative methods:
167
+
168
+ < g r a p h i c s >
169
+
170
+ Figure 5: Real-world Busyboard. We test the trained model on a real-world busyboard with robot interactions (a). We also manually modify the underlying inter-object functional relations and show that the algorithm discovers the relations (c) through interactions (b). More results are included in supp.
171
+
172
+ * w/o inference: An ablated version of the model without the inference network. The dynamics network takes in all history interaction data and directly predicts the next state.
173
+
174
+ * Bad interact: An ablated version of our method, where the input of the reasoning network are data collected under an inferior interaction policy (w/o exploration).
175
+
176
+ Results and Analysis. Comparing with [w/o inference], we see that without inferring the inter-object relations, the model overfits on the training data and generalizes poorly to boards with novel configurations and objects. We also observe that with [Bad interact], the reasoning model is not able to uncover the relations accurately or make correct future state predictions. In comparison, our model generalizes well to novel board configurations and achieves performance comparable to that on the training boards. This demonstrates that a good interaction policy helps the agent uncover the correct inter-object functional relations, which then helps the agent understand scene dynamics. For boards with novel object instances, even though the future state prediction accuracy drops by around 40% compared to the seen instances, which is expected since the object features are never seen by the dynamics model, the relation inference accuracy is still comparable to boards with seen object instances. The performance on novel boards verifies that the model's ability to infer inter-object functional relationships can transfer to new scenes and new objects.
177
+
178
+ | Method | Training Board (Edge-P / Edge-R / Pred-A) | Novel Config (Edge-P / Edge-R / Pred-A) | Novel Object (Edge-P / Edge-R / Pred-A) |
+ | --- | --- | --- | --- |
+ | w/o inference | - / - / 79.2 | - / - / 36.2 | - / - / 7.04 |
+ | w/o exploration | 55.6 / 51.0 / 89.6 | 74.6 / 3.10 / 14.3 | 75.1 / 0.96 / 11.6 |
+ | BusyBot | 95.8 / 100 / 88.1 | 95.5 / 99.7 / 73.8 | 85.0 / 99.5 / 31.0 |
195
+
196
+ Table 2: Performance of Reasoning Module. For BusyBot, while the future state prediction accuracy (Pred-A) decreases for unseen board appearances (novel config, novel object), the reasoning module is still able to reliably infer the inter-object functional relations (Edge-P, Edge-R) in novel scenarios.
197
+
198
+ § 5.3 GOAL-CONDITIONED MANIPULATION EVALUATION
199
+
200
+ We generate 50 one-to-one tasks and 50 one-to-many tasks for each type of board (training, novel config, novel object). One-to-one tasks contain only two-state triggers, and thus only require the algorithm to identify the correct trigger (similar to the task studied in IFR-Explore [26]). One-to-many tasks contain both multi-direction and multi-link triggers that require the agent to not only identify the correct trigger, but also infer the correct action to manipulate the trigger (e.g., the correct button position or pushing direction).
201
+
202
+ < g r a p h i c s >
203
+
204
+ Figure 6: Goal-conditioned manipulation. Compared to the predictive agent, the relation agent generalizes better on novel objects, while struggling to handle one-to-many relations. Our method BusyBot combines the advantages of both agents.
205
+
206
+ Metrics & Baselines. We measure object-level success rate on both one-to-one and one-to-many manipulation tasks for each type of board. The success rate is defined at the object level at the end of an interaction sequence with a maximum of 8 steps. Success rate = # affectable responders in goal state / # total affectable responders. We compare the three agents as discussed in the method section.
207
+
208
+ Results and Analysis. All agents achieve good performance on one-to-one tasks. This means that both the relations and dynamics learned by the reasoning module can generalize to novel board configurations and objects. The predictive agent achieves better performance on one-to-many tasks with seen object instances by leveraging future predictions to select the correct action to apply on the trigger object. In contrast, the relation agent can only identify the trigger object but not the exact action (e.g., which link to interact with or which direction to push). On the other hand, the relation agent performs slightly better than the predictive agent on all one-to-one tasks and on boards with novel objects, where the dynamics model sometimes fails to predict the correct next state. This shows that the inter-object functional relationships can generalize to scenarios where future predictions are not reliable enough to assist planning. Our method combines the advantages of both the relation and predictive agents and achieves the highest performance on all tasks.
209
+
210
+ | Method | Training (1-to-1 / 1-to-m) | Novel Config (1-to-1 / 1-to-m) | Novel Object (1-to-1 / 1-to-m) |
+ | --- | --- | --- | --- |
+ | Relation | 98.3 / 61.1 | 93.7 / 60.0 | 92.0 / 62.8 |
+ | Predictive | 97.7 / 67.5 | 91.0 / 67.0 | 89.0 / 58.2 |
+ | BusyBot | 98.3 / 71.0 | 93.7 / 69.4 | 92.3 / 64.9 |
227
+
228
+ Table 3: Goal-conditioned Manipulation Result
229
+
230
+ § 5.4 REAL-WORLD EXPERIMENTS
231
+
232
+ Setup. We test the trained model on a busyboard in the real world with robot interactions (Fig. 5). The board consists of 3 interactable trigger objects (switches) and 3 responder objects (LEDs). Objects outside the effective region are ignored. We manually modified the underlying inter-object functional relations of the board by rewiring the objects. We test with 6 different configurations, including 4 one-to-one and 2 one-to-many configurations. For each configuration, the robot interacts with the board for 30 steps and the rollout is grouped into 6 overlapping, contiguous sub-sequences, each of which has a length of 25. In total, we generate a real-world dataset of 36 sequences with 108 inter-object functional relation pairs for evaluating the reasoning module.
233
+
234
+ Results. Fig. 5 (c) shows example results, where the algorithm is able to refine inter-object functional relations (remove additional edges) through interactions. The precision and recall of inferred relations are 93.9% and 100%, respectively. All inter-object functional relations can be discovered by our model, with only a few additional pairs predicted. The result shows that the relation reasoning ability of the model is transferable to real-world scenarios.
235
+
236
+ § 5.5 LIMITATION AND FUTURE WORK
237
+
238
+ While the BusyBoard environment is inspired by toys, it still lacks some of the diversity and complexity of real-world toys. For example, real-world toys are often designed with multi-sensory feedback, including both sound and touch, while our environment focuses on visual effects only.
239
+
240
+ BusyBot's interaction policy assumes that the trigger objects are small and can reach all of their states through single-step actions; it does not accommodate large-displacement objects that might require a sequence of actions [20]. The relation reasoning module of BusyBot assumes that objects are already detected. This assumption is easy to satisfy when the environment is simple and objects are well separated, but it may not hold when the environment becomes more complicated and cluttered.
241
+
242
+ § 6 CONCLUSION
243
+
244
+ We propose a toy-inspired relational environment, BusyBoard, and a learning framework, BusyBot, for embodied AI agents to acquire interaction, reasoning, and planning abilities. Our experiments demonstrate that the rich sensory feedback and amplified effects in BusyBoard help the agent learn a policy to efficiently interact with the environment; using the data collected under this interaction policy, inter-object functional relations can be inferred and used for predicting future states; and by combining the ability to interact and reason, the agent is able to perform goal-conditioned manipulation tasks. We verify the effectiveness and generalizability of our method in both simulation and real-world setups.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/AdFROt9BoqE/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,259 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Cross-Domain Transfer via Semantic Skill Imitation
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g., human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g., a robotic manipulator in a simulated kitchen. Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove". This allows us to transfer demonstrations across environments (e.g., real-world to simulated kitchen) and agent embodiments (e.g., bimanual human demonstration to robotic arm). We evaluate on three challenging cross-domain learning problems and match the performance of demonstration-accelerated RL approaches that require in-domain demonstrations. In a simulated kitchen environment, our approach learns long-horizon robot manipulation tasks, using less than 3 minutes of human video demonstrations from a real-world kitchen. This enables scaling robot learning via the reuse of demonstrations, e.g., collected as human videos, for learning in any number of target domains.
12
+
13
+ Keywords: Reinforcement Learning, Imitation, Transfer Learning
14
+
15
+ ## 1 Introduction
16
+
17
+ ![01963f3a-c8a2-73eb-a3fd-293db5aaef01_0_899_1195_589_646_0.jpg](images/01963f3a-c8a2-73eb-a3fd-293db5aaef01_0_899_1195_589_646_0.jpg)
18
+
19
+ Figure 1: We address semantic imitation, which aims to leverage demonstrations from a source domain, e.g., human video demonstrations, to accelerate the learning of the same tasks in a different target domain, e.g., controlling a robotic manipulator in a simulated kitchen environment.
20
+
21
+ Consider a person imitating an expert in two scenarios: a beginner learning to play tennis, and a chef following a recipe for a new dish. In the former case, when mastering the basic skills of tennis, humans tend to imitate the precise arm movements demonstrated by the expert. In contrast, when operating in a familiar domain, such as a chef learning to cook a new dish, imitation happens at a higher level of abstraction. Instead of imitating individual movements, they follow high-level, semantically meaningful skills like "stir the mixture" or "turn on the oven". Such semantic skills generalize across environment layouts, and allow humans to follow demonstrations across substantially different environments.
22
+
23
+ Most works that leverage demonstrations in robotics imitate low-level actions. Demonstrations are typically provided by manually moving the robot [1] or via teleoperation [2]. A critical challenge of this approach is scaling: demonstrations need to be collected in every new environment. On the other hand, imitation of high-level (semantic) skills has the promise of generalization: demonstrations can be collected in one kitchen and applied to any number of kitchens, eliminating the need to re-demonstrate in every new environment. Learning via imitation of high-level skills can lead to scalable and generalizable robot learning.
24
+
25
+ In this work, we present Semantic Transfer Accelerated RL (STAR), which accelerates RL using cross-domain demonstrations by leveraging semantic skills, instead of low-level actions. We consider a setting with significantly different source and target environments. Figure 1 shows an example: a robot arm learns to do a kitchen manipulation task by following a visual human demonstration from a different (real-world) kitchen. An approach that follows the precise arm movements of the human will fail due to embodiment and environment differences. Yet, by following the demonstrated semantic skills like "open the microwave" and "turn on the stove", our approach can leverage demonstrations despite the domain differences. Like the chef in the above example, we use prior experience for enabling this semantic transfer. We assume access to datasets of prior experience collected across many tasks, in both the source and target domains. From this data, we learn semantic skills like "open the microwave" or "turn on the stove". Next, we collect demonstrations of the task in the source domain and find "semantically similar" states in the target domain. Using this mapping, we learn a policy to follow the demonstrated semantic skills in semantically similar states in the target domain.
26
+
27
+ We present results on two semantic imitation problems in simulation and on real-to-sim transfer from human videos. In simulation, we test STAR in: (1) a maze navigation task across mazes of different layouts and (2) a sequence of kitchen tasks between two variations of the FrankaKitchen environment [3]. In both tasks our approach matches the learning efficiency of methods with in-domain demonstrations, despite only using cross-domain demonstrations. Additionally, we show that a human demonstration video recorded within 3 minutes in a real-world kitchen can accelerate the learning of long-horizon manipulation tasks in the FrankaKitchen by hundreds of thousands of robot environment interactions.
28
+
29
+ In summary, our contributions are twofold: (1) we introduce STAR, an approach for cross-domain transfer via learned semantic skills, (2) we show that STAR can leverage demonstrations across substantially differing domains to accelerate the learning of long-horizon tasks.
30
+
31
+ ## 2 Related Work
32
+
33
+ Learning from demonstrations. Learning from Demonstrations (LfD, Argall et al. [4]) is a popular method for learning robot behaviors using demonstrations of the target task, often collected by human operators. Common approaches include behavioral cloning (BC, Pomerleau [5]) and adversarial imitation approaches [6]. A number of works have proposed approaches for combining these imitation objectives with reinforcement learning [7, 8, 9, 10]. However, all of these approaches require demonstrations in the target domain, limiting their applicability to new domains. In contrast, our approach imitates the demonstrations' semantic skills and thus enables transfer across domains.
34
+
35
+ Skill-based Imitation. Using temporal abstraction via skills has a long tradition in hierarchical RL [11, 12, 13]. Skills have also been used for the imitation of long-horizon tasks. Pertsch et al. [14], Hakhamaneshi et al. [15] learn skills from task-agnostic offline experience [16, 17] and imitate demonstrated skills instead of primitive actions. But, since the learned skills do not capture semantic information, they require demonstrations in the target domain. Xu et al. [18], Huang et al. [19] divide long-horizon tasks into subroutines, but struggle if the two domains requires a different sequence of subroutines, e.g., if skill pre-conditions are not met in the target environment. Our approach is robust to such mismatches without requiring demonstrations in the target domain.
36
+
37
+ Cross-Domain Imitation. Peng et al. [20] assume a pre-specified mapping between source and target domain. [21, 22] leverage offline experience to learn mappings, while [23, 24, 25] rely on paired demonstrations. A popular goal is to leverage human videos for robot learning since they are easy to collect at scale. [26, 27] learn reward functions from human demonstrations and Schmeckpeper et al. [28] add human experience to an RL agent's replay buffer, but they only consider short-horizon tasks and rely on environments being similar. Yu et al. [29] meta-learn cross-domain subroutines, but cannot handle different subroutines between source and target. Our approach imitates long-horizon tasks across domains without a pre-defined mapping and is robust to different semantic subroutines.
38
+
39
+ ## 3 Problem Formulation
40
+
41
+ We define a source environment $S$ and a target environment $T$ . In the source domain, we have $N$ demonstrations ${\tau }_{1 : N}^{S}$ with ${\tau }_{i}^{S} = \left\{ {{s}_{0}^{S},{a}_{0}^{S},{s}_{1}^{S},{a}_{1}^{S},\ldots }\right\}$ sequences of states ${s}^{S}$ and actions ${a}^{S}$ . Our goal is to leverage these demonstrations to accelerate training of a policy $\pi \left( {s}^{T}\right)$ in the target environment, acting on target states ${s}^{T}$ and predicting actions ${a}^{T}.\pi \left( {s}^{T}\right)$ maximizes the discounted target task reward ${J}^{T} = {\mathbb{E}}_{\pi }\left\lbrack {\mathop{\sum }\limits_{{l = 0}}^{{L - 1}}{\gamma }^{l}R\left( {{s}_{l}^{T},{a}_{l}^{T}}\right) }\right\rbrack$ for an episode of length $L$ . We account for different state-action spaces $\left( {{s}^{S},{a}^{S}}\right)$ vs. $\left( {{s}^{T},{a}^{T}}\right)$ between source and target, but drop the superscript in the following sections, assuming that the context makes it clear whether we are addressing source or target states. In Section 4.3 we describe how we bridge this domain gap. Without loss of generality we assume that the source and target environments are substantially different; sequences of low-level actions that solve a task in the source environment do not lead to high reward in the target environment. Yet, we assume that the demonstrations show a set of semantic skills, which when followed in the target environment can lead to task success. Here the term semantic skill refers to a high-level notion of skill, like "open the microwave" or "turn on the oven", which is independent of the environment-specific low-level actions required to perform it. We further assume that both source and target environment allow for the execution of the same set of semantic skills.
42
+
43
+ Semantic imitation requires an agent to understand the semantic skills performed in the demonstrations. We use task-agnostic datasets ${\mathcal{D}}_{S}$ and ${\mathcal{D}}_{T}$ in the source and target domains to extract such semantic skills. Each ${\mathcal{D}}_{i}$ consists of state-action trajectories collected across a diverse range of prior tasks, e.g., from previously trained policies or teleoperation, as is commonly assumed in prior work [16, 17, 14, 15]. We also assume discrete semantic skill annotations ${k}_{t} \in \mathcal{K}$ , denoting the skill being executed at time step $t$ . These can be collected manually, but we demonstrate how to use pre-trained action recognition models as a more scalable alternative (Sec. 5.2).
44
+
45
+ ## 4 Approach
46
+
47
+ Algorithm 1 STAR (Semantic Transfer Accelerated RL)
48
+
49
+ ---
50
+
51
+ Pre-Train low-level policy ${\pi }^{l}\left( {a \mid s, k, z}\right) \; \vartriangleright$ cf. Sec. 4.1
52
+
53
+ Match source demos to target states $\vartriangleright$ cf. Sec. 4.3
54
+
55
+ Pre-train ${p}^{\text{demo }}\left( {k \mid s}\right) ,{p}^{\mathrm{{TA}}}\left( {k \mid s}\right) ,{p}^{\mathrm{{TA}}}\left( {z \mid s, k}\right) , D\left( s\right) \; \vartriangleright$ cf. Tab. 1
56
+
57
+ for each target train iteration do
58
+
59
+ Collect online experience $\left( {s, k, z, R,{s}^{\prime }}\right)$
60
+
61
+ Update high-level policies with Eq. 3
62
+
63
+ return trained high-level policies ${\pi }^{\text{sem }}\left( {k \mid s}\right) ,{\pi }^{\text{lat }}\left( {z \mid s, k}\right)$
64
+
65
+ ---
66
+
67
+ Our approach STAR imitates demonstrations' semantic skills, instead of low-level actions, to enable cross-domain, semantic imitation. We use a two-layer hierarchical policy with a high-level that outputs the semantic skill and a low-level that executes the skill. We first describe our semantic skill representation, followed by the low-level and high-level policy learning. Algorithm 1 summarizes our approach.
68
+
69
+ ### 4.1 Semantic Skill Representation
70
+
71
+ A skill is characterized by both its semantics, i.e., whether to open the microwave or turn on the stove, as well as the details of its low-level execution, e.g., at what angle to approach the microwave or where to grasp its door handle. Thus, we represent skills via a low-level policy ${\pi }^{l}\left( {a \mid s, k, z}\right)$ which is conditioned on the current environment state $s$ , the semantic skill ID $k$ and a latent variable $z$ which captures the execution details. For example, when "turning on the stove", $a$ are the joint velocities, $s$ is the robot and environment state, $k$ is the semantic skill ID of this skill, and $z$ captures the robot hand orientation as it interacts with the stove. Figure 2, left depicts the training setup for ${\pi }^{l}$ . We randomly sample an $H$ -step state-action subsequence $\left( {{s}_{0 : H - 1},{a}_{0 : H - 2}}\right)$ from ${\mathcal{D}}_{T}$ . An inference network $q\left( {z \mid s, a, k}\right)$ encodes the sequence into a latent representation $z$ conditioned on the semantic skill ID $k$ at the first time step. $k$ and $z$ are passed to ${\pi }^{l}$ , which reconstructs the sampled actions. A single tuple(k, z)represents a sequence of $H$ steps, since such temporal abstraction facilitates
72
+
73
+ ![01963f3a-c8a2-73eb-a3fd-293db5aaef01_3_315_208_1177_369_0.jpg](images/01963f3a-c8a2-73eb-a3fd-293db5aaef01_3_315_208_1177_369_0.jpg)
74
+
75
+ Figure 2: Model overview for pre-training (left) and target task learning (right). We pre-train a semantic skill policy ${\pi }^{l}$ (grey) and use it to decode actions from the learned high-level policies ${\pi }^{\text{sem }}$ and ${\pi }^{\text{lat }}$ (blue and yellow) during target task learning. See training details in the main text.
76
+
77
+ long-horizon imitation [14], leading to the following objective:
78
+
79
+ $$
80
+ {\mathcal{L}}_{{\pi }_{l}} = {\mathbb{E}}_{q}\left\lbrack {\mathop{\prod }\limits_{{t = 0}}^{{H - 2}}\log {\pi }^{l}\left( {{a}_{t} \mid {s}_{t}, k, z}\right) }\right\rbrack - \beta {D}_{\mathrm{{KL}}}\left( {q\left( {z \mid {s}_{0 : H - 1},{a}_{0 : H - 2}, k}\right) , p\left( z\right) }\right) . \tag{1}
81
+ $$
82
+
83
+ Here ${D}_{\mathrm{{KL}}}$ denotes the Kullback-Leibler divergence. We use a simple unit Gaussian prior $p\left( z\right)$ and a weighting factor $\beta$ for the regularization objective [30]. The semantic skill ID $k$ is pre-defined, discrete and labelled, while the latent $z$ is learned and continuous. In this way, our formulation captures discrete aspects of manipulation skills (open a microwave vs. turn on a stove) while being able to continuously modulate each semantic skill (e.g., different ways of approaching the microwave).
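A minimal sketch of one evaluation of the objective in Eq. 1, written with a sum of per-step action log-likelihoods; the encoder and skill-policy interfaces (returning torch distributions) and the value of β are assumptions.

```python
import torch
import torch.distributions as D


def skill_vae_loss(encoder, skill_policy, states, actions, skill_id, beta: float = 5e-4):
    """states: (H, Ds); actions: (H-1, Da); skill_id: semantic skill label k at the first step."""
    mu, log_std = encoder(states, actions, skill_id)        # q(z | s_{0:H-1}, a_{0:H-2}, k)
    q_z = D.Normal(mu, log_std.exp())
    z = q_z.rsample()                                       # reparameterized sample
    # Reconstruction: sum of action log-likelihoods under the low-level skill policy.
    rec = sum(skill_policy(states[t], skill_id, z).log_prob(actions[t]).sum()
              for t in range(actions.shape[0]))
    kl = D.kl_divergence(q_z, D.Normal(torch.zeros_like(mu), torch.ones_like(mu))).sum()
    return -(rec - beta * kl)                               # minimize the negative objective
```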
84
+
85
+ ### 4.2 Semantic Transfer Accelerated RL
86
+
87
+ After pre-training the low-level policy ${\pi }^{l}\left( {a \mid s, k, z}\right)$ , we learn the high-level policy using the source domain demonstrations. Concretely, we train a policy ${\pi }^{h}\left( {k, z \mid s}\right)$ that predicts tuples(k, z)which get executed via ${\pi }^{l}$ . Note that unlike prior work [14], our high-level policy outputs both, the semantic skill $k$ and the low-level execution latent $z$ . It is thus able to choose which semantic skill to execute and tailor its execution to the target domain. Cross-domain demonstrations solely guide the semantic skill choice, since the low-level execution might vary between source and target domains. Thus, we factorize ${\pi }^{h}$ into a semantic sub-policy ${\pi }^{\text{sem }}\left( {k \mid s}\right)$ and a latent, non-semantic sub-policy ${\pi }^{\text{lat }}\left( {z \mid s, k}\right)$ :
88
+
89
+ $$
90
+ \pi \left( {a \mid s}\right) = \underset{\text{skill policy }}{\underbrace{{\pi }^{l}\left( {a \mid s, k, z}\right) }} \cdot \underset{\text{high-level policy }{\pi }^{h}\left( {k, z \mid s}\right) }{\underbrace{{\pi }^{\text{lat }}\left( {z \mid s, k}\right) {\pi }^{\text{sem }}\left( {k \mid s}\right) }}. \tag{2}
91
+ $$
92
+
93
+ Intuitively, this can be thought of as first deciding what skill to execute (e.g., open the microwave), followed by how to execute it. We pre-train multiple models via supervised learning for training ${\pi }^{h}$ : (1) two semantic skill priors ${p}^{\text{demo }}\left( {k \mid s}\right)$ and ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ , trained to infer the semantic skill annotations from demonstrations and task-agnostic dataset ${\mathcal{D}}_{T}$ respectively,(2) a task-agnostic prior ${p}^{\mathrm{{TA}}}\left( {z \mid s, k}\right)$ over the latent skill variable $z$ , trained to match the output of the inference network on ${\mathcal{D}}_{T}$ and (3) a discriminator $D\left( s\right)$ , trained to classify whether a state is part of the demonstration trajectories. We summarize all pre-trained components and their supervised training objectives in Appendix, Table 1.
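A minimal sketch of sampling an action under the factorization in Eq. 2; all three policy modules are placeholders assumed to return torch distributions.

```python
import torch


@torch.no_grad()
def sample_action(pi_sem, pi_lat, pi_low, state):
    """a ~ pi_l(a | s, k, z) with k ~ pi_sem(k | s) and z ~ pi_lat(z | s, k)."""
    k = pi_sem(state).sample()        # what to do: discrete semantic skill ID
    z = pi_lat(state, k).sample()     # how to do it: continuous execution latent
    a = pi_low(state, k, z).sample()  # low-level action decoded by the frozen skill policy
    return a, k, z
```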
94
+
95
+ We provide an overview of our semantic imitation architecture and the used regularization terms in Figure 2, right. We build on the idea of weighted policy regularization with a learned demonstration support estimator from Pertsch et al. [14] (for a brief summary, see appendix B). We regularize the high-level semantic policy ${\pi }^{\text{sem }}$ (blue) towards the demonstration skill distribution ${p}^{\text{demo }}\left( {k \mid s}\right)$ when $D\left( s\right)$ classifies the current state as part of the demonstrations (green). For states which $D\left( s\right)$ classifies as outside the demonstration support, we regularize ${\pi }^{\text{sem }}$ towards the task-agnostic prior ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ (red). We always regularize the non-semantic sub-policy ${\pi }^{\text{lat }}\left( {z \mid s, k}\right)$ (yellow) towards the task-agnostic prior ${p}^{\mathrm{{TA}}}\left( {z \mid s, k}\right)$ , since execution-specific information cannot be transferred across
96
+
97
+ domains. The overall optimization objective for ${\pi }^{h}$ is:
98
+
99
+ $$
+ {\mathbb{E}}_{{\pi }^{h}}\Big\lbrack \widetilde{r}\left( {s, a}\right) \underset{\text{demonstration regularization}}{\underbrace{-{\alpha }_{q}{D}_{\mathrm{KL}}\left( {\pi }^{\text{sem}}\left( {k \mid s}\right) ,{p}^{\text{demo}}\left( {k \mid s}\right) \right) \cdot D\left( s\right) }}\underset{\text{task-agnostic semantic prior regularization}}{\underbrace{-{\alpha }_{p}{D}_{\mathrm{KL}}\left( {\pi }^{\text{sem}}\left( {k \mid s}\right) ,{p}^{\mathrm{TA}}\left( {k \mid s}\right) \right) \cdot \left( {1 - D\left( s\right) }\right) }}\underset{\text{task-agnostic execution prior regularization}}{\underbrace{-{\alpha }_{l}{D}_{\mathrm{KL}}\left( {\pi }^{\mathrm{lat}}\left( {z \mid s, k}\right) ,{p}^{\mathrm{TA}}\left( {z \mid s, k}\right) \right) }}\Big\rbrack . \tag{3}
+ $$
106
+
107
+ ${\alpha }_{q},{\alpha }_{p}$ and ${\alpha }_{l}$ are either fixed or automatically tuned via dual gradient descent. We augment the target task reward using the discriminator $D\left( s\right)$ to encourage the policy to reach states within the demonstration support: $\widetilde{r}\left( {s, a}\right) = \left( {1 - \kappa }\right) \cdot R\left( {s, a}\right) + \kappa \cdot \left\lbrack {\log D\left( s\right) - \log \left( {1 - D\left( s\right) }\right) }\right\rbrack$ . In the setting with no target environment rewards (pure imitation learning), we rely solely on this discriminator reward for policy training (Section D). For a summary of the full procedure, see Algorithm 2.
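The sketch below illustrates how the shaped reward and the three regularizers of equation 3 could be assembled per state. The distribution objects, the coefficient values, and the use of the raw discriminator logit (which equals $\log D\left( s\right) - \log \left( {1 - D\left( s\right) }\right)$ when $D$ ends in a sigmoid) are assumptions made for this example, not the paper's implementation.

```python
from torch.distributions import kl_divergence


def shaped_reward(env_reward, d_logit, kappa=0.5):
    """r~(s, a) = (1 - kappa) * R(s, a) + kappa * [log D(s) - log(1 - D(s))].

    If D(s) = sigmoid(d_logit), the bracketed term equals d_logit, so the logit
    can be used directly without evaluating log(1 - D(s))."""
    return (1.0 - kappa) * env_reward + kappa * d_logit


def regularization_terms(pi_sem, p_demo, p_ta_k, pi_lat, p_ta_z, d_prob,
                         alpha_q=1.0, alpha_p=1.0, alpha_l=1.0):
    """Per-state regularizers of equation 3, to be subtracted from the shaped reward.

    pi_sem, p_demo, p_ta_k: Categorical distributions over skills k.
    pi_lat, p_ta_z: diagonal Gaussians over z (elementwise KL, summed below).
    d_prob: discriminator output D(s) in [0, 1]."""
    demo_reg = alpha_q * kl_divergence(pi_sem, p_demo) * d_prob            # on demo support
    prior_reg = alpha_p * kl_divergence(pi_sem, p_ta_k) * (1.0 - d_prob)   # off demo support
    exec_reg = alpha_l * kl_divergence(pi_lat, p_ta_z).sum(-1)             # always applied
    return demo_reg + prior_reg + exec_reg
```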
108
+
109
+ The final challenge is that the discriminator $D\left( s\right)$ and the prior ${p}^{\text{demo }}\left( {k \mid s}\right)$ are trained on states from the source domain, but need to be applied to the target domain. Since the domains differ substantially, we cannot expect the pre-trained networks to generalize. Instead, we need to explicitly bridge the state domain gap, as described next.
110
+
111
+ ### 4.3 Cross-Domain State Matching
112
+
113
+ ![01963f3a-c8a2-73eb-a3fd-293db5aaef01_4_919_814_544_501_0.jpg](images/01963f3a-c8a2-73eb-a3fd-293db5aaef01_4_919_814_544_501_0.jpg)
114
+
115
+ Figure 3: State matching between source and target domain. For every source domain state from the demonstrations, we compute the task-agnostic semantic skill distribution ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ and find the target domain state with the most similar semantic skill distribution from the task-agnostic dataset ${\mathcal{D}}_{T}$ . We then relabel the demonstrations with these matched states from the target domain.
116
+
117
+ Demonstrations help guide the policy's decisions when prompted with multiple possible skills to execute. For example, if choosing between "open the microwave" or "turn on the stove", we want to find the demonstration state that has the same two skill choices and then guide the policy towards the demonstrated skill. In short, we want to use demonstrations to guide the exploration of semantic skills in semantically similar states, i.e., states with similar skills to choose from in source and target environments.
118
+
119
+ Following this intuition, we find corresponding states based on the similarity between the task-agnostic semantic skill prior distributions ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ . We illustrate an example in Figure 3: for a given source demonstration state ${s}^{S}$ with high likelihood of opening the microwave, we find a target domain state ${s}^{T}$ that has high likelihood of opening the microwave, by minimizing the symmetric KL divergence between the task-agnostic skill distributions (we omit ${\left( \cdot \right) }^{\mathrm{{TA}}}$ for brevity):
120
+
121
+ $$
122
+ \mathop{\min }\limits_{{{s}^{T} \in {\mathcal{D}}_{T}}}{D}_{\mathrm{{KL}}}\left( {{p}_{T}\left( {k \mid {s}^{T}}\right) ,{p}_{S}\left( {k \mid {s}^{S}}\right) }\right) + {D}_{\mathrm{{KL}}}\left( {{p}_{S}\left( {k \mid {s}^{S}}\right) ,{p}_{T}\left( {k \mid {s}^{T}}\right) }\right) \tag{4}
123
+ $$
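A minimal sketch of this matching step is shown below, assuming the skill distributions are given as normalized probability vectors; the small epsilon that guards against zero entries is an implementation detail of the example, not part of equation 4.

```python
import numpy as np


def symmetric_kl(p, q, eps=1e-8):
    """D_KL(p, q) + D_KL(q, p) for two categorical skill distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))


def match_source_state(p_source, target_skill_dists):
    """Return the index of the target-domain state in D_T whose task-agnostic
    skill distribution is closest to that of a source demonstration state (equation 4)."""
    costs = [symmetric_kl(p_t, p_source) for p_t in target_skill_dists]
    return int(np.argmin(costs))
```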
124
+
125
+ In practice, states can be matched incorrectly when the task-agnostic prior assigns one skill a much higher probability than the others. In such states, the divergence in equation 4 is dominated by that skill, and the others are ignored, causing matching errors. Using a state's temporal context can result in more robust correspondences by reducing the influence of high-likelihood skills in any single state. We compute an aggregated skill distribution $\phi \left( {k \mid s}\right)$ using a temporal window around the current state:
126
+
127
+ $$
128
+ \phi \left( {k \mid {s}_{t}}\right) = \frac{1}{Z\left( s\right) }\left( {\mathop{\sum }\limits_{{i = t}}^{T}{\gamma }_{ + }^{i}p\left( {k \mid {s}_{i}}\right) + \mathop{\sum }\limits_{{j = 1}}^{{t - 1}}{\gamma }_{ - }^{t - j}p\left( {k \mid {s}_{t - j}}\right) }\right) \tag{5}
129
+ $$
130
+
131
+ Here, ${\gamma }_{ + },{\gamma }_{ - } \in \left\lbrack {0,1}\right\rbrack$ determine the forward and backward horizon of the aggregate skill distribution. $Z\left( s\right)$ ensures that the aggregate probability distribution sums to one. Instead of ${p}^{\mathrm{{TA}}}$ in equation 4, we
132
+
133
+ use $\phi \left( {k \mid s}\right)$ . By matching all source-domain demonstration states to states in the target domain via $\phi \left( {k \mid s}\right)$ , we create a proxy dataset of target state demonstrations, which we use to pre-train the models ${p}^{\text{demo }}\left( {k \mid s}\right)$ and $D\left( s\right)$ . Once trained, we use them for training the high-level policy via equation 3.
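The aggregation of equation 5 can be sketched as follows. We read the discount exponents as the temporal distance from the current step $t$, which is one plausible interpretation of the equation; the window is truncated at the trajectory boundaries, and the final normalization plays the role of $Z\left( s\right)$.

```python
import numpy as np


def aggregate_skill_distribution(p_seq, t, gamma_fwd=0.9, gamma_bwd=0.9):
    """phi(k | s_t): discount-weighted average of per-state skill distributions
    p(k | s_i) over a temporal window around step t (equation 5)."""
    phi = np.zeros_like(p_seq[t])
    for i in range(t, len(p_seq)):       # current and future states
        phi += gamma_fwd ** (i - t) * p_seq[i]
    for j in range(1, t):                # past states s_{t-1}, ..., s_1
        phi += gamma_bwd ** j * p_seq[t - j]
    return phi / phi.sum()               # Z(s) normalization
```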
134
+
135
+ ## 5 Experiments
136
+
137
+ ![01963f3a-c8a2-73eb-a3fd-293db5aaef01_5_773_345_715_436_0.jpg](images/01963f3a-c8a2-73eb-a3fd-293db5aaef01_5_773_345_715_436_0.jpg)
138
+
139
+ Figure 4: We evaluate on three pairs of source (top) and target (bottom) environments. Left: maze navigation. The agent needs to follow a sequence of colored rooms (red path) but the maze layout changes substantially between source and target domains. Middle: kitchen manipulation. A robotic arm executes a sequence of skills, but the layout of the kitchens differs. Right: Same as before, but with human demonstrations from a real-world kitchen.
140
+
141
+ Our experiments are designed to answer the following questions: (1) Can we leverage demonstrations across domains to accelerate learning via semantic imitation? (2) Can we use semantic imitation to teach a robot a new task from real-world videos of humans performing the task? (3) Is our approach robust to missing skills in the demonstrations? We test semantic imitation across two simulated maze and kitchen environments, as well as from real-world videos of humans to a simulated robot. Our results show that our approach can accelerate learning from cross-domain demonstrations, even with a real-to-sim gap.
142
+
143
+ ### 5.1 Cross-Domain Imitation in Simulation
144
+
145
+ We first test our approach STAR in two simulated settings: a maze navigation and a robot kitchen manipulation task (see Figure 4, left & middle). In the maze navigation task, both domains have corresponding rooms, indicated by their color in Figure 4. The agent needs to follow a sequence of semantic skills like "go to red room", "go to green room" etc. In the kitchen manipulation task, a Franka arm tackles long-horizon manipulation tasks in a simulated kitchen [3]. We define 7 semantic skills, like "open the microwave" or "turn on the stove" in the source and target environments. In both environments we collect demonstrations in the source domain, and task-agnostic datasets in both the source and target domains using motion planners and human teleoperation respectively. For further details on action and observation spaces, rewards and data collection, see Sec C.4.
146
+
147
+ We compare our approach to multiple prior skill-based RL approaches with and without demonstration guidance: SPiRL [16] learns skills from ${\mathcal{D}}_{T}$ and then trains a high-level policy over skills; BC+RL [7, 8] pre-trains with behavioral cloning and finetunes with SAC [31]; SkillSeq, similar to Xu et al. [18], sequentially executes the semantic skills as demonstrated; SkiLD [14] is an oracle with access to demonstrations in the target domain and follows them using learned skills. For more details on the implementation of our approach and all comparisons, see appendix, Sections C.1 - C.3.
148
+
149
+ Figure 5, left, compares the performance of all approaches in both tasks. BC+RL is unable to leverage the cross-domain demonstrations and makes no progress on the task. SPiRL is able to learn the kitchen manipulation task, but requires many more environment interactions to reach the same performance as our approach. SkillSeq succeeds in approximately 20% of the maze episodes and solves on average 3 out of 4 subtasks in the kitchen manipulation environment after fine-tuning. The mixed success is due to inaccuracies in execution of the skill policies. Our approach, STAR, can use cross-domain demonstrations to match the learning efficiency of SkiLD (oracle) that has access to target domain demonstrations. This shows that our approach is effective at extracting useful information from cross-domain demonstrations. We find that this trend holds even in the "pure" imitation learning (IL) setting without environment rewards, where we solely rely on the learned discriminator reward to guide learning (see appendix, Section D for detailed results). Thus,
150
+
151
+ ![01963f3a-c8a2-73eb-a3fd-293db5aaef01_6_308_206_1183_354_0.jpg](images/01963f3a-c8a2-73eb-a3fd-293db5aaef01_6_308_206_1183_354_0.jpg)
152
+
153
+ Figure 5: Left: Performance on the simulated semantic imitation tasks. STAR matches the performance of the oracle, SkiLD, which has access to target domain demonstrations, and outperforms both SPiRL, which does not use demonstrations, and SkillSeq, which follows the demonstrated semantic skills sequentially. Right: Ablations in the kitchen environment; see main text for details.
154
+
155
+ ![01963f3a-c8a2-73eb-a3fd-293db5aaef01_6_312_712_1174_280_0.jpg](images/01963f3a-c8a2-73eb-a3fd-293db5aaef01_6_312_712_1174_280_0.jpg)
156
+
157
+ Figure 6: Semantic imitation from human demonstrations. Left: Qualitative state matching results. The top row displays frames subsampled from a task demonstration in the human kitchen source domain. The bottom row visualizes the states matched to the source frames via the procedure described in Section 4.3. The matched states represent corresponding semantic scenes in which the agent e.g., opens the microwave, turns on the stove or opens the cabinet. Right: Quantitative results on the kitchen manipulation task from human video demonstrations.
158
+
159
+ STAR can be used both as a demonstration-guided RL algorithm and for cross-domain imitation learning. Qualitative results can be viewed at https://tinyurl.com/star-rl and in Figure 8.
160
+
161
+ To study the different components of our approach, we run ablations in the FrankaKitchen environment (Fig. 5, right). Removing the discriminator-based weighting for the demonstration regularization (-D-weight) (Eq. 3) or removing the demonstration regularization altogether (-DemoReg) leads to poor performance. In contrast, removing the discriminator-based dense reward (-D-reward) or temporal aggregation during matching (-TempAgg) affects learning speed but has the same asymptotic performance. Finally, a model without the latent variable $z$ ($-\mathbf{z}$) cannot model the diversity of skill executions in the data; the resulting skills are too imprecise to learn long-horizon tasks. We show qualitative examples of the effect of varying matching window sizes $\left\lbrack {{\gamma }_{ - },{\gamma }_{ + }}\right\rbrack$ on the project website: https://tinyurl.com/star-rl.
162
+
163
+ ### 5.2 Imitation from Human Demonstrations
164
+
165
+ In this section we ask: can our approach be used to leverage human video demonstrations for teaching new tasks to robots? Imitating human demonstrations presents a larger challenge since it requires bridging domain differences that span observation spaces (from images in the real-world to low-dimensional states in simulation), agent morphologies (from a bimanual human to a 7DOF robot arm), and environments (from the real-world to a simulated robotic environment). To investigate this question, we collect 20 human video demonstrations in a real-world kitchen, which demonstrate a task the robotic agent needs to learn in the target simulated domain. Instead of collecting a large, task-agnostic dataset in the human source domain and manually annotating semantic skill labels, we demonstrate a more scalable alternative: we use an action recognition model, pre-trained on the EPIC Kitchens dataset [32], zero-shot to predict semantic skill distributions on the human demonstration videos. We define a mapping from the 97 verb and 300 noun classes in EPIC Kitchens to the skills present in the target domain and then use our approach as described in Section 4.2, using the EPIC skill distributions as the task-agnostic skill prior ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ . For data collection details, see Section C.4.
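As an illustration of how such a mapping could turn the zero-shot verb/noun predictions into a skill distribution, consider the hedged sketch below. The class names, the listed (verb, noun) pairs, and the assumption that a skill's probability is the product of its verb and noun probabilities are illustrative choices only; the paper states just that the 97 verb and 300 noun classes are mapped to the target-domain skills.

```python
import numpy as np

# Illustrative (verb, noun) -> target-skill pairs; the real mapping covers all
# 97 verb and 300 noun classes of EPIC Kitchens.
EPIC_TO_SKILL = {
    ("open", "microwave"): "open_microwave",
    ("turn-on", "hob"): "turn_on_stove",
    ("open", "cupboard"): "open_cabinet",
}
SKILLS = ["open_microwave", "turn_on_stove", "open_cabinet"]  # example target skill set


def epic_skill_distribution(verb_probs, noun_probs):
    """Map per-frame verb/noun probabilities (dicts keyed by class name) to p(k | frame)."""
    p = np.zeros(len(SKILLS))
    for (verb, noun), skill in EPIC_TO_SKILL.items():
        p[SKILLS.index(skill)] += verb_probs.get(verb, 0.0) * noun_probs.get(noun, 0.0)
    if p.sum() == 0.0:
        return np.full(len(SKILLS), 1.0 / len(SKILLS))  # no mapped mass: fall back to uniform
    return p / p.sum()
```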
166
+
167
+ We visualize qualitative matching results between the domains in Figure 6, left. We successfully match frames to the corresponding semantic states in the target domain. In Figure 6, right, we show that this leads to successful semantic imitation of the human demonstrations. Our approach STAR with EPIC Kitchens auto-generated skill distributions is able to reach the same asymptotic performance as the oracle approach that has access to target domain demonstrations, with only slightly reduced learning speed. It also outperforms the SkillSeq and SPiRL baselines (for qualitative results see https://tinyurl.com/star-rl).
168
+
169
+ To recap: for this experiment we did not collect a large, task-agnostic human dataset and we did not manually annotate any human videos. Collecting a few human demonstrations in an unseen kitchen was sufficient to substantially accelerate learning of the target task on the robot in simulation. This demonstrates one avenue for scaling robot learning by (1) learning from easy-to-collect human video demonstrations and (2) using pre-trained skill prediction models to bridge the domain gap.
170
+
171
+ ### 5.3 Robustness to Noisy Demonstrations and Labels
172
+
173
+ ![01963f3a-c8a2-73eb-a3fd-293db5aaef01_7_1009_752_476_252_0.jpg](images/01963f3a-c8a2-73eb-a3fd-293db5aaef01_7_1009_752_476_252_0.jpg)
174
+
175
+ Figure 7: Semantic imitation with missing skills in the demonstrations. Our approach STAR still learns the full task faster than learning without demonstrations (SPiRL), while SkillSeq gets stuck at the missing skill.
176
+
177
+ In realistic scenarios, agents often need to cope with noisy demonstration data, e.g., partial demonstrations or faulty labels. Thus, we test STAR's ability to handle such noise. First, we test imitation from partial demonstrations with missing subskills. These commonly occur when there are large differences between the source and target domains, e.g., the demonstration domain might already have a pot on the stove, so the demonstration starts with "turn on the stove", while in the target domain we first need to place the pot on the stove. We test this in the simulated kitchen tasks by dropping individual subskills from the demonstrations (‘w/o Task i’ in Figure 7). Figure 7 shows that the SkillSeq approach struggles with such noise: it gets stuck whenever the corresponding skill is missing from the demonstration. In contrast, STAR can leverage demonstrations that are missing entire subskills and still learns faster than the no-demonstration baseline SPiRL. When a skill is missing, the STAR agent finds itself off the demonstration support. The objective in equation 3 then regularizes the policy towards the task-agnostic skill prior, encouraging the agent to explore until it finds its way back to the demonstration support. This allows our method to bridge "holes" in the demonstrations. We also test STAR's robustness to noisy semantic skill labels in Section E. We find that STAR is robust to errors in the annotated skill lengths and to uncertain skill detections. Only frequent, high-confidence mis-detections of skills can lead to erroneous matches and decreased performance. Both experiments show that STAR's guidance with semantic demonstrations is robust to noise in the training and demonstration data.
178
+
179
+ ## 6 Conclusion and Limitations
180
+
181
+ In this work, we presented STAR, an approach for imitation based on semantic skills that can use cross-domain demonstrations for accelerating RL. STAR is effective on multiple semantic imitation problems, including using real-world human demonstration videos for learning a robotic kitchen manipulation task. Our results present a promising way to use large-scale human video datasets like EPIC Kitchens [32] for behavior learning in robotics. However, our approach assumes a pre-defined set of semantic skills and semantic skill labels on the training data. We demonstrated how such assumptions can be reduced via the use of pre-trained skill prediction models. Yet, obtaining such semantic information from cheaper-to-collect natural language descriptions of the training trajectories without a pre-defined skill set is an exciting direction for future work. Additionally, strengthening the robustness to skill mis-labelings, e.g., via a more robust state matching mechanism, can further improve performance on noisy, real-world datasets.
182
+
183
+ References
184
+
185
+ [1] P. Sharma, L. Mohan, L. Pinto, and A. Gupta. Multiple interactions made easy (mime): Large scale demonstrations data for imitation. In Conference on robot learning, pages 906-915. PMLR, 2018.
186
+
187
+ [2] A. Mandlekar, Y. Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta, E. Orbay, S. Savarese, and L. Fei-Fei. Roboturk: A crowdsourcing platform for robotic skill learning through imitation. In CoRL, 2018.
188
+
189
+ [3] A. Gupta, V. Kumar, C. Lynch, S. Levine, and K. Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. CoRL, 2019.
190
+
191
+ [4] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and autonomous systems, 57(5):469-483, 2009.
192
+
193
+ [5] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In Proceedings of Neural Information Processing Systems (NeurIPS), pages 305-313, 1989.
194
+
195
+ [6] J. Ho and S. Ermon. Generative adversarial imitation learning. NeurIPS, 2016.
196
+
197
+ [7] A. Rajeswaran, V. Kumar, A. Gupta, G. Vezzani, J. Schulman, E. Todorov, and S. Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. In Robotics: Science and Systems, 2018.
198
+
199
+ [8] A. Nair, B. McGrew, M. Andrychowicz, W. Zaremba, and P. Abbeel. Overcoming exploration in reinforcement learning with demonstrations. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 6292-6299. IEEE, 2018.
200
+
201
+ [9] Y. Zhu, Z. Wang, J. Merel, A. Rusu, T. Erez, S. Cabi, S. Tunyasuvunakool, J. Kramár, R. Hadsell, N. de Freitas, and N. Heess. Reinforcement and imitation learning for diverse visuomotor skills. In Robotics: Science and Systems, 2018.
202
+
203
+ [10] X. B. Peng, P. Abbeel, S. Levine, and M. van de Panne. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG), 37(4):1-14, 2018.
204
+
205
+ [11] R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181-211, 1999.
206
+
207
+ [12] P.-L. Bacon, J. Harb, and D. Precup. The option-critic architecture. In AAAI, 2017.
208
+
209
+ [13] O. Nachum, S. S. Gu, H. Lee, and S. Levine. Data-efficient hierarchical reinforcement learning. NeurIPS, 2018.
210
+
211
+ [14] K. Pertsch, Y. Lee, Y. Wu, and J. J. Lim. Demonstration-guided reinforcement learning with learned skills. In Conference on Robot Learning (CoRL), 2021.
212
+
213
+ [15] K. Hakhamaneshi, R. Zhao, A. Zhan, P. Abbeel, and M. Laskin. Hierarchical few-shot imitation with skill transition models. arXiv preprint arXiv:2107.08981, 2021.
214
+
215
+ [16] K. Pertsch, Y. Lee, and J. J. Lim. Accelerating reinforcement learning with learned skill priors. In Conference on Robot Learning (CoRL), 2020.
216
+
217
+ [17] A. Ajay, A. Kumar, P. Agrawal, S. Levine, and O. Nachum. Opal: Offline primitive discovery for accelerating offline reinforcement learning. arXiv preprint arXiv:2010.13611, 2020.
218
+
219
+ [18] D. Xu, S. Nair, Y. Zhu, J. Gao, A. Garg, L. Fei-Fei, and S. Savarese. Neural task programming: Learning to generalize across hierarchical tasks. In 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018.
220
+
221
+ [19] D.-A. Huang, S. Nair, D. Xu, Y. Zhu, A. Garg, L. Fei-Fei, S. Savarese, and J. C. Niebles. Neural task graphs: Generalizing to unseen tasks from a single video demonstration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
222
+
223
+ [20] X. B. Peng, E. Coumans, T. Zhang, T.-W. Lee, J. Tan, and S. Levine. Learning agile robotic locomotion skills by imitating animals. RSS, 2020.
224
+
225
+ [21] L. Smith, N. Dhawan, M. Zhang, P. Abbeel, and S. Levine. Avid: Learning multi-stage tasks via pixel-level translation of human videos. arXiv preprint arXiv:1912.04443, 2019.
226
+
227
+ [22] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier. Model-based inverse reinforcement learning from visual demonstrations. CoRL, 2020.
228
+
229
+ [23] Y. Duan, M. Andrychowicz, B. C. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba. One-shot imitation learning. NeurIPS, 2017.
230
+
231
+ [24] P. Sharma, D. Pathak, and A. Gupta. Third-person visual imitation learning via decoupled hierarchical controller. NeurIPS, 2019.
232
+
233
+ [25] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel, and S. Levine. One-shot imitation from observing humans via domain-adaptive meta-learning. arXiv preprint arXiv:1802.01557, 2018.
234
+
235
+ [26] P. Sermanet, C. Lynch, Y. Chebotar, J. Hsu, E. Jang, S. Schaal, S. Levine, and G. Brain. Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE international conference on robotics and automation (ICRA), 2018.
236
+
237
+ [27] A. S. Chen, S. Nair, and C. Finn. Learning generalizable robotic reward functions from "in-the-wild" human videos. RSS, 2021.
238
+
239
+ [28] K. Schmeckpeper, O. Rybkin, K. Daniilidis, S. Levine, and C. Finn. Reinforcement learning with videos: Combining offline observations with interaction. CoRL, 2020.
240
+
241
+ [29] T. Yu, P. Abbeel, S. Levine, and C. Finn. One-shot hierarchical imitation learning of compound visuomotor tasks. RSS, 2018.
242
+
243
+ [30] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.
244
+
245
+ [31] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. ICML, 2018.
246
+
247
+ [32] D. Damen, H. Doughty, G. M. Farinella, A. Furnari, J. Ma, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray. Rescaling egocentric vision: Collection, pipeline and challenges for epic-kitchens-100. International Journal of Computer Vision (IJCV), 2021.
248
+
249
+ [33] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.
250
+
251
+ [34] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han. On the variance of the adaptive learning rate and beyond. In ICLR, 2020.
252
+
253
+ [35] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
254
+
255
+ [36] Y. Lee, A. Szot, S.-H. Sun, and J. J. Lim. Generalizable imitation learning from observation via inferring goal proximity. Advances in Neural Information Processing Systems, 34, 2021.
256
+
257
+ [37] K. Lowrey, A. Rajeswaran, S. Kakade, E. Todorov, and I. Mordatch. Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control. In ICLR, 2019.
258
+
259
+ [38] H. Fan, Y. Li, B. Xiong, W.-Y. Lo, and C. Feichtenhofer. PySlowFast, 2020.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/AdFROt9BoqE/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,177 @@
1
+ § CROSS-DOMAIN TRANSFER VIA SEMANTIC SKILL IMITATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We propose an approach for semantic imitation, which uses demonstrations from a source domain, e.g., human videos, to accelerate reinforcement learning (RL) in a different target domain, e.g., a robotic manipulator in a simulated kitchen. Instead of imitating low-level actions like joint velocities, our approach imitates the sequence of demonstrated semantic skills like "opening the microwave" or "turning on the stove". This allows us to transfer demonstrations across environments (e.g., real-world to simulated kitchen) and agent embodiments (e.g., bimanual human demonstration to robotic arm). We evaluate on three challenging cross-domain learning problems and match the performance of demonstration-accelerated RL approaches that require in-domain demonstrations. In a simulated kitchen environment, our approach learns long-horizon robot manipulation tasks, using less than 3 minutes of human video demonstrations from a real-world kitchen. This enables scaling robot learning via the reuse of demonstrations, e.g., collected as human videos, for learning in any number of target domains.
12
+
13
+ Keywords: Reinforcement Learning, Imitation, Transfer Learning
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ < g r a p h i c s >
18
+
19
+ Figure 1: We address semantic imitation, which aims to leverage demonstrations from a source domain, e.g., human video demonstrations, to accelerate the learning of the same tasks in a different target domain, e.g., controlling a robotic manipulator in a simulated kitchen environment.
20
+
21
+ Consider a person imitating an expert in two scenarios: a beginner learning to play tennis, and a chef following a recipe for a new dish. In the former case, when mastering the basic skills of tennis, humans tend to imitate the precise arm movements demonstrated by the expert. In contrast, when operating in a familiar domain, such as a chef learning to cook a new dish, imitation happens on a higher scale. Instead of imitating individual movements, they follow high-level, semantically meaningful skills like "stir the mixture" or "turn on the oven". Such semantic skills generalize across environment layouts, and allow humans to follow demonstrations across substantially different environments.
22
+
23
+ Most works that leverage demonstrations in robotics imitate low-level actions. Demonstrations are typically provided by manually moving the robot [1] or via teleoperation [2]. A critical challenge of this approach is scaling: demonstrations need to be collected in every new environment. On the other hand, imitation of high-level (semantic) skills has the promise of generalization: demonstrations can be collected in one kitchen and applied to any number of kitchens, eliminating the need to re-demonstrate in every new environment. Learning via imitation of high-level skills can lead to scalable and generalizable robot learning.
24
+
25
+ In this work, we present Semantic Transfer Accelerated RL (STAR), which accelerates RL using cross-domain demonstrations by leveraging semantic skills, instead of low-level actions. We consider a setting with significantly different source and target environments. Figure 1 shows an example: a robot arm learns to do a kitchen manipulation task by following a visual human demonstration from a different (real-world) kitchen. An approach that follows the precise arm movements of the human will fail due to embodiment and environment differences. Yet, by following the demonstrated semantic skills like "open the microwave" and "turn on the stove", our approach can leverage demonstrations despite the domain differences. Like the chef in the above example, we use prior experience for enabling this semantic transfer. We assume access to datasets of prior experience collected across many tasks, in both the source and target domains. From this data, we learn semantic skills like "open the microwave" or "turn on the stove". Next, we collect demonstrations of the task in the source domain and find "semantically similar" states in the target domain. Using this mapping, we learn a policy to follow the demonstrated semantic skills in semantically similar states in the target domain.
26
+
27
+ We present results on two semantic imitation problems in simulation and on real-to-sim transfer from human videos. In simulation, we test STAR in: (1) a maze navigation task across mazes of different layouts and (2) a sequence of kitchen tasks between two variations of the FrankaKitchen environment [3]. In both tasks our approach matches the learning efficiency of methods with in-domain demonstrations, despite only using cross-domain demonstrations. Additionally, we show that a human demonstration video recorded within 3 minutes in a real-world kitchen can accelerate the learning of long-horizon manipulation tasks in the FrankaKitchen by hundreds of thousands of robot environment interactions.
28
+
29
+ In summary, our contributions are twofold: (1) we introduce STAR, an approach for cross-domain transfer via learned semantic skills, (2) we show that STAR can leverage demonstrations across substantially differing domains to accelerate the learning of long-horizon tasks.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ Learning from demonstrations. Learning from Demonstrations (LfD, Argall et al. [4]) is a popular method for learning robot behaviors using demonstrations of the target task, often collected by human operators. Common approaches include behavioral cloning (BC, Pomerleau [5]) and adversarial imitation approaches [6]. A number of works have proposed approaches for combining these imitation objectives with reinforcement learning [7, 8, 9, 10]. However, all of these approaches require demonstrations in the target domain, limiting their applicability to new domains. In contrast, our approach imitates the demonstrations' semantic skills and thus enables transfer across domains.
34
+
35
+ Skill-based Imitation. Using temporal abstraction via skills has a long tradition in hierarchical RL [11, 12, 13]. Skills have also been used for the imitation of long-horizon tasks. Pertsch et al. [14], Hakhamaneshi et al. [15] learn skills from task-agnostic offline experience [16, 17] and imitate demonstrated skills instead of primitive actions. But, since the learned skills do not capture semantic information, they require demonstrations in the target domain. Xu et al. [18], Huang et al. [19] divide long-horizon tasks into subroutines, but struggle if the two domains requires a different sequence of subroutines, e.g., if skill pre-conditions are not met in the target environment. Our approach is robust to such mismatches without requiring demonstrations in the target domain.
36
+
37
+ Cross-Domain Imitation. Peng et al. [20] assume a pre-specified mapping between source and target domain. [21, 22] leverage offline experience to learn mappings, while [23, 24, 25] rely on paired demonstrations. A popular goal is to leverage human videos for robot learning since they are easy to collect at scale. [26, 27] learn reward functions from human demonstrations and Schmeckpeper et al. [28] add human experience to an RL agent's replay buffer, but they only consider short-horizon tasks and rely on environments being similar. Yu et al. [29] meta-learn cross-domain subroutines, but cannot handle different subroutines between source and target. Our approach imitates long-horizon tasks across domains, without a pre-defined mapping, and is robust to different semantic subroutines.
38
+
39
+ § 3 PROBLEM FORMULATION
40
+
41
+ We define a source environment $S$ and a target environment $T$ . In the source domain, we have $N$ demonstrations ${\tau }_{1 : N}^{S}$ , where ${\tau }_{i}^{S} = \left\{ {{s}_{0}^{S},{a}_{0}^{S},{s}_{1}^{S},{a}_{1}^{S},\ldots }\right\}$ are sequences of states ${s}^{S}$ and actions ${a}^{S}$ . Our goal is to leverage these demonstrations to accelerate training of a policy $\pi \left( {s}^{T}\right)$ in the target environment, acting on target states ${s}^{T}$ and predicting actions ${a}^{T}$ . $\pi \left( {s}^{T}\right)$ maximizes the discounted target task reward ${J}^{T} = {\mathbb{E}}_{\pi }\left\lbrack {\mathop{\sum }\limits_{{l = 0}}^{{L - 1}}{\gamma }^{l}R\left( {{s}_{l}^{T},{a}_{l}^{T}}\right) }\right\rbrack$ for an episode of length $L$ . We account for different state-action spaces $\left( {{s}^{S},{a}^{S}}\right)$ vs. $\left( {{s}^{T},{a}^{T}}\right)$ between source and target, but drop the superscript in the following sections, assuming that the context makes it clear whether we are addressing source or target states. In Section 4.3 we describe how we bridge this domain gap. Without loss of generality we assume that the source and target environments are substantially different; sequences of low-level actions that solve a task in the source environment do not lead to high reward in the target environment. Yet, we assume that the demonstrations show a set of semantic skills, which, when followed in the target environment, can lead to task success. Here the term semantic skill refers to a high-level notion of skill, like "open the microwave" or "turn on the oven", which is independent of the environment-specific low-level actions required to perform it. We further assume that both source and target environment allow for the execution of the same set of semantic skills.
42
+
43
+ Semantic imitation requires an agent to understand the semantic skills performed in the demonstrations. We use task-agnostic datasets ${\mathcal{D}}_{S}$ and ${\mathcal{D}}_{T}$ in the source and target domains to extract such semantic skills. Each ${\mathcal{D}}_{i}$ consists of state-action trajectories collected across a diverse range of prior tasks, e.g., from previously trained policies or teleoperation, as is commonly assumed in prior work $\left\lbrack {{16},{17},{14},{15}}\right\rbrack$ . We also assume discrete semantic skill annotations ${k}_{t} \in \mathcal{K}$ , denoting the skill being executed at time step $t$ . These can be collected manually, but we demonstrate how to use pre-trained action recognition models as a more scalable alternative (Sec. 5.2).
44
+
45
+ § 4 APPROACH
46
+
47
+ Algorithm 1 STAR (Semantic Transfer Accelerated RL)
48
+
49
+ Pre-Train low-level policy ${\pi }^{l}\left( {a \mid s,k,z}\right) \; \vartriangleright$ cf. Sec. 4.1
50
+
51
+ Match source demos to target states $\vartriangleright$ cf. Sec. 4.3
52
+
53
+ Pre-train ${p}^{\text{ demo }}\left( {k \mid s}\right) ,{p}^{\mathrm{{TA}}}\left( {k \mid s}\right) ,{p}^{\mathrm{{TA}}}\left( {z \mid s,k}\right) ,D\left( s\right) \; \vartriangleright$ cf. Tab. 1
54
+
55
+ for each target train iteration do
56
+
57
+ Collect online experience $\left( {s,k,z,R,{s}^{\prime }}\right)$
58
+
59
+ Update high-level policies with eq. 3
60
+
61
+ return trained high-level policies ${\pi }^{\text{ sem }}\left( {k \mid s}\right) ,{\pi }^{\text{ lat }}\left( {z \mid s,k}\right)$
62
+
63
+ Our approach STAR imitates demonstrations' semantic skills, instead of low-level actions, to enable cross-domain, semantic imitation. We use a two-layer hierarchical policy with a high-level that outputs the semantic skill and a low-level that executes the skill. We first describe our semantic skill representation, followed by the low-level and high-level policy learning. Algorithm 1 summarizes our approach.
64
+
65
+ § 4.1 SEMANTIC SKILL REPRESENTATION
66
+
67
+ A skill is characterized by both its semantics, i.e., whether to open the microwave or turn on the stove, and the details of its low-level execution, e.g., at what angle to approach the microwave or where to grasp its door handle. Thus, we represent skills via a low-level policy ${\pi }^{l}\left( {a \mid s,k,z}\right)$ which is conditioned on the current environment state $s$ , the semantic skill ID $k$ and a latent variable $z$ which captures the execution details. For example, when "turning on the stove", $a$ are the joint velocities, $s$ is the robot and environment state, $k$ is the semantic skill ID of this skill, and $z$ captures the robot hand orientation as it interacts with the stove. Figure 2, left, depicts the training setup for ${\pi }^{l}$ . We randomly sample an $H$ -step state-action subsequence $\left( {{s}_{0 : H - 1},{a}_{0 : H - 2}}\right)$ from ${\mathcal{D}}_{T}$ . An inference network $q\left( {z \mid s,a,k}\right)$ encodes the sequence into a latent representation $z$ conditioned on the semantic skill ID $k$ at the first time step. $k$ and $z$ are passed to ${\pi }^{l}$ , which reconstructs the sampled actions. A single tuple $(k, z)$ represents a sequence of $H$ steps, since such temporal abstraction facilitates
68
+
69
+ < g r a p h i c s >
70
+
71
+ Figure 2: Model overview for pre-training (left) and target task learning (right). We pre-train a semantic skill policy ${\pi }^{l}$ (grey) and use it to decode actions from the learned high-level policies ${\pi }^{\text{ sem }}$ and ${\pi }^{\text{ lat }}$ (blue and yellow) during target task learning. See training details in the main text.
72
+
73
+ long-horizon imitation [14], leading to the following objective:
74
+
75
+ $$
76
+ {\mathcal{L}}_{{\pi }_{l}} = {\mathbb{E}}_{q}\left\lbrack {\log \mathop{\prod }\limits_{{t = 0}}^{{H - 2}}{\pi }^{l}\left( {{a}_{t} \mid {s}_{t},k,z}\right) }\right\rbrack - \beta {D}_{\mathrm{{KL}}}\left( {q\left( {z \mid {s}_{0 : H - 1},{a}_{0 : H - 2},k}\right) ,p\left( z\right) }\right) . \tag{1}
77
+ $$
78
+
79
+ Here ${D}_{\mathrm{{KL}}}$ denotes the Kullback-Leibler divergence. We use a simple unit Gaussian prior $p\left( z\right)$ and a weighting factor $\beta$ for the regularization objective [30]. The semantic skill ID $k$ is pre-defined, discrete and labelled, while the latent $z$ is learned and continuous. In this way, our formulation captures discrete aspects of manipulation skills (open a microwave vs. turn on a stove) while being able to continuously modulate each semantic skill (e.g., different ways of approaching the microwave).
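A compact sketch of how the objective in equation 1 could be optimized is given below. The interfaces of the inference network and the low-level policy (both assumed to return torch distributions, over $z$ and over actions respectively), the tensor shapes, and the value of $\beta$ are assumptions made for the example.

```python
import torch
from torch.distributions import Normal, kl_divergence


def skill_pretraining_loss(pi_l, q_net, states, actions, k, beta=5e-4):
    """Equation 1: reconstruct the actions of an H-step sub-sequence from (s, k, z),
    with a beta-weighted KL term pulling q(z | s, a, k) towards the unit Gaussian p(z).

    Assumed interfaces: q_net(states, actions, k) returns a diagonal Gaussian over z,
    and pi_l(states, k, z) returns a distribution over per-step actions."""
    q_z = q_net(states, actions, k)                  # q(z | s_{0:H-1}, a_{0:H-2}, k)
    z = q_z.rsample()                                # reparameterized sample
    log_probs = pi_l(states[:, :-1], k, z).log_prob(actions).sum(dim=(-2, -1))
    prior = Normal(torch.zeros_like(q_z.mean), torch.ones_like(q_z.stddev))
    kl = kl_divergence(q_z, prior).sum(-1)
    return (-log_probs + beta * kl).mean()
```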
80
+
81
+ § 4.2 SEMANTIC TRANSFER ACCELERATED RL
82
+
83
+ After pre-training the low-level policy ${\pi }^{l}\left( {a \mid s,k,z}\right)$ , we learn the high-level policy using the source domain demonstrations. Concretely, we train a policy ${\pi }^{h}\left( {k,z \mid s}\right)$ that predicts tuples $(k, z)$ which get executed via ${\pi }^{l}$ . Note that unlike prior work [14], our high-level policy outputs both the semantic skill $k$ and the low-level execution latent $z$ . It is thus able to choose which semantic skill to execute and tailor its execution to the target domain. Cross-domain demonstrations solely guide the semantic skill choice, since the low-level execution might vary between source and target domains. Thus, we factorize ${\pi }^{h}$ into a semantic sub-policy ${\pi }^{\text{ sem }}\left( {k \mid s}\right)$ and a latent, non-semantic sub-policy ${\pi }^{\text{ lat }}\left( {z \mid s,k}\right)$ :
84
+
85
+ $$
86
+ \pi \left( {a \mid s}\right) = \underset{\text{ skill policy }}{\underbrace{{\pi }^{l}\left( {a \mid s,k,z}\right) }} \cdot \underset{\text{ high-level policy }{\pi }^{h}\left( {k,z \mid s}\right) }{\underbrace{{\pi }^{\text{ lat }}\left( {z \mid s,k}\right) {\pi }^{\text{ sem }}\left( {k \mid s}\right) }}. \tag{2}
87
+ $$
88
+
89
+ Intuitively, this can be thought of as first deciding what skill to execute (e.g., open the microwave), followed by how to execute it. We pre-train multiple models via supervised learning for training ${\pi }^{h}$ : (1) two semantic skill priors ${p}^{\text{ demo }}\left( {k \mid s}\right)$ and ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ , trained to infer the semantic skill annotations from the demonstrations and the task-agnostic dataset ${\mathcal{D}}_{T}$ , respectively, (2) a task-agnostic prior ${p}^{\mathrm{{TA}}}\left( {z \mid s,k}\right)$ over the latent skill variable $z$ , trained to match the output of the inference network on ${\mathcal{D}}_{T}$ , and (3) a discriminator $D\left( s\right)$ , trained to classify whether a state is part of the demonstration trajectories. We summarize all pre-trained components and their supervised training objectives in the appendix, Table 1.
90
+
91
+ We provide an overview of our semantic imitation architecture and the regularization terms we use in Figure 2, right. We build on the idea of weighted policy regularization with a learned demonstration support estimator from Pertsch et al. [14] (for a brief summary, see appendix B). We regularize the high-level semantic policy ${\pi }^{\text{ sem }}$ (blue) towards the demonstration skill distribution ${p}^{\text{ demo }}\left( {k \mid s}\right)$ when $D\left( s\right)$ classifies the current state as part of the demonstrations (green). For states which $D\left( s\right)$ classifies as outside the demonstration support, we regularize ${\pi }^{\text{ sem }}$ towards the task-agnostic prior ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ (red). We always regularize the non-semantic sub-policy ${\pi }^{\text{ lat }}\left( {z \mid s,k}\right)$ (yellow) towards the task-agnostic prior ${p}^{\mathrm{{TA}}}\left( {z \mid s,k}\right)$ , since execution-specific information cannot be transferred across
92
+
93
+ domains. The overall optimization objective for ${\pi }^{h}$ is:
94
+
95
+ $$
96
+ {\mathbb{E}}_{{\pi }^{h}}\left\lbrack {\widetilde{r}\left( {s,a}\right) \underset{\text{ demonstration regularization }}{\underbrace{-{\alpha }_{q}{D}_{\mathrm{{KL}}}\left( {{\pi }^{\text{ sem }}\left( {k \mid s}\right) ,{p}^{\text{ demo }}\left( {k \mid s}\right) }\right) \cdot D\left( s\right) }}\underset{\text{ task-agnostic semantic prior regularization }}{\underbrace{-{\alpha }_{p}{D}_{\mathrm{{KL}}}\left( {{\pi }^{\text{ sem }}\left( {k \mid s}\right) ,{p}^{\mathrm{{TA}}}\left( {k \mid s}\right) }\right) \cdot \left( {1 - D\left( s\right) }\right) }},}\right.
97
+ $$
98
+
99
+ $$
100
+ \left. \underset{\text{ task-agnostic execution prior regularization }}{\underbrace{-{\alpha }_{l}{D}_{\mathrm{{KL}}}\left( {{\pi }^{\mathrm{{lat}}}\left( {z \mid s,k}\right) ,{p}^{\mathrm{{TA}}}\left( {z \mid s,k}\right) }\right) }}\right\rbrack . \tag{3}
101
+ $$
102
+
103
+ ${\alpha }_{q},{\alpha }_{p}$ and ${\alpha }_{l}$ are either fixed or automatically tuned via dual gradient descent. We augment the target task reward using the discriminator $D\left( s\right)$ to encourage the policy to reach states within the demonstration support: $\widetilde{r}\left( {s,a}\right) = \left( {1 - \kappa }\right) \cdot R\left( {s,a}\right) + \kappa \cdot \left\lbrack {\log D\left( s\right) - \log \left( {1 - D\left( s\right) }\right) }\right\rbrack$ . In the setting with no target environment rewards (pure imitation learning), we rely solely on this discriminator reward for policy training (Section D). For a summary of the full procedure, see Algorithm 2.
104
+
105
+ The final challenge is that the discriminator $D\left( s\right)$ and the prior ${p}^{\text{ demo }}\left( {k \mid s}\right)$ are trained on states from the source domain, but need to be applied to the target domain. Since the domains differ substantially, we cannot expect the pre-trained networks to generalize. Instead, we need to explicitly bridge the state domain gap, as described next.
106
+
107
+ § 4.3 CROSS-DOMAIN STATE MATCHING
108
+
109
+ < g r a p h i c s >
110
+
111
+ Figure 3: State matching between source and target domain. For every source domain state from the demonstrations, we compute the task-agnostic semantic skill distribution ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ and find the target domain state with the most similar semantic skill distribution from the task-agnostic dataset ${\mathcal{D}}_{T}$ . We then relabel the demonstrations with these matched states from the target domain.
112
+
113
+ Demonstrations help guide the policy's decisions when prompted with multiple possible skills to execute. For example, if choosing between "open the microwave" or "turn on the stove", we want to find the demonstration state that has the same two skill choices and then guide the policy towards the demonstrated skill. In short, we want to use demonstrations to guide the exploration of semantic skills in semantically similar states, i.e., states with similar skills to choose from in source and target environments.
114
+
115
+ Following this intuition, we find corresponding states based on the similarity between the task-agnostic semantic skill prior distributions ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ . We illustrate an example in Figure 3: for a given source demonstration state ${s}^{S}$ with high likelihood of opening the microwave, we find a target domain state ${s}^{T}$ that has high likelihood of opening the microwave, by minimizing the symmetric KL divergence between the task-agnostic skill distributions (we omit ${\left( \cdot \right) }^{\mathrm{{TA}}}$ for brevity):
116
+
117
+ $$
118
+ \mathop{\min }\limits_{{{s}^{T} \in {\mathcal{D}}_{T}}}{D}_{\mathrm{{KL}}}\left( {{p}_{T}\left( {k \mid {s}^{T}}\right) ,{p}_{S}\left( {k \mid {s}^{S}}\right) }\right) + {D}_{\mathrm{{KL}}}\left( {{p}_{S}\left( {k \mid {s}^{S}}\right) ,{p}_{T}\left( {k \mid {s}^{T}}\right) }\right) \tag{4}
119
+ $$
120
+
121
+ In practice, states can be matched incorrectly when the task-agnostic prior assigns one skill a much higher probability than the others. In such states, the divergence in equation 4 is dominated by that skill, and the others are ignored, causing matching errors. Using a state's temporal context can result in more robust correspondences by reducing the influence of high-likelihood skills in any single state. We compute an aggregated skill distribution $\phi \left( {k \mid s}\right)$ using a temporal window around the current state:
122
+
123
+ $$
124
+ \phi \left( {k \mid {s}_{t}}\right) = \frac{1}{Z\left( s\right) }\left( {\mathop{\sum }\limits_{{i = t}}^{T}{\gamma }_{ + }^{i}p\left( {k \mid {s}_{i}}\right) + \mathop{\sum }\limits_{{j = 1}}^{{t - 1}}{\gamma }_{ - }^{t - j}p\left( {k \mid {s}_{t - j}}\right) }\right) \tag{5}
125
+ $$
126
+
127
+ Here, ${\gamma }_{ + },{\gamma }_{ - } \in \left\lbrack {0,1}\right\rbrack$ determine the forward and backward horizon of the aggregate skill distribution. $Z\left( s\right)$ ensures that the aggregate probability distribution sums to one. Instead of ${p}^{\mathrm{{TA}}}$ in equation 4, we
128
+
129
+ use $\phi \left( {k \mid s}\right)$ . By matching all source-domain demonstration states to states in the target domain via $\phi \left( {k \mid s}\right)$ , we create a proxy dataset of target state demonstrations, which we use to pre-train the models ${p}^{\text{ demo }}\left( {k \mid s}\right)$ and $D\left( s\right)$ . Once trained, we use them for training the high-level policy via equation 3.
130
+
131
+ § 5 EXPERIMENTS
132
+
133
+ < g r a p h i c s >
134
+
135
+ Figure 4: We evaluate on three pairs of source (top) and target (bottom) environments. Left: maze navigation. The agent needs to follow a sequence of colored rooms (red path) but the maze layout changes substantially between source and target domains. Middle: kitchen manipulation. A robotic arm executes a sequence of skills, but the layout of the kitchens differs. Right: Same as before, but with human demonstrations from a real-world kitchen.
136
+
137
+ Our experiments are designed to answer the following questions: (1) Can we leverage demonstrations across domains to accelerate learning via semantic imitation? (2) Can we use semantic imitation to teach a robot a new task from real-world videos of humans performing the task? (3) Is our approach robust to missing skills in the demonstrations? We test semantic imitation across two simulated maze and kitchen environments, as well as from real-world videos of humans to a simulated robot. Our results show that our approach can accelerate learning from cross-domain demonstrations, even with a real-to-sim gap.
138
+
139
+ § 5.1 CROSS-DOMAIN IMITATION IN SIMULATION
140
+
141
+ We first test our approach STAR in two simulated settings: a maze navigation and a robot kitchen manipulation task (see Figure 4, left & middle). In the maze navigation task, both domains have corresponding rooms, indicated by their color in Figure 4. The agent needs to follow a sequence of semantic skills like "go to red room", "go to green room" etc. In the kitchen manipulation task, a Franka arm tackles long-horizon manipulation tasks in a simulated kitchen [3]. We define 7 semantic skills, like "open the microwave" or "turn on the stove" in the source and target environments. In both environments we collect demonstrations in the source domain, and task-agnostic datasets in both the source and target domains using motion planners and human teleoperation respectively. For further details on action and observation spaces, rewards and data collection, see Sec C.4.
142
+
143
+ We compare our approach to multiple prior skill-based RL approaches with and without demonstration guidance: SPiRL [16] learns skills from ${\mathcal{D}}_{T}$ and then trains a high-level policy over skills; BC+RL [7, 8] pre-trains with behavioral cloning and finetunes with SAC [31]; SkillSeq, similar to Xu et al. [18], sequentially executes the semantic skills as demonstrated; SkiLD [14] is an oracle with access to demonstrations in the target domain and follows them using learned skills. For more details on the implementation of our approach and all comparisons, see appendix, Sections C.1 - C.3.
144
+
145
+ Figure 5, left, compares the performance of all approaches in both tasks. BC+RL is unable to leverage the cross-domain demonstrations and makes no progress on the task. SPiRL is able to learn the kitchen manipulation task, but requires many more environment interactions to reach the same performance as our approach. SkillSeq succeeds in approximately 20% of the maze episodes and solves on average 3 out of 4 subtasks in the kitchen manipulation environment after fine-tuning. The mixed success is due to inaccuracies in execution of the skill policies. Our approach, STAR, can use cross-domain demonstrations to match the learning efficiency of SkiLD (oracle) that has access to target domain demonstrations. This shows that our approach is effective at extracting useful information from cross-domain demonstrations. We find that this trend holds even in the "pure" imitation learning (IL) setting without environment rewards, where we solely rely on the learned discriminator reward to guide learning (see appendix, Section D for detailed results). Thus,
146
+
147
+ < g r a p h i c s >
148
+
149
+ Figure 5: Left: Performance on the simulated semantic imitation tasks. STAR matches the performance of the oracle, SkiLD, which has access to target domain demonstrations, and outperforms both SPiRL, which does not use demonstrations, and SkillSeq, which follows the demonstrated semantic skills sequentially. Right: Ablations in the kitchen environment; see main text for details.
150
+
151
+ < g r a p h i c s >
152
+
153
+ Figure 6: Semantic imitation from human demonstrations. Left: Qualitative state matching results. The top row displays frames subsampled from a task demonstration in the human kitchen source domain. The bottom row visualizes the states matched to the source frames via the procedure described in Section 4.3. The matched states represent corresponding semantic scenes in which the agent e.g., opens the microwave, turns on the stove or opens the cabinet. Right: Quantitative results on the kitchen manipulation task from human video demonstrations.
154
+
155
+ STAR can be used both as a demonstration-guided RL algorithm and for cross-domain imitation learning. Qualitative results can be viewed at https://tinyurl.com/star-rl and in Figure 8.
156
+
157
+ To study the different components of our approach, we run ablations in the FrankaKitchen environment (Fig. 5, right). Removing the discriminator-based weighting for the demonstration regularization (-D-weight) (Eq. 3) or removing the demonstration regularization altogether (-DemoReg) leads to poor performance. In contrast, removing the discriminator-based dense reward (-D-reward) or temporal aggregation during matching (-TempAgg) affects learning speed but has the same asymptotic performance. Finally, a model without the latent variable $z$ ($-\mathbf{z}$) cannot model the diversity of skill executions in the data; the resulting skills are too imprecise to learn long-horizon tasks. We show qualitative examples of the effect of varying matching window sizes $\left\lbrack {{\gamma }_{ - },{\gamma }_{ + }}\right\rbrack$ on the project website: https://tinyurl.com/star-rl.
158
+
159
+ § 5.2 IMITATION FROM HUMAN DEMONSTRATIONS
160
+
161
+ In this section we ask: can our approach be used to leverage human video demonstrations for teaching new tasks to robots? Imitating human demonstrations presents a larger challenge since it requires bridging domain differences that span observation spaces (from images in the real-world to low-dimensional states in simulation), agent morphologies (from a bimanual human to a 7DOF robot arm), and environments (from the real-world to a simulated robotic environment). To investigate this question, we collect 20 human video demonstrations in a real-world kitchen, which demonstrate a task the robotic agent needs to learn in the target simulated domain. Instead of collecting a large, task-agnostic dataset in the human source domain and manually annotating semantic skill labels, we demonstrate a more scalable alternative: we use an action recognition model, pre-trained on the EPIC Kitchens dataset [32], zero-shot to predict semantic skill distributions on the human demonstration videos. We define a mapping from the 97 verb and 300 noun classes in EPIC Kitchens to the skills present in the target domain and then use our approach as described in Section 4.2, using the EPIC skill distributions as the task-agnostic skill prior ${p}^{\mathrm{{TA}}}\left( {k \mid s}\right)$ . For data collection details, see Section C.4.
162
+
163
+ We visualize qualitative matching results between the domains in Figure 6, left. We successfully match frames to the corresponding semantic states in the target domain. In Figure 6, right, we show that this leads to successful semantic imitation of the human demonstrations. Our approach STAR with EPIC Kitchens auto-generated skill distributions is able to reach the same asymptotic performance as the oracle approach that has access to target domain demonstrations, with only slightly reduced learning speed. It also outperforms the SkillSeq and SPiRL baselines (for qualitative results see https://tinyurl.com/star-rl).
164
+
165
+ To recap: for this experiment we did not collect a large, task-agnostic human dataset and we did not manually annotate any human videos. Collecting a few human demonstrations in an unseen kitchen was sufficient to substantially accelerate learning of the target task on the robot in simulation. This demonstrates one avenue for scaling robot learning by (1) learning from easy-to-collect human video demonstrations and (2) using pre-trained skill prediction models to bridge the domain gap.
166
+
167
+ § 5.3 ROBUSTNESS TO NOISY DEMONSTRATIONS AND LABELS
168
+
169
+ < g r a p h i c s >
170
+
171
+ Figure 7: Semantic imitation with missing skills in the demonstrations. Our approach STAR still learns the full task faster than learning without demonstrations (SPiRL), while SkillSeq gets stuck at the missing skill.
172
+
173
+ In realistic scenarios, agents often need to cope with noisy demonstration data, e.g., partial demonstrations or faulty labels. Thus, we test STAR's ability to handle such noise. First, we test imitation from partial demonstrations with missing subskills. These commonly occur when there are large differences between the source and target domains, e.g., the demonstration domain might already have a pot on the stove, so the demonstration starts with "turn on the stove", while in the target domain we first need to place the pot on the stove. We test this in the simulated kitchen tasks by dropping individual subskills from the demonstrations (‘w/o Task i’ in Figure 7). Figure 7 shows that the SkillSeq approach struggles with such noise: it gets stuck whenever the corresponding skill is missing from the demonstration. In contrast, STAR can leverage demonstrations that are missing entire subskills and still learns faster than the no-demonstration baseline SPiRL. When a skill is missing, the STAR agent finds itself off the demonstration support. The objective in equation 3 then regularizes the policy towards the task-agnostic skill prior, encouraging the agent to explore until it finds its way back to the demonstration support. This allows our method to bridge "holes" in the demonstrations. We also test STAR's robustness to noisy semantic skill labels in Section E. We find that STAR is robust to errors in the annotated skill lengths and to uncertain skill detections. Only frequent, high-confidence mis-detections of skills can lead to erroneous matches and decreased performance. Both experiments show that STAR's guidance with semantic demonstrations is robust to noise in the training and demonstration data.
174
+
175
+ § 6 CONCLUSION AND LIMITATIONS
176
+
177
+ In this work, we presented STAR, an approach for imitation based on semantic skills that can use cross-domain demonstrations for accelerating RL. STAR is effective on multiple semantic imitation problems, including using real-world human demonstration videos for learning a robotic kitchen manipulation task. Our results present a promising way to use large-scale human video datasets like EPIC Kitchens [32] for behavior learning in robotics. However, our approach assumes a pre-defined set of semantic skills and semantic skill labels on the training data. We demonstrated how such assumptions can be reduced via the use of pre-trained skill prediction models. Yet, obtaining such semantic information from cheaper-to-collect natural language descriptions of the training trajectories without a pre-defined skill set is an exciting direction for future work. Additionally, strengthening the robustness to skill mis-labelings, e.g., via a more robust state matching mechanism, can further improve performance on noisy, real-world datasets.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/Ag-vOezQ0Gw/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,301 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # ROS-PyBullet Interface: A Framework for Reliable Contact Simulation and Human-Robot Interaction
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Reliable contact simulation plays a key role in the development of (semi-)autonomous robots, especially when dealing with contact-rich manipulation scenarios, an active robotics research topic. Besides simulation, components such as sensing, perception, data collection, robot hardware control, human interfaces, etc. are all key enablers towards applying machine learning algorithms or model-based approaches in real world systems. However, there is a lack of software connecting reliable contact simulation with the larger robotics ecosystem (i.e. ROS, Orocos), for a more seamless application of novel approaches, found in the literature, to existing robotic hardware. In this paper, we present the ROS-PyBullet Interface, a framework that provides a bridge between the reliable contact/impact simulator PyBullet and the Robot Operating System (ROS). Furthermore, we provide additional utilities for facilitating Human-Robot Interaction (HRI) in the simulated environment. We also present several use-cases that highlight the capabilities and usefulness of our framework. Please check our video, source code, and examples included in the supplementary material.
12
+
13
+ ## 16 1 Introduction
14
+
15
+ Dealing with contacts is a key requirement for robots to become effective in our daily lives and valuable assets in industry [1]. Examples of contact-rich tasks include pick and place [2], locomotion [3, 4, 5], wiping [6], pushing [7, 8], dyadic co-manipulation [9], and robot surgery [10, 11]. Developing approaches for (semi-)autonomous robots involving contact is thwart with practical issues (e.g. slippage, high impulsive forces, model mismatch, etc.) that may cause failure and damage to the robot or yield safety concerns for humans in close proximity.
16
+
17
+ Development of machine learning approaches require a facility to collect large datasets. However, collection on scale is non-trivial. Learning from demonstration [12] is a popular technique for endowing robots with new skills where a human provides examples. According to one paradigm, kinaesthetic teaching (e.g. [13]), the human interacts directly with the robot. However, this method requires a physical system which is cumbersome and has potential safety concerns. A second approach utilizes teleoperation (e.g. [14]), which has the benefit that the human operator can either interact with a simulated robot or at a distance with the physical system (ensuring safety). Two key considerations for our framework are that the virtual world can include a human model via telep-resence and that the human operator can experience the virtual forces generated by the simulator, through interface to haptic devices. Furthermore, to ease issues when deploying methods on robot hardware, we provide software features to easily map targets to the robot control commands.
18
+
19
+ In robotics, the system's complexity, need for several sub-processes, and multi-machine operations motivate a modular system design and message parsing functionality [15, 16, 17]. There are many packages for the Robot Operating System (ROS) [16] integrating useful functionalities for research and commercial systems, including useful data structures, control interfaces, inverse kinematics (IK) and motion planning, perception tools, etc. $\left\lbrack {{18},{19},{20},{21},{22}}\right\rbrack$ . Additionally, various visualizers $\left\lbrack {{23},{24},{25}}\right\rbrack$ and physics simulators $\left\lbrack {{26},{27},{28},{29},{30}}\right\rbrack$ allow for the development and testing of algorithms, prior to execution on robot hardware. However, popular and reliable libraries for contact-rich scenarios, such as PyBullet [29], Drake [28], and MuJoCo [27], lack integration with the ROS framework.
20
+
21
+ ![01963fbf-3562-7350-a6b8-aea478a36728_1_320_199_1158_435_0.jpg](images/01963fbf-3562-7350-a6b8-aea478a36728_1_320_199_1158_435_0.jpg)
22
+
23
+ Figure 1: Outline of the proposed framework for bridging a reliable contact simulator within the ROS ecosystem, and additional HRI interfaces. Colored boxes indicate our framework.
24
+
25
+ ### 1.1 Contributions
26
+
27
+ In this paper, we propose
28
+
29
+ - A framework (Figure 1) for simulating manipulation scenarios, allowing for seamless collection of contact-rich data and human demonstrations within a simulated environment. The framework includes
30
+
31
+ - full physics simulation utilizing PyBullet,
32
+
33
+ - sensor simulation (i.e. robot joint force-torque and point cloud),
34
+
35
+ - integration with the ROS ecosystem,
36
+
37
+ - several teleoperation interfaces (e.g. keyboard, mouse, joystick, 6D-mouse, haptic device) and robots (e.g. Kawada Nextage humanoid, KUKA LWR robot arm, Talos humanoid, Kinova robot arm, and dual-arm KUKA IIWA), and
38
+
39
+ - easy integration with robot hardware.
40
+
41
+ - Several use-cases to demonstrate the capabilities and usefulness of the framework, and full documentation (see supplementary material).
42
+
43
+ Our framework exploits modularity and extensibilty by implementing several ROS nodes and object types using class hierarchy design patterns. Furthermore, we include additional features such as: utilities for Model Predictive Control (MPC) design and development, motion planning, safe robot operation (for ensuring safety limits are satisfied before commanding the real robot), and inverse kinematics. Several recent works use the proposed framework [31, 32, 33, 34, 35].
44
+
45
+ ## 2 Proposed Framework
46
+
47
+ To enable easy prototyping, implementation, and integration with hardware we implement several tools/features in the framework. This section describes these and our design decisions.
48
+
49
+ ### 2.1 Framework features
50
+
51
+ Figure 1 shows an overview for our framework highlighting it as a central interface between the ROS ecosystem and PyBullet, human interaction, IK solvers, and real robots. The main features of the framework are listed as follows.
52
+
53
+ 1. Online, full-physics simulation using a reliable contact simulator. The framework relies on PyBullet to enable well established contact simulation for rigid/deformable bodies.
54
+
55
+ 2. Integration with the ROS ecosystem. Robot simulation and visualization of real robots/objects (utilizing sensing) are integrated via ROS. Furthermore, this enables (i) ROS packages to be integrated with PyBullet and (ii) a straightforward way to port developed algorithms to real systems.
56
+
57
+ 3. Several interfaces enabling HRI with virtual worlds and telepresence. We provide facilities for human's to provide examples in a simulated environment via several popular interfaces (including haptic devices).
58
+
59
+ 4. Modular and extensible design. Our framework adopts a modular (i.e. several ROS nodes) and highly extensible design paradigm (i.e. class hierarchy) using the Python programming language. This makes it easy to quickly develop new features for the framework.
60
+
61
+ 5. Data collection with standard ROS tools. Since the framework provides an interface to ROS, we can leverage common tools for data collection such as ROS bags [36] and data processing to common formats in machine learning applications, i.e. rosbag_pandas [37].
62
+
63
+ 6. Integration with robot and sensing hardware. Tools are provided to easily remap the virtual system to physical hardware and integrate real sensing apparatus in the PyBullet simulation (e.g. vicon).
64
+
65
+ ### 2.2 Full-Physics Contact-rich Simulation
66
+
67
+ Several simulators exist for full-physics simulation; e.g. Gazebo ${}^{1}$ with ODE [25,26], PyBullet [29], Drake [28], and MuJoCo [27]. Reliably simulating contact between several bodies complicates the models. The simulators PyBullet, Drake, and MuJoCo have been identified as reliable for modeling real world impacts [38].
68
+
69
+ We chose to use PyBullet [29] since it is free and open source, a well-known library with an active community, easy to install, well documented, and is in Python. To ensure our framework is extensible, we develop several classes that interface with PyBullet and establish communication links with ROS utilizing publishers, subscribers, timers, and services.
70
+
71
+ ### 2.3 Robots and virtual worlds building
72
+
73
+ Our framework provides several tools to build virtual worlds. The main ROS-PyBullet Interface node must specify a main YAML configuration file containing a list of robots/objects to load into PyBullet, parameters, RGB-D sensor configuration, and visualizer options - robots/objects can also be added/removed through ROS services. Various examples of robots/tasks are shown in Figure 2. Several object types (Section 2.3.1) were developed with different interaction properties and various communication channels with ROS. The specification for each object is defined in separate YAML files. See the documentation provided in the supplementary material for a full list of parameters for the configuration files, ROS topics published/subscribed, and ROS services provided.
74
+
75
+ #### 2.3.1 PyBullet Objects
76
+
77
+ Robot Incorporating robots in our framework is simple. In a configuration file the user will specify the URDF file name, and several parameters (e.g. base position, inital joint configuration, etc). The robot base frame with respect to the world frame (named rpbi/world) is set by a transform broadcast using the ROS TF library [18]. The interface optionally publishes the robot joint/link states to ROS. Mobile and floating-base robots are also supported by our framework given an initial state (i.e. pose and linear/angular velocity). ROS services expose PyBullet's IK features, allow the user to move the the robot to a given joint/end-effector state ${}^{2}$ , and return robot information (e.g. joint/link names, number of degrees of freedom, etc).
78
+
79
+ Visual robot A visualization of a robot, that does not interact with the PyBullet environment, can be instantiated by setting the is_visual_robot parameter to True in the robot configuration file. The main utility of this feature is to allow the user to map a real robot state to the PyBullet environment enabling them to easily compare the real robot with a simulated robot representing the target configuration. Only the robot information and IK services are available for visual robots.
80
+
81
+ Visual object A common requirement for simulators is to visualize objects. In PyBullet, these objects do not affect other bodies in the scene, nor react to those. Visual objects are listed in the
82
+
83
+ ---
84
+
85
+ ${}^{1}$ An incomplete Bullet interface for Gazebo exists. To the best of the authors knowledge, it is deprecated.
86
+
87
+ ${}^{2}$ When an end-effector state is given, the corresponding joint state is found using PyBullet’s IK features.
88
+
89
+ ---
90
+
91
+ ![01963fbf-3562-7350-a6b8-aea478a36728_3_317_201_795_533_0.jpg](images/01963fbf-3562-7350-a6b8-aea478a36728_3_317_201_795_533_0.jpg)
92
+
93
+ Figure 2: Examples of the ROS-PyBullet Interface: Top-row (left-right): a Kinova arm reaching for a cup, a Kuka LWR arm simulating a wiping task, and a human model waving. Bottom-row (left-right) a Talos humanoid robot reaching for a target while maintaining balance, the Nextage robot simulating a smart factory scenario, and simulated RGB-D data visualized in RViz.
94
+
95
+ main configuration file under the parameter visual_objects. A visual object was used to visualize the real pushing box (tracked with Vicon) in [33].
96
+
97
+ Collision object Modeling static objects, that cause momentum changes for other bodies upon collision, such as floors, ceilings, and walls are often necessary. These objects are listed under the parameter collision_objects in the main configuration. During the development of the experiments presented in [31], a collision object was used to represent a surface.
98
+
99
+ Dynamic object Objects whose pose evolution is completely determined by the simulator's physics engine are a key requirement for development of control algorithms. Dynamic objects, listed as dynamic_objects in the main configuration file, can be used inside PyBullet to simulate an object. A simulated pushing box was used to develop the controller in [33].
100
+
101
+ Soft object All previously described object types are rigid bodies. We also provide an interface to PyBullet deformable objects. These are similar to dynamic objects in that their evolution is defined by PyBullet and can be specified using the soft_objects parameter.
102
+
103
+ Load from URDF Finally, robots and objects can also be loaded directly into the PyBullet environment using the urdfs parameter. The evolution of these objects are defined by PyBullet. However, since the usage of these objects is ambiguous their communication with ROS is limited.
104
+
105
+ #### 2.3.2 Sensor simulation
106
+
107
+ Many control, and planning algorithms rely on sensory feedback. Integrating multiple sensory inputs, such as tactile and vision, is underdeveloped in robotic manipulation [39]. To enable future research in realistic simulated environments, we provide an interface to several sensing modalities.
108
+
109
+ $\mathbf{F}/\mathbf{T}$ sensor Through the configuration file for the robot it is possible to instantiate a simulated force-torque sensor attached to any joint on the robot. These virtual sensors publish joint reaction forces, read from PyBullet, as ROS wrench-stamped messages at a user-defined sampling frequency.
110
+
111
+ RGB-D camera Color and depth perception is a key sensing capability for object state estimation in contact-rich manipulation tasks. An RGB-D camera can be instantiated and attached to a frame through the main configuration file.
112
+
113
+ The color and depth images (image) are published together with the intrinsic camera parameters (camera_info). The camera intrinsic parameters are derived from the OpenGL projection matrix and can be used to back-project the images to a colored point cloud (Figure 4). This is natively supported in ROS, e.g. via the RViz DepthCloud plugin or via the rgbd_launch package. Optionally, the interface can compute and publish the point cloud data directly.
114
+
115
+ On a discrete GPU (NVIDIA GeForce GTX 1650 Mobile) this will achieve about ${27}\mathrm{\;{Hz}}$ , while on an integrated GPU (Intel UHD Graphics 630) this will reduce to about ${18}\mathrm{\;{Hz}}$ .
116
+
117
+ ![01963fbf-3562-7350-a6b8-aea478a36728_4_312_201_1172_321_0.jpg](images/01963fbf-3562-7350-a6b8-aea478a36728_4_312_201_1172_321_0.jpg)
118
+
119
+ Figure 4: RGB-D observation of a scene with two objects: (a) colour image, (b) depth image, (c) projected colour point cloud in RViz.
120
+
121
+ ### 2.4 Human interaction
122
+
123
+ Developing contact-aware algorithms is a key aspect of future work for the robotics community [39,40]. Clearly there are safety concerns for HRI tasks. Simulation is a necessary step in any development cycle involving robots. This means the only way a human can interact with the virtual environment is through some interface (e.g. haptic device). To remedy this issue, we have developed a plugin for several human interfaces so that the human can be realized in the virtual environment and also receive virtual feedback from that environment.
124
+
125
+ Operator node Several common human interfaces and their drivers are provided in our framework out-of-the-box, these include: keyboard, mouse, joystick, space mouse, and a haptic device. Mapping the interface signals to a control space is non-trivial [41]. Choosing this mapping is simplified by allowing the user to specify the teleoperation mode in the launch file. We provide ROS nodes in the operator_node package that take raw interface signals as input and maps them to a control space of the users specification.
126
+
127
+ Logging signals There are several advantages for an intermediary node mapping driver signals to operator commands. First, modularity allows the user to easily swap out interfaces/mappings to compare modes. Second, methods utilizing moving horizon estimation (e.g. [31]) need to track a window of signals. We provide a node that enables this functionality in the operator_interface_logger node. Third, such a structure enables collection/comparison of data streams using ROS bags removing the need for extensive post-processing.
128
+
129
+ Scale node Several systems utilize joysticks, often the scaled value of a joystick axis defines velocity in certain dimensions $\left\lbrack {{42},{43}}\right\rbrack$ . We provide scale_node.py that appropriately orders and scales the interfaces axes.
130
+
131
+ Isometric node Point-mass systems are common models for teleoperation, e.g. [31, 32]. The individual axes of the interface is often in the range $\left\lbrack {-1,1}\right\rbrack$ . If we scaled the joystick axes, as before, then the magnitude of the maximum velocity is non-uniform for all interface states. We provide isometric node , py that ensures the maximum velocity magnitude is isometric.
132
+
133
+ ### 2.5 Interfacing with real hardware
134
+
135
+ Interfacing with hardware is straightforward using our framework. Each object can publish/broadcast its state in several formats (e.g. joint states, float arrays, transforms, wrenches). This means the simulator can act, at development time, as the real system. When porting to the real hardware we provide nodes that remap the current system states to the required robot/hardware drivers: see the remap_joint_state_to_floatarray and remap_joint_state nodes in the custom_ros_tools package. Setup for these is as simple as remapping topics in a launch file or enabling a ROS re-mapper. The framework can also visualize the current state of physical robots/objects in PyBullet. This is very useful when debugging software and hardware.
136
+
137
+ ### 2.6 Additional utilities
138
+
139
+ We also provide several utilities to facilitate development and safe robot operation. These utilities are described below.
140
+
141
+ ![01963fbf-3562-7350-a6b8-aea478a36728_5_307_201_1181_364_0.jpg](images/01963fbf-3562-7350-a6b8-aea478a36728_5_307_201_1181_364_0.jpg)
142
+
143
+ Figure 5: A human interacting with a virtual world. (a) The experimental setup where the human interacts with the virtual world using a haptic device. (b) The Kuka LWR robot in PyBullet about to interact with a static collision object including the coordinate frames (the frames are not presented to the user). (c) The force-feeback evolution that is rendered to the human via the haptic device.
144
+
145
+ Model Predictive Control When developing MPC methods [31, 33], it is useful to slowly iterate through individual MPC iterations. Start, stop, and step ROS services are provided that allow the user to easily debug MPC controllers further facilitated by a GUI interface (see rpbi_controls_node.py in the rpbi_utils package).
146
+
147
+ Inverse Kinematics Inverse kinematics is a key requirement for robotic systems with several established libraries $\left\lbrack {{44},{45},{46},{21}}\right\rbrack$ . We provide a standardized interface ik_ros that allows a user to easily swap out and compare different solvers. Several solvers are interfaced out-of-the box $\left\lbrack {{45},{21},{44},{29}}\right\rbrack$ . The implementation is extensible so that additional solvers can be easily included.
148
+
149
+ Interpolation Control often requires smaller time resolution than planning. In this case, interpolation between knot points is necessary. The framework provides interpolation_node.py in the rpbi_utils package that performs interpolation of planned trajectories.
150
+
151
+ Time-sync ROS with PyBullet Synchronizing time between processes and a simulated time can be useful. We provide an option so that the user can synchronize the ROS clock with PyBullet's simulation time.
152
+
153
+ Safe robot operation Safely operating robots is of paramount importance - especially when conducting HRI experiments. We provide a package safe_robot that ensures safe robot motions acting as a guard between target and commanded states - every target is checked (e.g. link and joint position/velocity limits, and self-collision) prior to being commanded on the real system.
154
+
155
+ ## 3 Use-cases
156
+
157
+ This section describes four use-cases highlighting the features of the proposed framework: (i) human 2 interaction with a virtual world, (ii) learning from demonstration using dynamic movement primi- 3 tives, (iii) full-body telepresence, and (iv) hardware realization. The code for running examples (i) and (ii) has been made fully open source. We plan to make the driver for the Xsens suit, used in (iii), open source along with an example.
158
+
159
+ ### 3.1 Human interaction with virtual worlds
160
+
161
+ Developing robust learning and control techniques in contact-rich scenarios utilizing human input requires the human to interact with the virtual world. In order to prototype contact-rich machine learning and optimization-based methods with realistic interaction sequences requires such a haptic interface and a simulator for generating realistic force-feedback.
162
+
163
+ In this use-case, we present the user with a haptic interface, a simulated Kuka LWR robot arm, and a static collision box object (Figure 5a). Using the interface, the user controls a target position defined in the ${z}_{W}$ axis that is constrained in the ${x}_{W},{y}_{W}$ axes (Figure 5b). The task for the robot is to minimize the distance between the end-effector position and the target while keeping the ${z}_{E}$ and ${z}_{W}$ axes aligned. The joint motion is generated using the IK features in PyBullet, we interface with
164
+
165
+ ![01963fbf-3562-7350-a6b8-aea478a36728_6_319_208_1156_276_0.jpg](images/01963fbf-3562-7350-a6b8-aea478a36728_6_319_208_1156_276_0.jpg)
166
+
167
+ Figure 6: Pipeline for the learning from demonstration use-case.
168
+
169
+ this functionality using the ik_ros package described in Section 2.6. A simulated Force-Torque sensor is attached to the robot at the wrist joint. When the end-effector comes into contact with the static collision object, the force measured by the sensor in the ${z}_{E}$ axis is rendered to the user via force-feedback. The evolution of the force feedback for a single interaction with the the box object is shown in Figure 5c. To run this example, attach a haptic device (3D Systems Touch X), and execute the command roslaunch rpbi_examples human_interaction.launch.
170
+
171
+ ### 3.2 Learning from demonstration
172
+
173
+ Dynamic movement primitives (DMPs) [47] are a widely used mathematical formulation for modelling motor control of biological systems. Over recent years they have become a key component of learning from demonstration [48]. Many packages, examples, and code exists for learning and executing DMPs - several have been integrated in ROS. To highlight the flexibility and potential for learning from demonstration utlizing our framework we leverage a standard ROS package for learning DMPs [49].
174
+
175
+ The goal in this section is to demonstrate how to learn a DMP from a teleoperated demonstration using our framework. In this use-case, the user interfaces with the system using the keyboard. A Kuka LWR robot arm is presented in PyBullet, and controlled in position control mode (Figure 6). The interface commands $h$ are mapped to end-effector velocity in two dimensions. We use the EXOTica [21] plugin in the ik_ros package link-anonymized to perform inverse kinematics - the box and end-effector states ${x}_{d},{u}_{d}$ are saved as the human demonstration. The goal is for the human to demonstrate how to push a box from a starting location ${x}_{0}$ to the goal position ${x}_{g}$ ; this behavior is then learned from the human demonstration ${u}_{d}$ using a DMP $\widehat{\theta }$ . A random starting location for the end effector is chosen and the DMP is used to plan a motion $\widehat{x},\widehat{u}$ . When the DMP is executed, a starting position is chosen randomly. The data can be collected using standard ROS tools and processed to a Pandas data frame for analysis. To run this example, open a terminal and execute roslaunch rpbi_examples lfd.launch.
176
+
177
+ ### 3.3 Full-body virtual-presence
178
+
179
+ In this use-case, we show that not only can we setup various teleoperation examples for the development of learning and control, but also we can integrate a realization of a virtual human that can interact with the PyBullet environment. We equip a human with an XSens suit and publish the 3D positions of each human body part into ROS as a transformation. Consecutively, we align each link of the human model in PyBullet with the positions of each body part of the human to realize a full-body virtual-presence of a humanoid figure. The setup for this use-case is shown via an assistive dressing scenario in Figure 7a, full details of the work can be found in [35].
180
+
181
+ ### 3.4 Integration with hardware and MPC
182
+
183
+ A key feature of our framework is its ability to easily integrate with robot hardware and develop methods for MPC [31, 33]. In this use-case we highlight this feature of our framework using a pushing task deployed on the Kawada Nextage humanoid robot. The setup is shown in Figure 7c. A typical development cycle for research is to develop first in simulation, and then port the work to the real system. Due to the complexity of robotic systems, data collection, hardware issues, etc., the latter step can be quite time consuming. Our goal during development of the framework was to minimize this difficulty. We developed the facility to remap target joint states to several formats required by real systems in our lab, and several object types that can interface with real sensors (e.g. Vicon, AprilTags) or broadcast transforms (similar to a real object equipped with Vicon markers or AprilTags). The interface makes it simple to swap between testing/prototyping the system in simulation, and deploying on the real system - our demonstration reduces to setting a flag (real_robot $=$ True or False). Furthermore, this use-case highlights the utility of the time-stepping feature of the interface for developing MPC algorithms, described in Section 2.6. Typically this can only be achieved in simulation, however since our system maps the simulator state to the real robot and has the facility to track objects using online sensing, e.g. we used Vicon, the framework can execute an MPC iteration on the real system by the user clicking a button. This significantly reduces the development time for laborious tasks such as parameter tuning.
184
+
185
+ ![01963fbf-3562-7350-a6b8-aea478a36728_7_333_201_1127_355_0.jpg](images/01963fbf-3562-7350-a6b8-aea478a36728_7_333_201_1127_355_0.jpg)
186
+
187
+ Figure 7: Integrating robot hardware and sensing. (a) and (b) Real-time virtual-presence of a human in the simulation environment during assistive dressing. (a) The configuration of the human is sensed with the Xsens suit. (b) A humanoid figure that corresponds to the human is shown, along with a green visual object (Section 2.3.1) used to illustrate the estimated region of the occluded human elbow. (b) The Kawada Nextage humanoid robot performing a pushing task.
188
+
189
+ ## 4 Limitations
190
+
191
+ We implemented the framework using Python, a popular programming language with numerous libraries-making it suitable for our goals of easy prototyping and extensiblity. Despite Python being unsuitable for real-time systems, the robotics community has broadly adopted it as the language for implementing robotic experiments. Furthermore, we have found no latency issues in all of our experimental setups, running at frequencies of up to ${200}\mathrm{\;{Hz}}$ .
192
+
193
+ Currently, the framework only supports ROS Noetic-thus the only way to run the framework with ROS2 is via the ROS1 bridge [50]. Porting the framework to ROS2 is, at the time of writing this manuscript, in-development.
194
+
195
+ ## 5 Conclusions
196
+
197
+ In this paper, we have proposed a framework for simulating/collecting data for contact-rich manipulation scenarios including: full physics simulation using PyBullet (known for reliable impact/contact modeling), integration with the ROS ecosystem, and several teleoperation interfaces and robots. Furthermore, the framework easily and demonstrably interfaces with robot hardware. We have specifically designed the implementation to exploit modularity and extensibility and to be highly flexible and easily developed.
198
+
199
+ We hope students, researchers, and industry will make use this framework to facilitate the development of their control algorithms and machine learning approaches in scenarios involving contact. The supplementary material contains full documentation. The main code base is open source at link-anonymized, which contains several examples, documentation, and videos. Dependencies are linked to from the documentation and system requirements are detailed. The framework is released under the LGPL license. The suggested setup is ROS Noetic, Python 3, on Ubuntu 20.04.
200
+
201
+ References
202
+
203
+ [1] I. Kao, K. Lynch, and J. W. Burdick. Contact Modeling and Manipulation, pages 647-669. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008. doi:10.1007/978-3-540-30301-5.28.
204
+
205
+ [2] R. A. Brooks. Planning collision- free motions for pick-and-place operations. The International Journal of Robotics Research, 2(4):19-44, 1983. doi:10.1177/027836498300200402.
206
+
207
+ [3] A. W. Winkler, C. D. Bellicoso, M. Hutter, and J. Buchli. Gait and trajectory optimization for legged systems through phase-based end-effector parameterization. IEEE Robotics and Automation Letters, 3(3):1560-1567, 2018. doi:10.1109/LRA.2018.2798285.
208
+
209
+ [4] C. Mastalli, R. Budhiraja, W. Merkt, G. Saurel, B. Hammoud, M. Naveau, J. Carpentier, L. Righetti, S. Vijayakumar, and N. Mansard. Crocoddyl: An Efficient and Versatile Framework for Multi-Contact Optimal Control. In IEEE International Conference on Robotics and Automation (ICRA), 2020.
210
+
211
+ [5] M. Posa, C. Cantu, and R. Tedrake. A direct method for trajectory optimization of rigid bodies through contact. The International Journal of Robotics Research, 33(1):69-81, 2014. doi: 10.1177/0278364913506757.
212
+
213
+ [6] C. E. Mower, J. Moura, and S. Vijayakumar. Skill-based Shared Control. In Proceedings of Robotics: Science and Systems (R:SS), Virtual, July 2021. doi:10.15607/RSS.2021.XVII.028.
214
+
215
+ [7] F. R. Hogan and A. Rodriguez. Reactive planar non-prehensile manipulation with hybrid model predictive control. The International Journal of Robotics Research, 39(7):755-773, 2020. doi: 10.1177/0278364920913938.
216
+
217
+ [8] J. Moura, T. Stouraitis, and S. Vijayakumar. Non-prehensile planar manipulation via trajectory optimization with complementarity constraints. In Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Jan. 2022.
218
+
219
+ [9] T. Stouraitis, I. Chatzinikolaidis, M. Gienger, and S. Vijayakumar. Online hybrid motion planning for dyadic collaborative manipulation via bilevel optimization. IEEE Transactions on Robotics, 36(5):1452-1471, 2020. doi:10.1109/TRO.2020.2992987.
220
+
221
+ [10] F. Rydén and H. J. Chizeck. Forbidden-region virtual fixtures from streaming point clouds: Remotely touching and protecting a beating heart. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3308-3313, 2012. doi:10.1109/IROS.2012. 6386012.
222
+
223
+ [11] H. Saeidi, J. D. Opfermann, M. Kam, S. Wei, S. Leonard, M. H. Hsieh, J. U. Kang, and A. Krieger. Autonomous robotic laparoscopic surgery for intestinal anastomosis. Science Robotics, 7(62), 2022. doi:10.1126/scirobotics.abj2908.
224
+
225
+ [12] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009. doi:https://doi.org/ 10.1016/j.robot.2008.10.024.
226
+
227
+ [13] L. Armesto, J. Moura, V. Ivan, M. S. Erden, A. Sala, and S. Vijayakumar. Constraint-aware learning of policies by demonstration. The International Journal of Robotics Research, 37 (13-14):1673-1689, 2018. doi:10.1177/0278364918784354.
228
+
229
+ [14] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-z: Zero-shot task generalization with robotic imitation learning. In 5th Annual Conference on Robot Learning (CoRL), 2021. URL https://openreview.net/forum?id=8kbp23tSGYv.
230
+
231
+ [15] A. S. Huang, E. Olson, and D. C. Moore. LCM: Lightweight communications and marshalling. In IEEE/RSJ Ineternational Conference on Intelligent Robots and Systems (IROS), pages 4057- 4062, 2010. doi:10.1109/IROS.2010.5649358.
232
+
233
+ [16] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y. Ng, et al. ROS: an open-source robot operating system. In ICRA workshop on open source software, volume 3, page 5. Kobe, Japan, 2009.
234
+
235
+ [17] H. Bruyninckx, P. Soetens, and B. Koninckx. The real-time motion control core of the Orocos project. In IEEE International Conference on Robotics and Automation (ICRA), volume 2, pages 2766-2771 vol.2, 2003. doi:10.1109/ROBOT.2003.1242011.
236
+
237
+ [18] T. Foote. tf: The transform library. In IEEE Conference on Technologies for Practical Robot Applications (TePRA), pages 1-6, 2013. doi:10.1109/TePRA.2013.6556373.
238
+
239
+ [19] D. Coleman, I. Sucan, S. Chitta, and N. Correll. Reducing the barrier to entry of complex robotic software: a MoveIt! case study, 2014. URL https://arxiv.org/abs/1404.3785.
240
+
241
+ [20] S. Chitta, E. Marder-Eppstein, W. Meeussen, V. Pradeep, A. R. Tsouroukdissian, J. Bohren, D. Coleman, B. Magyar, G. Raiola, M. Lüdtke, and E. F. Perdomo. ros_control: A generic and simple control framework for ros. Journal of Open Source Software, 2(20):456, 2017. doi:10.21105/joss.00456. URL https://doi.org/10.21105/joss.00456.
242
+
243
+ [21] V. Ivan, Y. Yang, W. Merkt, M. P. Camilleri, and S. Vijayakumar. EXOTica: An Extensible Optimization Toolset for Prototyping and Benchmarking Motion Planning and Control, pages 211-240. Springer International Publishing, Cham, 2019. ISBN 978-3-319-91590-6. doi: 10.1007/978-3-319-91590-6_7.
244
+
245
+ [22] R. B. Rusu and S. Cousins. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13 2011. IEEE.
246
+
247
+ [23] W. Schroeder, K. Martin, B. Lorensen, and I. Kitware. The Visualization Toolkit: An Object-oriented Approach to 3D Graphics. Kitware, 2006. ISBN 9781930934191. URL https: //books.google.co.uk/books?id=rx4vPwAACAAJ.
248
+
249
+ [24] H. R. Kam, S.-H. Lee, T. Park, and C.-H. Kim. RViz: A toolkit for real domain data visualization. Telecommun. Syst., 60(2):337-345, oct 2015. doi:10.1007/s11235-015-0034-5.
250
+
251
+ [25] N. Koenig and A. Howard. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In IEEE/RSJ Ineternational Conference on Intelligent Robots and Systems (IROS), pages 2149-2154, Sendai, Japan, Sep 2004.
252
+
253
+ [26] R. Smith et al. Open dynamics engine. http://ode.org, 2005.
254
+
255
+ [27] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In IEEE/RSJ Ineternational Conference on Intelligent Robots and Systems (IROS), pages 5026- 5033, 2012. doi:10.1109/IROS.2012.6386109.
256
+
257
+ [28] R. Tedrake et al. Drake: A planning, control, and analysis toolbox for nonlinear dynamical systems, 2014.
258
+
259
+ [29] E. Coumans and Y. Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016-2020.
260
+
261
+ [30] K. Werling, D. Omens, J. Lee, I. Exarchos, and C. K. Liu. Fast and Feature-Complete Differentiable Physics Engine for Articulated Rigid Bodies with Contact Constraints. In Proceedings of Robotics: Science and Systems (R:SS), Virtual, July 2021. doi:10.15607/RSS.2021.XVII.034.
262
+
263
+ [31] Anonymized Authors. Anonymized Title. In Proceedings of Robotics: Science and Systems (R:SS), Virtual, July 2021.
264
+
265
+ [32] 1st Anonymized Author. Anonymized Title. PhD thesis, Anonymized University, 2021.
266
+
267
+ [33] Anonymized Authors. Anonymized Title. In Proceedings of IEEE International Conference on Robotics and Automation (ICRA) 2022, Jan. 2022.
268
+
269
+ [34] 2nd Anonymized Author. Anonymized Title. PhD thesis, Anonymized University, 2021.
270
+
271
+ [35] Anonymized Authors. Anonymized Title. IEEE Robotics and Automation Letters, 2022.
272
+
273
+ [36] T. Field, J. Leibs, J. Bowman, and D. Thomas. rosbag. http://wiki.ros.org/rosbag, 2020. Online; accessed 14 June 2022.
274
+
275
+ [37] A. Taylor. rosbag_pandas. http://wiki.ros.org/rosbag_pandas, 2019. Online; accessed 14 June 2022.
276
+
277
+ [38] B. Acosta, W. Yang, and M. Posa. Validating robotics simulators on real-world impacts. IEEE Robotics and Automation Letters, pages 1-1, 2022. doi:10.1109/LRA.2022.3174367.
278
+
279
+ [39] N. Fazeli, M. Oller, J. Wu, Z. Wu, J. B. Tenenbaum, and A. Rodriguez. See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion. Science Robotics, 4 (26), 2019. doi:10.1126/scirobotics.aav3123.
280
+
281
+ [40] A. Rodriguez. The unstable queen: Uncertainty, mechanics, and tactile feedback. Science Robotics, 6(54), 2021. doi:10.1126/scirobotics.abi4667.
282
+
283
+ [41] Anonymized Authors. Anonymized title. In 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), pages 1497-1504, 2019.
284
+
285
+ [42] W. Merkt, Y. Yang, T. Stouraitis, C. E. Mower, M. Fallon, and S. Vijayakumar. Robust shared autonomy for mobile manipulation with continuous scene monitoring. In 13th IEEE Conference on Automation Science and Engineering (CASE), pages 130-137, 2017. doi: 10.1109/COASE.2017.8256092.
286
+
287
+ [43] T. Klamt, D. Rodriguez, M. Schwarz, C. Lenz, D. Pavlichenko, D. Droeschel, and S. Behnke. Supervised autonomous locomotion and manipulation for disaster response with a centaur-like robot. In IEEE/RSJ Ineternational Conference on Intelligent Robots and Systems (IROS), pages 1-8. IEEE, 2018. doi:10.1109/IROS.2018.8594509.
288
+
289
+ [44] P. Beeson and B. Ames. TRAC-IK: An open-source library for improved solving of generic inverse kinematics. In IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pages 928-935, 2015. doi:10.1109/HUMANOIDS.2015.7363472.
290
+
291
+ [45] M. L. Felis. RBDL: an efficient rigid-body dynamics library using recursive algorithms. Autonomous Robots, pages 1-17, 2016. doi:10.1007/s10514-016-9574-0.
292
+
293
+ [46] D. Rakita, B. Mutlu, and M. Gleicher. RelaxedIK: Real-time Synthesis of Accurate and Feasible Robot Arm Motion. In Proceedings of Robotics: Science and Systems (R:SS), Pittsburgh, Pennsylvania, June 2018. doi:10.15607/RSS.2018.XIV.043.
294
+
295
+ [47] S. Schaal. Dynamic movement primitives-a framework for motor control in humans and humanoid robotics. In Adaptive motion of animals and machines, pages 261-280. Springer, 2006. doi:10.1007/4-431-31381-8_23.
296
+
297
+ [48] M. Saveriano, F. J. Abu-Dakka, A. Kramberger, and L. Peternel. Dynamic movement primitives in robotics: A tutorial survey, 2021.
298
+
299
+ [49] S. Niekum. dmp. http://wiki.ros.org/dmp, 2015. Online; accessed 14 June 2022.
300
+
301
+ [50] D. Thomas and J. Perron. rosl_bridge. https://index.ros.org/p/ros1_bridge/, 2022. Online; accessed 14 June 2022.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/Ag-vOezQ0Gw/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,195 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § ROS-PYBULLET INTERFACE: A FRAMEWORK FOR RELIABLE CONTACT SIMULATION AND HUMAN-ROBOT INTERACTION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Reliable contact simulation plays a key role in the development of (semi-)autonomous robots, especially when dealing with contact-rich manipulation scenarios, an active robotics research topic. Besides simulation, components such as sensing, perception, data collection, robot hardware control, human interfaces, etc. are all key enablers towards applying machine learning algorithms or model-based approaches in real world systems. However, there is a lack of software connecting reliable contact simulation with the larger robotics ecosystem (i.e. ROS, Orocos), for a more seamless application of novel approaches, found in the literature, to existing robotic hardware. In this paper, we present the ROS-PyBullet Interface, a framework that provides a bridge between the reliable contact/impact simulator PyBullet and the Robot Operating System (ROS). Furthermore, we provide additional utilities for facilitating Human-Robot Interaction (HRI) in the simulated environment. We also present several use-cases that highlight the capabilities and usefulness of our framework. Please check our video, source code, and examples included in the supplementary material.
12
+
13
+ § 16 1 INTRODUCTION
14
+
15
+ Dealing with contacts is a key requirement for robots to become effective in our daily lives and valuable assets in industry [1]. Examples of contact-rich tasks include pick and place [2], locomotion [3, 4, 5], wiping [6], pushing [7, 8], dyadic co-manipulation [9], and robot surgery [10, 11]. Developing approaches for (semi-)autonomous robots involving contact is thwart with practical issues (e.g. slippage, high impulsive forces, model mismatch, etc.) that may cause failure and damage to the robot or yield safety concerns for humans in close proximity.
16
+
17
+ Development of machine learning approaches require a facility to collect large datasets. However, collection on scale is non-trivial. Learning from demonstration [12] is a popular technique for endowing robots with new skills where a human provides examples. According to one paradigm, kinaesthetic teaching (e.g. [13]), the human interacts directly with the robot. However, this method requires a physical system which is cumbersome and has potential safety concerns. A second approach utilizes teleoperation (e.g. [14]), which has the benefit that the human operator can either interact with a simulated robot or at a distance with the physical system (ensuring safety). Two key considerations for our framework are that the virtual world can include a human model via telep-resence and that the human operator can experience the virtual forces generated by the simulator, through interface to haptic devices. Furthermore, to ease issues when deploying methods on robot hardware, we provide software features to easily map targets to the robot control commands.
18
+
19
+ In robotics, the system's complexity, need for several sub-processes, and multi-machine operations motivate a modular system design and message parsing functionality [15, 16, 17]. There are many packages for the Robot Operating System (ROS) [16] integrating useful functionalities for research and commercial systems, including useful data structures, control interfaces, inverse kinematics (IK) and motion planning, perception tools, etc. $\left\lbrack {{18},{19},{20},{21},{22}}\right\rbrack$ . Additionally, various visualizers $\left\lbrack {{23},{24},{25}}\right\rbrack$ and physics simulators $\left\lbrack {{26},{27},{28},{29},{30}}\right\rbrack$ allow for the development and testing of algorithms, prior to execution on robot hardware. However, popular and reliable libraries for contact-rich scenarios, such as PyBullet [29], Drake [28], and MuJoCo [27], lack integration with the ROS framework.
20
+
21
+ < g r a p h i c s >
22
+
23
+ Figure 1: Outline of the proposed framework for bridging a reliable contact simulator within the ROS ecosystem, and additional HRI interfaces. Colored boxes indicate our framework.
24
+
25
+ § 1.1 CONTRIBUTIONS
26
+
27
+ In this paper, we propose
28
+
29
+ * A framework (Figure 1) for simulating manipulation scenarios, allowing for seamless collection of contact-rich data and human demonstrations within a simulated environment. The framework includes
30
+
31
+ * full physics simulation utilizing PyBullet,
32
+
33
+ * sensor simulation (i.e. robot joint force-torque and point cloud),
34
+
35
+ * integration with the ROS ecosystem,
36
+
37
+ * several teleoperation interfaces (e.g. keyboard, mouse, joystick, 6D-mouse, haptic device) and robots (e.g. Kawada Nextage humanoid, KUKA LWR robot arm, Talos humanoid, Kinova robot arm, and dual-arm KUKA IIWA), and
38
+
39
+ * easy integration with robot hardware.
40
+
41
+ * Several use-cases to demonstrate the capabilities and usefulness of the framework, and full documentation (see supplementary material).
42
+
43
+ Our framework exploits modularity and extensibilty by implementing several ROS nodes and object types using class hierarchy design patterns. Furthermore, we include additional features such as: utilities for Model Predictive Control (MPC) design and development, motion planning, safe robot operation (for ensuring safety limits are satisfied before commanding the real robot), and inverse kinematics. Several recent works use the proposed framework [31, 32, 33, 34, 35].
44
+
45
+ § 2 PROPOSED FRAMEWORK
46
+
47
+ To enable easy prototyping, implementation, and integration with hardware we implement several tools/features in the framework. This section describes these and our design decisions.
48
+
49
+ § 2.1 FRAMEWORK FEATURES
50
+
51
+ Figure 1 shows an overview for our framework highlighting it as a central interface between the ROS ecosystem and PyBullet, human interaction, IK solvers, and real robots. The main features of the framework are listed as follows.
52
+
53
+ 1. Online, full-physics simulation using a reliable contact simulator. The framework relies on PyBullet to enable well established contact simulation for rigid/deformable bodies.
54
+
55
+ 2. Integration with the ROS ecosystem. Robot simulation and visualization of real robots/objects (utilizing sensing) are integrated via ROS. Furthermore, this enables (i) ROS packages to be integrated with PyBullet and (ii) a straightforward way to port developed algorithms to real systems.
56
+
57
+ 3. Several interfaces enabling HRI with virtual worlds and telepresence. We provide facilities for human's to provide examples in a simulated environment via several popular interfaces (including haptic devices).
58
+
59
+ 4. Modular and extensible design. Our framework adopts a modular (i.e. several ROS nodes) and highly extensible design paradigm (i.e. class hierarchy) using the Python programming language. This makes it easy to quickly develop new features for the framework.
60
+
61
+ 5. Data collection with standard ROS tools. Since the framework provides an interface to ROS, we can leverage common tools for data collection such as ROS bags [36] and data processing to common formats in machine learning applications, i.e. rosbag_pandas [37].
62
+
63
+ 6. Integration with robot and sensing hardware. Tools are provided to easily remap the virtual system to physical hardware and integrate real sensing apparatus in the PyBullet simulation (e.g. vicon).
64
+
65
+ § 2.2 FULL-PHYSICS CONTACT-RICH SIMULATION
66
+
67
+ Several simulators exist for full-physics simulation; e.g. Gazebo ${}^{1}$ with ODE [25,26], PyBullet [29], Drake [28], and MuJoCo [27]. Reliably simulating contact between several bodies complicates the models. The simulators PyBullet, Drake, and MuJoCo have been identified as reliable for modeling real world impacts [38].
68
+
69
+ We chose to use PyBullet [29] since it is free and open source, a well-known library with an active community, easy to install, well documented, and is in Python. To ensure our framework is extensible, we develop several classes that interface with PyBullet and establish communication links with ROS utilizing publishers, subscribers, timers, and services.
70
+
71
+ § 2.3 ROBOTS AND VIRTUAL WORLDS BUILDING
72
+
73
+ Our framework provides several tools to build virtual worlds. The main ROS-PyBullet Interface node must specify a main YAML configuration file containing a list of robots/objects to load into PyBullet, parameters, RGB-D sensor configuration, and visualizer options - robots/objects can also be added/removed through ROS services. Various examples of robots/tasks are shown in Figure 2. Several object types (Section 2.3.1) were developed with different interaction properties and various communication channels with ROS. The specification for each object is defined in separate YAML files. See the documentation provided in the supplementary material for a full list of parameters for the configuration files, ROS topics published/subscribed, and ROS services provided.
74
+
75
+ § 2.3.1 PYBULLET OBJECTS
76
+
77
+ Robot Incorporating robots in our framework is simple. In a configuration file the user will specify the URDF file name, and several parameters (e.g. base position, inital joint configuration, etc). The robot base frame with respect to the world frame (named rpbi/world) is set by a transform broadcast using the ROS TF library [18]. The interface optionally publishes the robot joint/link states to ROS. Mobile and floating-base robots are also supported by our framework given an initial state (i.e. pose and linear/angular velocity). ROS services expose PyBullet's IK features, allow the user to move the the robot to a given joint/end-effector state ${}^{2}$ , and return robot information (e.g. joint/link names, number of degrees of freedom, etc).
78
+
79
+ Visual robot A visualization of a robot, that does not interact with the PyBullet environment, can be instantiated by setting the is_visual_robot parameter to True in the robot configuration file. The main utility of this feature is to allow the user to map a real robot state to the PyBullet environment enabling them to easily compare the real robot with a simulated robot representing the target configuration. Only the robot information and IK services are available for visual robots.
80
+
81
+ Visual object A common requirement for simulators is to visualize objects. In PyBullet, these objects do not affect other bodies in the scene, nor react to those. Visual objects are listed in the
82
+
83
+ ${}^{1}$ An incomplete Bullet interface for Gazebo exists. To the best of the authors knowledge, it is deprecated.
84
+
85
+ ${}^{2}$ When an end-effector state is given, the corresponding joint state is found using PyBullet’s IK features.
86
+
87
+ < g r a p h i c s >
88
+
89
+ Figure 2: Examples of the ROS-PyBullet Interface: Top-row (left-right): a Kinova arm reaching for a cup, a Kuka LWR arm simulating a wiping task, and a human model waving. Bottom-row (left-right) a Talos humanoid robot reaching for a target while maintaining balance, the Nextage robot simulating a smart factory scenario, and simulated RGB-D data visualized in RViz.
90
+
91
+ main configuration file under the parameter visual_objects. A visual object was used to visualize the real pushing box (tracked with Vicon) in [33].
92
+
93
+ Collision object Modeling static objects, that cause momentum changes for other bodies upon collision, such as floors, ceilings, and walls are often necessary. These objects are listed under the parameter collision_objects in the main configuration. During the development of the experiments presented in [31], a collision object was used to represent a surface.
94
+
95
+ Dynamic object Objects whose pose evolution is completely determined by the simulator's physics engine are a key requirement for development of control algorithms. Dynamic objects, listed as dynamic_objects in the main configuration file, can be used inside PyBullet to simulate an object. A simulated pushing box was used to develop the controller in [33].
96
+
97
+ Soft object All previously described object types are rigid bodies. We also provide an interface to PyBullet deformable objects. These are similar to dynamic objects in that their evolution is defined by PyBullet and can be specified using the soft_objects parameter.
98
+
99
+ Load from URDF Finally, robots and objects can also be loaded directly into the PyBullet environment using the urdfs parameter. The evolution of these objects are defined by PyBullet. However, since the usage of these objects is ambiguous their communication with ROS is limited.
100
+
101
+ § 2.3.2 SENSOR SIMULATION
102
+
103
+ Many control, and planning algorithms rely on sensory feedback. Integrating multiple sensory inputs, such as tactile and vision, is underdeveloped in robotic manipulation [39]. To enable future research in realistic simulated environments, we provide an interface to several sensing modalities.
104
+
105
+ $\mathbf{F}/\mathbf{T}$ sensor Through the configuration file for the robot it is possible to instantiate a simulated force-torque sensor attached to any joint on the robot. These virtual sensors publish joint reaction forces, read from PyBullet, as ROS wrench-stamped messages at a user-defined sampling frequency.
106
+
107
+ RGB-D camera Color and depth perception is a key sensing capability for object state estimation in contact-rich manipulation tasks. An RGB-D camera can be instantiated and attached to a frame through the main configuration file.
108
+
109
+ The color and depth images (image) are published together with the intrinsic camera parameters (camera_info). The camera intrinsic parameters are derived from the OpenGL projection matrix and can be used to back-project the images to a colored point cloud (Figure 4). This is natively supported in ROS, e.g. via the RViz DepthCloud plugin or via the rgbd_launch package. Optionally, the interface can compute and publish the point cloud data directly.
110
+
111
+ On a discrete GPU (NVIDIA GeForce GTX 1650 Mobile) this will achieve about ${27}\mathrm{\;{Hz}}$ , while on an integrated GPU (Intel UHD Graphics 630) this will reduce to about ${18}\mathrm{\;{Hz}}$ .
112
+
113
+ < g r a p h i c s >
114
+
115
+ Figure 4: RGB-D observation of a scene with two objects: (a) colour image, (b) depth image, (c) projected colour point cloud in RViz.
116
+
117
+ § 2.4 HUMAN INTERACTION
118
+
119
+ Developing contact-aware algorithms is a key aspect of future work for the robotics community [39,40]. Clearly there are safety concerns for HRI tasks. Simulation is a necessary step in any development cycle involving robots. This means the only way a human can interact with the virtual environment is through some interface (e.g. haptic device). To remedy this issue, we have developed a plugin for several human interfaces so that the human can be realized in the virtual environment and also receive virtual feedback from that environment.
120
+
121
+ Operator node Several common human interfaces and their drivers are provided in our framework out-of-the-box; these include: keyboard, mouse, joystick, space mouse, and a haptic device. Mapping the interface signals to a control space is non-trivial [41]. Choosing this mapping is simplified by allowing the user to specify the teleoperation mode in the launch file. We provide ROS nodes in the operator_node package that take raw interface signals as input and map them to a control space of the user's specification.
122
+
123
+ Logging signals There are several advantages to an intermediary node mapping driver signals to operator commands. First, modularity allows the user to easily swap out interfaces/mappings to compare modes. Second, methods utilizing moving horizon estimation (e.g. [31]) need to track a window of signals; we provide this functionality in the operator_interface_logger node. Third, such a structure enables collection/comparison of data streams using ROS bags, removing the need for extensive post-processing.
124
+
125
+ Scale node Several systems utilize joysticks; often the scaled value of a joystick axis defines a velocity in certain dimensions [42, 43]. We provide scale_node.py, which appropriately orders and scales the interface's axes.
126
+
127
+ Isometric node Point-mass systems are common models for teleoperation, e.g. [31, 32]. The individual axes of the interface are often in the range $[-1, 1]$. If we simply scale the joystick axes as before, the magnitude of the maximum velocity is not uniform across interface states (e.g. it is larger along the diagonals). We provide isometric_node.py, which ensures the maximum velocity magnitude is isometric.
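One simple way to achieve this, sketched below as an idea rather than the node's exact implementation, is to cap the norm of the commanded velocity rather than each axis independently.

```python
# Sketch: map raw interface axes in [-1, 1]^n to a velocity whose maximum
# magnitude is the same in every direction (isometric maximum speed).
import numpy as np

def isometric_command(axes: np.ndarray, v_max: float) -> np.ndarray:
    norm = np.linalg.norm(axes)
    if norm > 1.0:
        axes = axes / norm  # project onto the unit ball
    return v_max * axes
```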
128
+
129
+ § 2.5 INTERFACING WITH REAL HARDWARE
130
+
131
+ Interfacing with hardware is straightforward using our framework. Each object can publish/broadcast its state in several formats (e.g. joint states, float arrays, transforms, wrenches). This means the simulator can act, at development time, as the real system. When porting to the real hardware we provide nodes that remap the current system states to the required robot/hardware drivers: see the remap_joint_state_to_floatarray and remap_joint_state nodes in the custom_ros_tools package. Setup for these is as simple as remapping topics in a launch file or enabling a ROS re-mapper. The framework can also visualize the current state of physical robots/objects in PyBullet. This is very useful when debugging software and hardware.
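A minimal sketch of such a remapping node is given below; the actual remap_joint_state_to_floatarray node in custom_ros_tools may differ in details (e.g. joint ordering), and the topic names here are hypothetical.

```python
# Sketch: republish sensor_msgs/JointState positions as a flat float array,
# a format expected by some robot drivers. Topic names are hypothetical.
import rospy
from sensor_msgs.msg import JointState
from std_msgs.msg import Float64MultiArray

class JointStateToFloatArray:
    def __init__(self) -> None:
        self.pub = rospy.Publisher("/robot/command", Float64MultiArray, queue_size=1)
        rospy.Subscriber("/rpbi/target_joint_state", JointState, self.callback)

    def callback(self, msg: JointState) -> None:
        self.pub.publish(Float64MultiArray(data=list(msg.position)))

if __name__ == "__main__":
    rospy.init_node("remap_joint_state_to_floatarray_sketch")
    JointStateToFloatArray()
    rospy.spin()
```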
132
+
133
+ § 2.6 ADDITIONAL UTILITIES
134
+
135
+ We also provide several utilities to facilitate development and safe robot operation. These utilities are described below.
136
+
137
138
+
139
+ Figure 5: A human interacting with a virtual world. (a) The experimental setup where the human interacts with the virtual world using a haptic device. (b) The Kuka LWR robot in PyBullet about to interact with a static collision object, including the coordinate frames (the frames are not presented to the user). (c) The force-feedback evolution that is rendered to the human via the haptic device.
140
+
141
+ Model Predictive Control When developing MPC methods [31, 33], it is useful to step slowly through individual MPC iterations. Start, stop, and step ROS services are provided that allow the user to easily debug MPC controllers, further facilitated by a GUI (see rpbi_controls_node.py in the rpbi_utils package).
142
+
143
+ Inverse Kinematics Inverse kinematics is a key requirement for robotic systems, and several established libraries exist [44, 45, 46, 21]. We provide a standardized interface ik_ros that allows a user to easily swap out and compare different solvers. Several solvers are interfaced out-of-the-box [45, 21, 44, 29]. The implementation is extensible so that additional solvers can be easily included.
144
+
145
+ Interpolation Control often requires a finer time resolution than planning. In this case, interpolation between knot points is necessary. The framework provides interpolation_node.py in the rpbi_utils package, which performs interpolation of planned trajectories.
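For illustration, a planned joint trajectory given at coarse knot points can be resampled at the control rate with a cubic spline; this is a sketch of the idea, not necessarily the node's exact implementation.

```python
# Sketch: resample a coarsely planned trajectory at the controller's time resolution.
import numpy as np
from scipy.interpolate import CubicSpline

t_knots = np.array([0.0, 0.5, 1.0, 1.5])        # planner knot times [s]
q_knots = np.random.uniform(-1.0, 1.0, (4, 7))  # joint positions at the knots (7-DoF arm)

spline = CubicSpline(t_knots, q_knots, axis=0)
t_ctrl = np.arange(0.0, 1.5, 1.0 / 200.0)       # 200 Hz control rate
q_ctrl = spline(t_ctrl)                         # interpolated joint targets
```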
146
+
147
+ Time-sync ROS with PyBullet Synchronizing time between processes and a simulated time can be useful. We provide an option so that the user can synchronize the ROS clock with PyBullet's simulation time.
148
+
149
+ Safe robot operation Safely operating robots is of paramount importance, especially when conducting HRI experiments. We provide the safe_robot package, which ensures safe robot motions by acting as a guard between target and commanded states: every target is checked (e.g. against link and joint position/velocity limits, and self-collision) prior to being commanded on the real system.
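The kind of check performed by such a guard can be sketched as follows; this is illustrative only, and the safe_robot package implements additional checks such as self-collision.

```python
# Sketch: accept a target joint state only if it respects position and velocity limits.
import numpy as np

def is_safe(q_target: np.ndarray, q_current: np.ndarray, dt: float,
            q_min: np.ndarray, q_max: np.ndarray, dq_max: np.ndarray) -> bool:
    within_position_limits = np.all(q_target >= q_min) and np.all(q_target <= q_max)
    within_velocity_limits = np.all(np.abs(q_target - q_current) / dt <= dq_max)
    return bool(within_position_limits and within_velocity_limits)
```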
150
+
151
+ § 3 USE-CASES
152
+
153
+ This section describes four use-cases highlighting the features of the proposed framework: (i) human interaction with a virtual world, (ii) learning from demonstration using dynamic movement primitives, (iii) full-body telepresence, and (iv) hardware realization. The code for running examples (i) and (ii) has been made fully open source. We plan to make the driver for the Xsens suit, used in (iii), open source along with an example.
154
+
155
+ § 3.1 HUMAN INTERACTION WITH VIRTUAL WORLDS
156
+
157
+ Developing robust learning and control techniques in contact-rich scenarios utilizing human input requires the human to interact with the virtual world. Prototyping contact-rich machine learning and optimization-based methods with realistic interaction sequences therefore requires such a haptic interface and a simulator capable of generating realistic force feedback.
158
+
159
+ In this use-case, we present the user with a haptic interface, a simulated Kuka LWR robot arm, and a static collision box object (Figure 5a). Using the interface, the user controls a target position defined in the ${z}_{W}$ axis that is constrained in the ${x}_{W},{y}_{W}$ axes (Figure 5b). The task for the robot is to minimize the distance between the end-effector position and the target while keeping the ${z}_{E}$ and ${z}_{W}$ axes aligned. The joint motion is generated using the IK features in PyBullet; we interface with
160
+
161
162
+
163
+ Figure 6: Pipeline for the learning from demonstration use-case.
164
+
165
+ this functionality using the ik_ros package described in Section 2.6. A simulated force-torque sensor is attached to the robot at the wrist joint. When the end-effector comes into contact with the static collision object, the force measured by the sensor in the ${z}_{E}$ axis is rendered to the user via force-feedback. The evolution of the force feedback for a single interaction with the box object is shown in Figure 5c. To run this example, attach a haptic device (3D Systems Touch X), and execute the command roslaunch rpbi_examples human_interaction.launch.
166
+
167
+ § 3.2 LEARNING FROM DEMONSTRATION
168
+
169
+ Dynamic movement primitives (DMPs) [47] are a widely used mathematical formulation for modelling motor control of biological systems. Over recent years they have become a key component of learning from demonstration [48]. Many packages, examples, and code exist for learning and executing DMPs; several have been integrated in ROS. To highlight the flexibility and potential for learning from demonstration utilizing our framework, we leverage a standard ROS package for learning DMPs [49].
170
+
171
+ The goal in this section is to demonstrate how to learn a DMP from a teleoperated demonstration using our framework. In this use-case, the user interfaces with the system using the keyboard. A Kuka LWR robot arm is presented in PyBullet, and controlled in position control mode (Figure 6). The interface commands $h$ are mapped to end-effector velocity in two dimensions. We use the EXOTica [21] plugin in the ik_ros package link-anonymized to perform inverse kinematics; the box and end-effector states ${x}_{d},{u}_{d}$ are saved as the human demonstration. The goal is for the human to demonstrate how to push a box from a starting location ${x}_{0}$ to the goal position ${x}_{g}$ ; this behavior is then learned from the human demonstration ${u}_{d}$ as a DMP $\widehat{\theta }$ . When the DMP is executed, a random starting location for the end-effector is chosen and the DMP is used to plan a motion $\widehat{x},\widehat{u}$ . The data can be collected using standard ROS tools and processed into a Pandas data frame for analysis. To run this example, open a terminal and execute roslaunch rpbi_examples lfd.launch.
172
+
173
+ § 3.3 FULL-BODY VIRTUAL-PRESENCE
174
+
175
+ In this use-case, we show that not only can we set up various teleoperation examples for the development of learning and control, but we can also integrate a realization of a virtual human that can interact with the PyBullet environment. We equip a human with an Xsens suit and publish the 3D positions of each human body part into ROS as transformations. Subsequently, we align each link of the human model in PyBullet with the positions of each body part of the human to realize a full-body virtual-presence of a humanoid figure. The setup for this use-case is shown via an assistive dressing scenario in Figure 7a; full details of the work can be found in [35].
176
+
177
+ § 3.4 INTEGRATION WITH HARDWARE AND MPC
178
+
179
+ A key feature of our framework is its ability to easily integrate with robot hardware and develop methods for MPC [31, 33]. In this use-case we highlight this feature of our framework using a pushing task deployed on the Kawada Nextage humanoid robot. The setup is shown in Figure 7c. A typical development cycle for research is to develop first in simulation, and then port the work to the real system. Due to the complexity of robotic systems, data collection, hardware issues, etc., the latter step can be quite time-consuming. Our goal during development of the framework was to minimize this difficulty. We developed the facility to remap target joint states to several formats required by real systems in our lab, and several object types that can interface with real sensors (e.g. Vicon, AprilTags) or broadcast transforms (similar to a real object equipped with Vicon markers or AprilTags). The interface makes it simple to swap between testing/prototyping the system in simulation, and deploying on the real system - our demonstration reduces to setting a flag (real_robot = True or False). Furthermore, this use-case highlights the utility of the time-stepping feature of the interface for developing MPC algorithms, described in Section 2.6. Typically this can only be achieved in simulation; however, since our system maps the simulator state to the real robot and can track objects using online sensing (e.g. Vicon in our case), the framework can execute an MPC iteration on the real system at the click of a button. This significantly reduces the development time for laborious tasks such as parameter tuning.
180
+
181
182
+
183
+ Figure 7: Integrating robot hardware and sensing. (a) and (b) Real-time virtual-presence of a human in the simulation environment during assistive dressing. (a) The configuration of the human is sensed with the Xsens suit. (b) A humanoid figure that corresponds to the human is shown, along with a green visual object (Section 2.3.1) used to illustrate the estimated region of the occluded human elbow. (c) The Kawada Nextage humanoid robot performing a pushing task.
184
+
185
+ § 4 LIMITATIONS
186
+
187
+ We implemented the framework using Python, a popular programming language with numerous libraries, making it suitable for our goals of easy prototyping and extensibility. Despite Python being unsuitable for real-time systems, the robotics community has broadly adopted it as the language for implementing robotic experiments. Furthermore, we have found no latency issues in any of our experimental setups, running at frequencies of up to 200 Hz.
188
+
189
+ Currently, the framework only supports ROS Noetic; thus the only way to run the framework with ROS2 is via the ROS1 bridge [50]. Porting the framework to ROS2 is, at the time of writing, in development.
190
+
191
+ § 5 CONCLUSIONS
192
+
193
+ In this paper, we have proposed a framework for simulating/collecting data for contact-rich manipulation scenarios including: full physics simulation using PyBullet (known for reliable impact/contact modeling), integration with the ROS ecosystem, and several teleoperation interfaces and robots. Furthermore, the framework easily and demonstrably interfaces with robot hardware. We have specifically designed the implementation to exploit modularity and extensibility and to be highly flexible and easily developed.
194
+
195
+ We hope students, researchers, and industry will make use of this framework to facilitate the development of their control algorithms and machine learning approaches in scenarios involving contact. The supplementary material contains full documentation. The main code base is open source at link-anonymized, which contains several examples, documentation, and videos. Dependencies are linked to from the documentation and system requirements are detailed. The framework is released under the LGPL license. The suggested setup is ROS Noetic with Python 3 on Ubuntu 20.04.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/AmPeAFzU3a4/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,321 @@
1
+ # MIRA: Mental Imagery for Robotic Affordances
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Humans form mental images of 3D scenes to support counterfactual imagination, planning, and motor control. Our abilities to predict the appearance and affordance of the scene from previously unobserved viewpoints aid us in performing manipulation tasks (e.g., 6-DoF kitting) with a level of ease that is currently out of reach for existing robot learning frameworks. In this work, we aim to build artificial systems that can analogously plan actions on top of imagined images. To this end, we introduce Mental Imagery for Robotic Affordances (MIRA), an action reasoning framework that optimizes actions with novel-view synthesis and affordance prediction in the loop. Given a set of 2D RGB images, MIRA builds a consistent 3D scene representation, through which we synthesize novel orthographic views amenable to pixel-wise affordance prediction for action optimization. We illustrate how this optimization process enables us to generalize to unseen out-of-plane rotations for 6-DoF robotic manipulation tasks given a limited number of demonstrations, paving the way toward machines that autonomously learn to understand the world around them for planning actions.
12
+
13
+ Keywords: Neural Radiance Fields, Rearrangement, Robotic Manipulation
14
+
15
+ ## 1 Introduction
16
+
17
+ Suppose you are shown a small, unfamiliar object and asked if it could fit through an M-shaped slot. How might you solve this task? One approach would be to "rotate" the object in your mind's eye and see if, from some particular angle, the object’s profile fits into an $\mathrm{M}$ . To put the object through the slot would then just require orienting it to that particular imagined angle. In their famous experiments on "mental rotation", Shepard & Metzler argued that this is the approach humans use when reasoning about the relative poses of novel shapes [1]. Decades of work in psychology have documented numerous other ways that "mental images", i.e. pictures in our heads, can aid human cognition [2]. In this paper, we ask: can we give robots a similar ability, where they use mental imagery to aid their spatial reasoning?
18
+
19
+ Fortunately, the generic ability to perform imagined translations and rotations of a scene, also known as novel view synthesis, has seen a recent explosion of research in the computer vision and graphics community $\left\lbrack {3,4,5}\right\rbrack$ . Our work builds in particular upon Neural Radiance Fields (NeRFs) [6], which can render what a scene would look like from any camera pose. We treat a NeRF as a robot's "mind's eye", a virtual camera it may use to imagine how the scene would look were the robot to reposition itself. We couple this ability with an affordance model [7], which predicts, from any given view of the scene, what actions are currently afforded. Then the robot must just search, in its imagination, for the mental image that best affords the action it wishes to execute, then execute the action corresponding to that mental image.
20
+
21
+ We test this framework on 6-DoF rearrangement tasks [8], where the affordance model simply predicts, for each pixel in a given camera view, what is the action value of picking (or placing) at that pixel's coordinates. Using NeRF as a virtual camera for this task has several advantages over prior works which used physical cameras:
22
+
23
+ - Out-of-plane rotation. Prior works have applied affordance maps to 2-dimensional top-down camera views, allowing only the selection of top-down picking and placing actions $\left\lbrack {7,9,{10}}\right\rbrack$ . We instead formulate the pick and place problem as an action optimization process that searches across different novel synthesized views and their affordances of the scene. We demonstrate that this optimization process can handle the multi-modality of picking and placing while naturally supporting actions that involve out-of-plane rotations.
24
+
25
+ ![01963fe6-3da5-717f-9e17-d937c587fab7_1_319_208_1121_363_0.jpg](images/01963fe6-3da5-717f-9e17-d937c587fab7_1_319_208_1121_363_0.jpg)
26
+
27
+ Figure 1: Overview of MIRA. (a) Given a set of multi-view RGB images as input, we optimize a neural radiance field representation of the scene via volume rendering with perspective ray casting. (b) After the NeRF is optimized, we perform volume rendering with orthographic ray casting to render the scene from $V$ viewpoints. (c) The rendered orthographic images are fed into the policy for predicting pixel-wise action-values that correlate with picking and placing success. (d) The pixel with the highest action-value is selected, and its estimated depth and associated view orientation are used to parameterize the robot's motion primitive.
28
+
29
+ - Orthographic ray casting. A NeRF trained with images from consumer cameras can be used to synthesize novel views from novel kinds of cameras that are more suitable to action reasoning. Most physical cameras use perspective projection, in which the apparent size of an object in the image plane is inversely proportional to that object's distance from the camera - a relationship that any vision algorithm must comprehend and disentangle. NeRF can instead create images under other rendering procedures; we show that orthographic ray casting is particularly useful, which corresponds to a non-physical "camera" that is infinitely large and infinitely distant from the scene. This yields images in which an object's size in the image plane is invariant to its distance from the camera, and its appearance is equivariant with respect to translation parallel to the image plane. In essence, this novel usage of NeRF allows us to generate "blueprints" for the scene that complement the inductive biases of algorithms that encode translational equivariance (such as ConvNets).
30
+
31
+ - RGB-only. Prior rearrangement methods [11, 12, 13] commonly require 3D sensors (e.g. via structured light, stereo, or time-of-flight), and these are error-prone when objects contain thin structures or are composed of specular or semi-transparent materials-a common occurrence. These limitations drastically restrict the set of tasks, objects, and surfaces these prior works can reason over.
32
+
33
+ We term our method Mental Imagery for Robotic Affordances, or MIRA. To test MIRA, we perform experiments in both simulation and the real world. For simulation, we extend the Ravens [9] benchmark to include tasks that require 6-DoF actions. Our model demonstrates superior performance to existing state-of-the-art methods for object rearrangement $\left\lbrack {{14},9}\right\rbrack$ , despite not requiring depth sensors. Importantly, the optimization process with novel view synthesis and affordance prediction in the loop enables our framework to generalize to out-of-distribution object configurations, where the baselines struggle. In summary, we contribute (i) a framework that uses NeRFs as the scene representation to perform novel view synthesis for precise object rearrangement, (ii) an orthographic ray casting procedure for NeRFs rendering that facilitates the policy's translation equivariance, (iii) an extended benchmark of 6-DoF manipulation tasks in Ravens [9], and (iv) empirical results on a broad range of manipulation tasks, validated with real-robot experiments.
34
+
35
+ ## 2 Related Works
36
+
37
+ ### 2.1 Vision-based Manipulation.
38
+
39
+ Object-centric. Classical methods in visual perception for robotic manipulation mainly focus on representing instances with 6-DoF poses [15,16,17,18,19,20,21]. However, 6-DoF poses cannot represent the states of deformable objects or granular media, and cannot capture large intra-category variations of unseen instances [11]. Alternative methods that represent objects with dense descriptors [22, 23, 24] or keypoints $\left\lbrack {{11},{25},{26},{27}}\right\rbrack$ improve generalization, but they require a dedicated data collection procedure (e.g., configuring scenes with single objects).
40
+
41
+ Action-centric. Recent methods based on end-to-end learning directly predict actions given visual observations $\left\lbrack {{28},{29},{30},{31},{32},{33}}\right\rbrack$ . These methods can potentially work with deformable objects or granular media, and do not require any object-specific data collection procedures. However, these methods are known to be sample inefficient and challenging to debug. Recently, several works [9, 13, 34, 35, 36, 37, 38] have proposed to incorporate spatial structure into action reasoning for improved performance and better sample efficiency. Among them, the closest work to ours is Song et al. [13] which relies on view synthesis to plan 6-DoF picking. Our work differs in that it 1) uses NeRF whereas [13] uses TSDF [39], 2) does not require depth sensors, 3) uses orthographic image representation, 4) does not directly use the camera pose as actions, and 5) shows results on rearrangement tasks that require both picking and placing.
42
+
43
+ ### 2.2 Neural Fields for Robotics
44
+
45
+ Neural fields have emerged as a promising tool to represent 2D images [40], 3D geometry [41, 42], appearance [6, 43, 44], touch [45], and audio [46, 47]. They offer several advantages over classic representations (e.g., voxels, point clouds, and meshes) including reconstruction quality, and memory efficiency. Several works have explored the usage of neural fields for robotic applications including localization [48, 49], SLAM [50, 51, 52], navigation [53], dynamics modeling [54, 55, 56, 57], and reinforcement learning [58]. For robotic manipulation, GIGA [59] jointly trains a grasping network and an occupancy network for synergy. Dex-NeRF [60] infers the geometry of transparent objects with NeRF and determines the grasp poses with Dex-Net [30]. NDF [12] uses the features of occupancy networks [42] as object descriptors for few-shot imitation learning. NeRF-Supervision [61] uses NeRF as a dataset generator to learn dense object descriptors for picking.
46
+
47
+ ## 3 Method
48
+
49
+ Our goal is to predict actions ${a}_{t}$ , given RGB-only visual observations ${o}_{t}$ , and trained from only a limited number of demonstrations. We parameterize our action space with two-pose primitives ${a}_{t} = \left( {{\mathcal{T}}_{\text{pick }},{\mathcal{T}}_{\text{place }}}\right)$ , which are able to flexibly parameterize rearrangement tasks [9]. This problem is challenging due to the high degrees of freedom of ${a}_{t}$ (12 degrees of freedom for two full SE(3) poses), a lack of information about the underlying object state (such as object poses), and limited data. Our method (illustrated in Fig. 1) factorizes action reasoning into two modules: 1) a continuous neural radiance field that can synthesize virtual views of the scene at novel viewpoints, and 2) an optimization procedure which optimizes actions by predicting per-pixel affordances across different synthesized virtual pixels. We discuss these two modules in Sec. 3.1 and Sec. 3.2 respectively, followed by training details in Sec. 3.3.
50
+
51
+ ### 3.1 Scene Representation with Neural Radiance Field
52
+
53
+ To provide high-fidelity novel-view synthesis of virtual cameras, we represent the scene with a neural radiance field (NeRF) [6]. For our purposes, a key feature of NeRF is that it renders individual rays (pixels) rather than whole images, which enables flexible parameterization of rendering at inference time, including camera models that are non-physical (e.g., orthographic cameras) and not provided in the training set.
54
+
55
+ To render a pixel, NeRF casts a ray $\mathbf{r}\left( t\right) = \mathbf{o} + t\mathbf{d}$ from some origin $\mathbf{o}$ along the direction $\mathbf{d}$ passing through that pixel on an image plane. In particular, these rays are cast into a field ${F}_{\Theta }$ whose input is a $3\mathrm{D}$ location $\mathbf{x} = \left( {x, y, z}\right)$ and unit-norm viewing direction $\mathbf{d}$ , and whose output is an emitted color $c = \left( {r, g, b}\right)$ and volume density $\sigma$ . Along each ray, $K$ discrete points ${\left\{ {\mathbf{x}}_{k} = \mathbf{r}\left( {t}_{k}\right) \right\} }_{k = 1}^{K}$ are sampled for use as input to ${F}_{\Theta }$ , which outputs a set of densities and colors ${\left\{ {\sigma }_{k},{\mathbf{c}}_{k}\right\} }_{k = 1}^{K} = {\left\{ {F}_{\Theta }\left( {\mathbf{x}}_{k},\mathbf{d}\right) \right\} }_{k = 1}^{K}$ . Volume rendering [62] with a numerical quadrature approximation [63] is performed using these values to produce the color $\widehat{\mathbf{C}}\left( \mathbf{r}\right)$ of that pixel:
56
+
57
+ $$
58
+ \widehat{\mathbf{C}}\left( \mathbf{r}\right) = \mathop{\sum }\limits_{{k = 1}}^{K}{T}_{k}\left( {1 - \exp \left( {-{\sigma }_{k}\left( {{t}_{k + 1} - {t}_{k}}\right) }\right) }\right) {\mathbf{c}}_{k},\;{T}_{k} = \exp \left( {-\mathop{\sum }\limits_{{{k}^{\prime } < k}}{\sigma }_{{k}^{\prime }}\left( {{t}_{{k}^{\prime } + 1} - {t}_{{k}^{\prime }}}\right) }\right) . \tag{1}
59
+ $$
60
+
61
+ ![01963fe6-3da5-717f-9e17-d937c587fab7_3_337_219_1133_333_0.jpg](images/01963fe6-3da5-717f-9e17-d937c587fab7_3_337_219_1133_333_0.jpg)
62
+
63
+ Figure 2: Perspective vs. Orthographic Ray Casting. (a) A 3D world showing two objects, with the camera located at the top. (b) The procedure of perspective ray casting and a perspective rendering of the scene. The nearby object is large, the distant object is small, and both objects appear "tilted" according to their position. (c) The procedure of orthographic ray casting and an orthographic rendering of the scene, which does not correspond to any real consumer camera, wherein the size and appearance of both objects are invariant to their distances and equivariant to their locations. By using NeRF to synthesize these orthographic images, which correspond to non-physical cameras, we are able to construct RGB inputs that are equivariant with translation.
64
+
65
+ where ${T}_{k}$ represents the probability that the ray successfully transmits to point $\mathbf{r}\left( {t}_{k}\right)$ . At the beginning of each pick-and-place, our system takes multi-view posed RGB images as input and optimizes $\Theta$ by minimizing a photometric loss ${\mathcal{L}}_{\text{photo }} = \mathop{\sum }\limits_{{\mathbf{r} \in \mathcal{R}}}\parallel \widehat{\mathbf{C}}\left( \mathbf{r}\right) - \mathbf{C}\left( \mathbf{r}\right) {\parallel }_{2}^{2}$ , using some sampled set of rays $\mathbf{r} \in \mathcal{R}$ , where $\mathbf{C}\left( \mathbf{r}\right)$ is the observed RGB value of the pixel corresponding to ray $\mathbf{r}$ in an input image. In practice, we use instant-NGP [64] to accelerate NeRF training and inference.
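Eq. (1) can be read directly as a discrete sum over the samples along a ray; the following numpy sketch implements this quadrature given per-sample densities, colors, and sample distances.

```python
# Sketch of the volume-rendering quadrature in Eq. (1).
import numpy as np

def render_ray(sigma: np.ndarray, color: np.ndarray, t: np.ndarray) -> np.ndarray:
    """sigma: (K,), color: (K, 3), t: (K+1,) sample distances along the ray."""
    delta = t[1:] - t[:-1]                # t_{k+1} - t_k
    alpha = 1.0 - np.exp(-sigma * delta)  # per-sample opacity
    trans = np.exp(-np.concatenate([[0.0], np.cumsum(sigma * delta)[:-1]]))  # T_k
    return np.sum((trans * alpha)[:, None] * color, axis=0)  # estimated pixel color
```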
66
+
67
+ Orthographic Ray Casting. In Fig. 2 we illustrate the difference between perspective and orthographic cameras. Though renderings from a NeRF ${F}_{\Theta }$ are highly realistic, the perspective ray casting procedure used by default in NeRF's volume rendering, which we visualize in Fig. 2(b), may cause scene content to appear distorted or scaled depending on the viewing angle and the camera's field of view - more distant objects will appear smaller in the image plane. Specifically, given a pixel coordinate $(u, v)$ and camera pose $(\mathbf{R}, \mathbf{t})$, NeRF forms a ray $\mathbf{r} = \left( {\mathbf{o},\mathbf{d}}\right)$ using the perspective camera model:
68
+
69
+ $$
70
+ \mathbf{o} = \mathbf{t},\;\mathbf{d} = \mathbf{R}\left\lbrack \begin{matrix} \left( {u - {c}_{x}}\right) /{f}_{x} \\ \left( {v - {c}_{y}}\right) /{f}_{y} \\ 1 \end{matrix}\right\rbrack . \tag{2}
71
+ $$
72
+
73
+ This model is a reasonable proxy for the geometry of most consumer RGB cameras, hence its use by NeRF during training and evaluation. However, the distortion and scaling effects caused by perspective ray casting degrade the performance of the downstream optimization procedure that takes as input the synthesized images rendered by NeRF, as we will demonstrate in our results (Sec. 4.1). To address this issue, we modify the rendering procedure of NeRF after it is optimized by replacing perspective ray casting with orthographic ray casting:
74
+
75
+ $$
76
+ \mathbf{o} = \mathbf{t} + \mathbf{R}\left\lbrack \begin{matrix} \left( {u - {c}_{x}}\right) /{f}_{x} \\ \left( {v - {c}_{y}}\right) /{f}_{y} \\ 0 \end{matrix}\right\rbrack ,\;\mathbf{d} = \mathbf{R}\left\lbrack \begin{array}{l} 0 \\ 0 \\ 1 \end{array}\right\rbrack . \tag{3}
77
+ $$
78
+
79
+ We visualize this procedure in Fig. 2(c). Orthographic ray casting marches parallel rays into the scene, so that each rendered pixel represents a parallel window of $3\mathrm{D}$ space. This property removes the dependence between an object's appearance and its distance to the camera: an object looks the same if it is either far or nearby. Further, as all rays for a given camera rotation $\mathbf{R}$ are parallel, this provides equivariance to the in-plane camera center ${c}_{x},{c}_{y}$ . These attributes facilitate downstream learning to be equivariant to objects’ 3D locations and thereby encourages generalization. As open-source instant-NGP [64] did not support orthographic projection, we implement this ourselves as a CUDA kernel (see Supp.).
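The difference between the two camera models in Eqs. (2) and (3) amounts to a few lines of ray construction; the numpy sketch below is independent of the CUDA implementation mentioned above.

```python
# Sketch of Eqs. (2) and (3): per-pixel ray origin o and direction d for a camera
# with rotation R (3x3), translation tr (3,), and intrinsics fx, fy, cx, cy.
import numpy as np

def perspective_ray(u, v, R, tr, fx, fy, cx, cy):
    o = tr
    d = R @ np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return o, d

def orthographic_ray(u, v, R, tr, fx, fy, cx, cy):
    o = tr + R @ np.array([(u - cx) / fx, (v - cy) / fy, 0.0])
    d = R @ np.array([0.0, 0.0, 1.0])
    return o, d
```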
80
+
81
+ Our decision of choosing an orthographic view of the scene draws inspiration from previous works that have used single-view orthographic scene representations $\left\lbrack {{16},9}\right\rbrack$ , but critically differs in the following two aspects: (a) we create orthographic scene representations by casting rays into a radiance field rather than by point-cloud reprojection, thereby significantly reducing image artifacts (see Supp.); (b) we rely on multi-view RGB images to recover scene geometry, instead of depth sensors.
82
+
83
+ ### 3.2 Policy Representation: Affordance Raycasts in a Radiance Field
84
+
85
+ To address ${SE}\left( 3\right)$ -parameterized actions, we formulate action selection as an optimization problem over synthesized novel-view pixels and their affordances. Using our NeRF-based scene representation, we densely sample $V$ camera poses around the workspace and render images ${\widehat{I}}_{{v}_{t}} = {F}_{\Theta }\left( {\mathcal{T}}_{{v}_{t}}\right)$ for each pose, $\forall {v}_{t} = 0,1,\cdots , V$ . One valid approach is to search for actions directly in the space of camera poses for the best ${\mathcal{T}}_{{v}_{t}}$ , but orders of magnitude computation may be saved by instead considering actions that correspond to each pixel within each image, and sharing computation between all pixels in the image (e.g., by processing each image with just a single pass through a ConvNet). This (i) extends the paradigm of pixel-wise affordances [7] into full 6-DOF, novel-view-enabled action spaces, and (ii) alleviates the search over poses due to translational equivariance provided by orthographic rendering (Sec. 3.1).
86
+
87
+ Accordingly, we formulate each pixel in each synthesized view as parameterizing a robot action, and we learn a dense action-value function $E$ which outputs per-pixel action values of shape ${\mathbb{R}}^{\mathrm{H} \times \mathrm{W}}$ given a novel-view image of shape ${\mathbb{R}}^{\mathrm{H} \times \mathrm{W} \times 3}$ . Actions are selected by simultaneously searching across all pixels $\mathbf{u}$ in all synthesized views ${v}_{t}$ :
88
+
89
+ $$
90
+ {\mathbf{u}}_{t}^{ * },{v}_{t}^{ * } = \mathop{\operatorname{argmin}}\limits_{{{\mathbf{u}}_{t},{v}_{t}}}E\left( {{\widehat{I}}_{{v}_{t}},{\mathbf{u}}_{t}}\right) ,\;\forall {v}_{t} = 0,1,\cdots , V \tag{4}
91
+ $$
92
+
93
+ where the pixel ${\mathbf{u}}_{t}^{ * }$ and the associated estimated depth $d\left( {\mathbf{u}}_{t}^{ * }\right)$ from NeRF are used to determine the $3\mathrm{D}$ translation, and the orientation of ${\mathcal{T}}_{{v}_{t}^{ * }}$ is used to determine the $3\mathrm{D}$ rotation of the predicted action. Our approach employs multiple strategies for equivariance: $3\mathrm{D}$ translational equivariance is in part enabled by orthographic raycasting and synergizes well with translationally-equivariant dense model architectures for $E$ such as ConvNets $\left\lbrack {{65},{66}}\right\rbrack$ , meanwhile 3D rotational equivariance is also encouraged, as synthesized rotated views can densely cover novel orientations of objects.
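Conceptually, the optimization in Eq. (4) is an argmin over a stack of dense action-value maps; the sketch below assumes the per-view value maps have already been predicted by $E$.

```python
# Sketch of Eq. (4): select the best (pixel, view) pair from dense action-value maps.
# values[v, i, j] = E(I_hat_v, u=(i, j)); lower values are better.
import numpy as np

def select_action(values: np.ndarray):
    v_star, i_star, j_star = np.unravel_index(np.argmin(values), values.shape)
    return (i_star, j_star), v_star  # best pixel u* and best view v*
```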
94
+
95
+ While the formulation above may be used to predict the picking pose ${\mathcal{T}}_{\text{pick }}$ and the placing pose ${\mathcal{T}}_{\text{place }}$ independently, intuitively the prediction of ${\mathcal{T}}_{\text{pick }}$ affects the prediction of ${\mathcal{T}}_{\text{place }}$ due to the latter’s geometric dependence on the former. We therefore decompose the action-value function into (i) picking and (ii) pick-conditioned placing, similar to prior work [9]:
96
+
97
+ $$
98
+ {\mathbf{u}}_{\text{pick }}^{ * },{v}_{\text{pick }}^{ * } = \mathop{\operatorname{argmin}}\limits_{{{\mathbf{u}}_{\text{pick }},{v}_{\text{pick }}}}{E}_{\text{pick }}\left( {{\widehat{I}}_{{v}_{\text{pick }}},{\mathbf{u}}_{\text{pick }}}\right) ,\;\forall {v}_{\text{pick }} = 0,1,\cdots , V \tag{5}
99
+ $$
100
+
101
+ $$
102
+ {\mathbf{u}}_{\text{place }}^{ * },{v}_{\text{place }}^{ * } = \mathop{\operatorname{argmin}}\limits_{{{\mathbf{u}}_{\text{place }},{v}_{\text{place }}}}{E}_{\text{place }}\left( {{\widehat{I}}_{{v}_{\text{place }}},{\mathbf{u}}_{\text{place }} \mid {\mathbf{u}}_{\text{pick }}^{ * },{v}_{\text{pick }}^{ * }}\right) ,\;\forall {v}_{\text{place }} = 0,1,\cdots , V \tag{6}
103
+ $$
104
+
105
+ where ${E}_{\text{place }}$ uses the Transport operation from Zeng et al. [9] to convolve the feature map of ${\widehat{I}}_{{v}_{\text{pick }}^{ * }}$ around ${\mathbf{u}}_{\text{pick }}^{ * }$ with the feature maps of ${\left\{ {\widehat{I}}_{{v}_{\text{place }}}\right\} }_{{v}_{\text{place }} = 1}^{V}$ for action-value prediction. We refer readers to [9] for details on this coupling.
106
+
107
+ ### 3.3 Training
108
+
109
+ We train the action-value function as an energy-based model (EBM) [67, 68, 69]. For each expert demonstration, we construct a tuple $\mathcal{D} = \left\{ {\widehat{I}}_{{v}_{\text{pick}}^{ * }},{\mathbf{u}}_{\text{pick}}^{ * },{\widehat{I}}_{{v}_{\text{place}}^{ * }},{\mathbf{u}}_{\text{place}}^{ * }\right\}$ , where ${\widehat{I}}_{{v}_{\text{pick}}^{ * }}$ and ${\widehat{I}}_{{v}_{\text{place}}^{ * }}$ are the synthesized images whose viewing directions are aligned with the end-effector’s rotations; ${\mathbf{u}}_{\text{pick}}^{ * }$ and ${\mathbf{u}}_{\text{place}}^{ * }$ are the best pixels in those views, annotated by experts. We use a Monte-Carlo log-likelihood training objective
110
+
111
+ ![01963fe6-3da5-717f-9e17-d937c587fab7_5_318_203_1170_234_0.jpg](images/01963fe6-3da5-717f-9e17-d937c587fab7_5_318_203_1170_234_0.jpg)
112
+
113
+ Figure 3: Simulation qualitative results. MIRA only requires RGB inputs and can solve different 6-DoF tasks: (a) hanging-disks, (b) place-red-in-green, (c) stacking-objects, and (d) block-insertion.
114
+
115
+ <table><tr><td>Task</td><td>precise placing</td><td>multi-modal placing</td><td>distractors</td><td>unseen colors</td><td>unseen objects</td></tr><tr><td>block-insertion</td><td>✓</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td></tr><tr><td>place-red-in-green</td><td>✘</td><td>✓</td><td>✓</td><td>✘</td><td>✘</td></tr><tr><td>hanging-disks</td><td>✓</td><td>✘</td><td>✘</td><td>✓</td><td>✘</td></tr><tr><td>stacking-objects</td><td>✓</td><td>✘</td><td>✘</td><td>✘</td><td>✓</td></tr></table>
116
+
117
+ Table 1: Simulation tasks. We extend Ravens [9] with four new 6-DoF tasks and summarize their associated challenges.
118
+
119
+ [70], where we draw pixels ${\left\{ {\widehat{\mathbf{u}}}_{j} \mid {\widehat{\mathbf{u}}}_{j} \neq {\mathbf{u}}_{\text{pick }}^{ * },{\widehat{\mathbf{u}}}_{j} \neq {\mathbf{u}}_{\text{place }}^{ * }\right\} }_{j = 1}^{{N}_{\text{neg }}}$ from randomly synthesized images ${\widehat{I}}_{\text{neg }}$ as negative samples. For brevity, we omit the subscript for pick and place and present the loss function that is used to train both action-value functions:
120
+
121
+ $$
122
+ \mathcal{L}\left( \mathcal{D}\right) = - \log p\left( {\mathbf{u}}^{ * } \mid \widehat{I},{\widehat{I}}_{\text{neg}},{\left\{ {\widehat{\mathbf{u}}}_{j}\right\} }_{j = 1}^{{N}_{\text{neg}}}\right) ,\;p\left( {\mathbf{u}}^{ * } \mid \widehat{I},{\widehat{I}}_{\text{neg}},{\left\{ {\widehat{\mathbf{u}}}_{j}\right\} }_{j = 1}^{{N}_{\text{neg}}}\right) = \frac{{e}^{-{E}_{\theta }\left( \widehat{I},{\mathbf{u}}^{ * }\right) }}{{e}^{-{E}_{\theta }\left( \widehat{I},{\mathbf{u}}^{ * }\right) } + \mathop{\sum }\limits_{{j = 1}}^{{N}_{\text{neg}}}{e}^{-{E}_{\theta }\left( {\widehat{I}}_{\text{neg}},{\widehat{\mathbf{u}}}_{j}\right) }} \tag{7}
123
+ $$
124
+
125
+ A key innovation of our objective function compared to previous works $\left\lbrack {9,{10},{36},{71}}\right\rbrack$ is the inclusion of negative samples ${\left\{ {\widehat{\mathbf{u}}}_{j}\right\} }_{j = 1}^{{N}_{\text{neg}}}$ from imagined views ${\widehat{I}}_{\text{neg }}$ . We study the effects of ablating negative samples in Sec. 4.1 and show that they are essential for successfully training action-value functions.
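In code, Eq. (7) reduces to a contrastive cross-entropy over one positive pixel and the negative pixels; the numpy sketch below assumes the corresponding energies have already been computed.

```python
# Sketch of Eq. (7): -log p(u* | ...) from the energy of the expert pixel (e_pos)
# and the energies of negative pixels drawn from other synthesized views (e_neg).
import numpy as np

def ebm_loss(e_pos: float, e_neg: np.ndarray) -> float:
    logits = -np.concatenate([[e_pos], e_neg])  # lower energy -> higher logit
    m = logits.max()
    log_z = m + np.log(np.sum(np.exp(logits - m)))  # numerically stable log-sum-exp
    return float(log_z - logits[0])
```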
126
+
127
+ ## 4 Results
128
+
129
+ We execute experiments in both simulation and real-world settings to evaluate the proposed method across various tasks.
130
+
131
+ ### 4.1 Simulation Experiments
132
+
133
+ Environment. We propose four new 6-DoF tasks based on Ravens [9] and use them as the benchmark. We show qualitative examples of these tasks in Fig. 3 and summarize their associated challenges in Table 1. All simulated experiments are conducted in PyBullet [72] using a Universal Robot UR5e with a suction gripper. The input observations for MIRA are 30 RGB images from different cameras pointing toward the center. For all the baselines, we additionally supply the corresponding noiseless depth images. Each image has a resolution of ${640} \times {480}$ . The camera has focal length $f = {450}$ and camera center $\left( {{c}_{x},{c}_{y}}\right) = \left( {{320},{240}}\right)$ .
134
+
135
+ Evaluation. For each task, we perform evaluations under two settings: in-distribution configures objects with random rotations $\left( {{\theta }_{x},{\theta }_{y} \in \left\lbrack {-\frac{\pi }{6},\frac{\pi }{6}}\right\rbrack ,{\theta }_{z} \in \left\lbrack {-\pi ,\pi }\right\rbrack }\right)$ . This is also the distribution we used to construct the training set. out-of-distribution instead configures objects with random rotations $\left( {{\theta }_{x},{\theta }_{y} \in \left\lbrack {-\frac{\pi }{4}, - \frac{\pi }{6}}\right\rbrack \cup \left\lbrack {\frac{\pi }{6},\frac{\pi }{4}}\right\rbrack ,{\theta }_{z} \in \left\lbrack {-\pi ,\pi }\right\rbrack }\right)$ . We note that these rotations are outside the training distribution and also correspond to larger out-of-plane rotations. Thus, this setting requires stronger generalization. We use a binary score ( 0 for failure and 1 for success) and report results on 100 evaluation runs for agents trained with $n = 1,{10},{100}$ demonstrations.
136
+
137
+ Baseline Methods. Although our method only requires RGB images, we benchmark against published baselines that additionally require depth images as inputs. Form2Fit [14] predicts the placing action by estimating dense descriptors of the scene for geometric matching. Transporter- ${SE}\left( 2\right)$ and Transporter- ${SE}\left( 3\right)$ are both introduced in Zeng et al. [9]. Although Transporter- ${SE}\left( 2\right)$ is not designed to solve manipulation tasks that require 6-DoF actions, its inclusion helps indicate what level of task success can be achieved on the shown tasks by simply ignoring out-of-plane rotations. Transporter- ${SE}\left( 3\right)$ predicts 6-DoF actions
138
+
139
+ by first using Transporter- ${SE}\left( 2\right)$ to estimate ${SE}\left( 2\right)$ actions, and then feeding them into a regression model to predict the remaining rotational $\left( {{r}_{x},{r}_{y}}\right)$ and translational (z-height) degrees of freedom. Additionally, we benchmark against a baseline, GT-State MLP, that assumes perfect object poses. It takes ground truth state (object poses) as inputs and trains an MLP to regress two ${SE}\left( 3\right)$ poses for ${\mathcal{T}}_{\text{pick }}$ and ${\mathcal{T}}_{\text{place }}$ .
140
+
141
+ <table><tr><td rowspan="2">Method</td><td colspan="3">block-insertion in-distribution-poses</td><td colspan="3">block-insertion out-of-distribution-poses</td><td colspan="3">place-red-in-greens in-distribution-poses</td><td colspan="3">place-red-in-greens out-of-distribution-poses</td></tr><tr><td>1</td><td>10</td><td>100</td><td>1</td><td>10</td><td>100</td><td>1</td><td>10</td><td>100</td><td>1</td><td>10</td><td>100</td></tr><tr><td>GT-State MLP</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>1</td><td>3</td><td>0</td><td>1</td><td>1</td></tr><tr><td>Form2Fit [14]</td><td>0</td><td>1</td><td>10</td><td>0</td><td>0</td><td>0</td><td>35</td><td>79</td><td>96</td><td>21</td><td>30</td><td>61</td></tr><tr><td>Transporter- ${SE}\left( 2\right) \left\lbrack 9\right\rbrack$</td><td>25</td><td>69</td><td>73</td><td>1</td><td>21</td><td>20</td><td>30</td><td>74</td><td>83</td><td>25</td><td>18</td><td>36</td></tr><tr><td>Transporter- ${SE}\left( 3\right) \left\lbrack 9\right\rbrack$</td><td>26</td><td>70</td><td>77</td><td>0</td><td>20</td><td>22</td><td>29</td><td>77</td><td>85</td><td>23</td><td>20</td><td>38</td></tr><tr><td>Ours</td><td>0</td><td>84</td><td>89</td><td>0</td><td>74</td><td>78</td><td>27</td><td>89</td><td>96</td><td>22</td><td>56</td><td>77</td></tr><tr><td rowspan="2">Method</td><td colspan="3">hanging-disks in-distribution-poses</td><td colspan="3">hanging-disks out-of-distribution-poses</td><td colspan="3">stacking-objects in-distribution-poses</td><td colspan="3">stacking-objects out-of-distribution-poses</td></tr><tr><td>1</td><td>10</td><td>100</td><td>1</td><td>10</td><td>100</td><td>1</td><td>10</td><td>100</td><td>1</td><td>10</td><td>100</td></tr><tr><td>GT-State MLP</td><td>0</td><td>0</td><td>3</td><td>0</td><td>0</td><td>1</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Form2Fit [14]</td><td>0</td><td>11</td><td>5</td><td>4</td><td>1</td><td>3</td><td>1</td><td>7</td><td>12</td><td>0</td><td>4</td><td>5</td></tr><tr><td>Transporter-SE(2) [9]</td><td>6</td><td>65</td><td>72</td><td>3</td><td>32</td><td>17</td><td>0</td><td>46</td><td>40</td><td>1</td><td>18</td><td>35</td></tr><tr><td>Transporter- ${SE}\left( 3\right) \left\lbrack 9\right\rbrack$</td><td>6</td><td>66</td><td>75</td><td>0</td><td>32</td><td>20</td><td>0</td><td>42</td><td>40</td><td>0</td><td>16</td><td>34</td></tr><tr><td>Ours</td><td>13</td><td>68</td><td>100</td><td>0</td><td>43</td><td>71</td><td>13</td><td>21</td><td>76</td><td>0</td><td>3</td><td>74</td></tr></table>
142
+
143
+ Table 2: Quantitative results. Task success rate (mean %) vs. # of demonstration episodes (1,10,100) used in training. Tasks labeled with in-distribution configure objects with random rotations $\left( {{\theta }_{x},{\theta }_{y} \in \left\lbrack {-\frac{\pi }{6},\frac{\pi }{6}}\right\rbrack ,{\theta }_{z} \in \left\lbrack {-\pi ,\pi }\right\rbrack }\right.$ ). This is also the rotation distribution used for creating the training set. Tasks labeled with out-of-distribution configure objects with rotations $\left( {{\theta }_{x},{\theta }_{y} \in \left\lbrack {-\frac{\pi }{4}, - \frac{\pi }{6}}\right\rbrack \cup \left\lbrack {\frac{\pi }{6},\frac{\pi }{4}}\right\rbrack ,{\theta }_{z} \in \left\lbrack {-\pi ,\pi }\right\rbrack }\right)$ that are (i) outside the training pose distribution and (ii) larger out-of-plane rotations.
144
+
145
+ Results. Fig. 4 shows the average scores of all methods trained on different numbers of demonstrations. GT-State MLP fails completely; Form2Fit cannot achieve a ${40}\%$ success rate under any setting; both Transporter- ${SE}\left( 2\right)$ and Transporter- ${SE}\left( 3\right)$ are able to achieve $\sim {70}\%$ success rate when the object poses are sampled from the training distribution, but the success rate drops to $\sim {30}\%$ when the object poses are outside the training distribution. MIRA outperforms all baselines by a large margin when there are enough demonstrations of the task. Its success rate is $\sim {90}\%$ under the setting of in-distribution and $\sim {80}\%$ under the setting of out-of-distribution. The performance improvement over baselines demonstrates its generalization ability thanks to the action optimization process with novel view synthesis and affordance prediction in the loop. Interestingly, we found that MIRA sometimes performs worse than baselines when only 1 demonstration is provided. We hypothesize that this is because our action-value function needs more data to understand the subtle differences between images rendered from different views in order to select the best view. We show the full quantitative results in Table 2.
146
+
147
+ ![01963fe6-3da5-717f-9e17-d937c587fab7_6_721_1273_764_320_0.jpg](images/01963fe6-3da5-717f-9e17-d937c587fab7_6_721_1273_764_320_0.jpg)
148
+
149
+ Figure 4: Average scores of all methods under both in-distribution and out-of-distribution settings.
150
+
151
+ Ablation studies. To understand the importance of different components within our framework, we benchmark against two variants: (i) ours w/ perspective ray casting, and (ii) ours w/o multi-view negative samples in Eq. (7). We show the quantitative results in Table 3. We find that ours w/ perspective ray casting fails to learn the action-value functions because the perspective images (a) contain distorted appearances of the objects that are challenging for CNNs to comprehend, and (b) may have the ground-truth picking or placing locations occluded by the robot arms. We visualize these challenges in the supplementary materials. Our method with orthographic ray casting circumvents both challenges by controlling the near/far planes to ignore occlusions without worrying about the distortion and scaling. Ours w/o multi-view negative samples
152
+
153
+ ![01963fe6-3da5-717f-9e17-d937c587fab7_7_316_204_1169_461_0.jpg](images/01963fe6-3da5-717f-9e17-d937c587fab7_7_316_204_1169_461_0.jpg)
154
+
155
+ Figure 5: Real-world qualitative results. We validate MIRA in the real world with a kitting task that includes objects with reflective or transparent materials. We show that our framework can successfully pick up a stainless steel ice sphere and place it into 10+ different cups configured with random translations and out-of-plane rotations.
156
+
157
+ also fails to learn reliable action-value functions, and this may be due to a distribution shift between training and test: during training, it has only been supervised to choose the best pixel given an image. However, at test time it is tasked with selecting both the best pixel and the best view.
158
+
159
+ <table><tr><td rowspan="2">Method</td><td colspan="3">block-insertion in-distribution-poses</td><td colspan="3">block-insertion out-of-distribution-poses</td></tr><tr><td>1</td><td>10</td><td>100</td><td>1</td><td>10</td><td>100</td></tr><tr><td>Ours w/ perspective</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Ours w/o multi-view negatives</td><td>0</td><td>11</td><td>13</td><td>0</td><td>2</td><td>4</td></tr><tr><td>Ours</td><td>0</td><td>84</td><td>89</td><td>0</td><td>74</td><td>78</td></tr></table>
160
+
161
+ Table 3: Ablation studies. We study the effects of ablating orthographic ray casting or multiview negative samples from our system.
162
+
163
164
+
165
166
+
167
+ ### 4.2 Real-world Experiments
168
+
169
+ We validate our framework with a kitting task in the real world and show qualitative results in Fig. 5. Our system consists of a UR5 arm, a customized suction gripper, and a wrist-mounted camera. We show that our method can successfully pick up a stainless steel ice sphere and place it into 10+ different cups configured with random translations and out-of-plane rotations. This task is challenging because (i) it includes objects with reflective or transparent materials, which makes this task not amenable to existing works that require depth sensors $\left\lbrack {9,{12},{14}}\right\rbrack$ , and (ii) it requires out-of-plane action reasoning. The action-value functions are trained with 20 demonstrations using these cups. At the beginning of each pick-and-place, our system gathers 30 ${1280} \times {720}$ RGB images of the scene with the wrist-mounted camera. Each image’s camera pose is derived from the robotic manipulator's end-effector pose and a calibrated transformation between the end-effector and the camera. This data collection procedure is advantageous as industrial robotic manipulators feature sub-millimeter repeatability, which provides accurate camera poses for building NeRF. In practice, we search through $V = {121}$ virtual views that uniformly cover the workspace and predict their affordances for optimizing actions. The optimization process currently takes around 2 seconds using a single NVIDIA RTX 2080 Ti GPU. This step can be straightforwardly accelerated by parallelizing the computations with multiple GPUs.
170
+
171
+ ## 5 Limitations And Conclusion
172
+
173
+ In terms of limitations, our system currently requires training a NeRF of the scene for each step of the manipulation. An instant-NGP [64] requires approximately 10 seconds to converge using a single NVIDIA RTX 2080 Ti GPU, and moving the robot arms around to collect 30 multi-view RGB images of the scene takes nearly 1 minute. We believe observing the scene with multiple mounted cameras or learning a prior over instant-NGP could drastically reduce the runtime. In the future, we plan to explore the usage of mental imagery for other robotics applications such as navigation and mobile manipulation.
174
+
175
+ ## References
176
+
177
+ [1] R. N. Shepard and J. Metzler. Mental rotation of three-dimensional objects. Science, 171(3972): 701-703, 1971.
178
+
179
+ [2] A. Richardson. Mental imagery. Springer, 2013.
180
+
181
+ [3] F. Dellaert and L. Yen-Chen. Neural volume rendering: Nerf and beyond. arXiv preprint arXiv:2101.05204, 2020.
182
+
183
+ [4] Y. Xie, T. Takikawa, S. Saito, O. Litany, S. Yan, N. Khan, F. Tombari, J. Tompkin, V. Sitzmann, and S. Sridhar. Neural fields in visual computing and beyond. In Computer Graphics Forum, 2022.
184
+
185
+ [5] A. Tewari, J. Thies, B. Mildenhall, P. Srinivasan, E. Tretschk, W. Yifan, C. Lassner, V. Sitzmann, R. Martin-Brualla, S. Lombardi, et al. Advances in neural rendering. In Computer Graphics Forum, 2022.
186
+
187
+ [6] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
188
+
189
+ [7] A. Zeng. Learning visual affordances for robotic manipulation. PhD thesis, Princeton University, 2019.
190
+
191
+ [8] D. Batra, A. X. Chang, S. Chernova, A. J. Davison, J. Deng, V. Koltun, S. Levine, J. Malik, I. Mordatch, R. Mottaghi, et al. Rearrangement: A challenge for embodied ai. arXiv preprint arXiv:2011.01975, 2020.
192
+
193
+ [9] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin, D. Duong, V. Sindhwani, and J. Lee. Transporter networks: Rearranging the visual world for robotic manipulation. CoRL, 2020.
194
+
195
+ [10] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. In CoRL, 2021.
196
+
197
+ [11] L. Manuelli, W. Gao, P. Florence, and R. Tedrake. kpam: Keypoint affordances for category-level robotic manipulation. In ISRR, 2019.
198
+
199
+ [12] A. Simeonov, Y. Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V. Sitzmann. Neural descriptor fields: Se (3)-equivariant object representations for manipulation. In ICRA, 2022.
200
+
201
+ [13] S. Song, A. Zeng, J. Lee, and T. Funkhouser. Grasping in the wild: Learning 6dof closed-loop grasping from low-cost demonstrations. In IROS, 2020.
202
+
203
+ [14] K. Zakka, A. Zeng, J. Lee, and S. Song. Form2fit: Learning shape priors for generalizable assembly from disassembly. In ICRA, 2020.
204
+
205
+ [15] M. Zhu, K. G. Derpanis, Y. Yang, S. Brahmbhatt, M. Zhang, C. Phillips, M. Lecce, and K. Daniilidis. Single image 3d object detection and pose estimation for grasping. In ICRA, 2014.
206
+
207
+ [16] A. Zeng, K.-T. Yu, S. Song, D. Suo, E. Walker, A. Rodriguez, and J. Xiao. Multi-view self-supervised deep learning for 6d pose estimation in the amazon picking challenge. In ICRA, 2017.
208
+
209
+ [17] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox. PoseCNN: A convolutional neural network for 6d object pose estimation in cluttered scenes. In RSS, 2018.
210
+
211
+ [18] X. Deng, Y. Xiang, A. Mousavian, C. Eppner, T. Bretl, and D. Fox. Self-supervised 6d object pose estimation for robot manipulation. In ICRA, 2020.
212
+
213
+ [19] H. Wang, S. Sridhar, J. Huang, J. Valentin, S. Song, and L. J. Guibas. Normalized object coordinate space for category-level 6d object pose and size estimation. In CVPR, 2019.
214
+
215
+ [20] X. Chen, Z. Dong, J. Song, A. Geiger, and O. Hilliges. Category level object pose estimation via neural analysis-by-synthesis. In ECCV, 2020.
216
+
217
+ [21] X. Li, H. Wang, L. Yi, L. J. Guibas, A. L. Abbott, and S. Song. Category-level articulated object pose estimation. In CVPR, 2020.
218
+
219
+ [22] P. R. Florence, L. Manuelli, and R. Tedrake. Dense object nets: Learning dense visual object descriptors by and for robotic manipulation. In CoRL, 2018.
220
+
221
+ [23] P. Florence, L. Manuelli, and R. Tedrake. Self-supervised correspondence in visuomotor policy learning. RA-L, 2019.
222
+
223
+ [24] P. Sundaresan, J. Grannen, B. Thananjeyan, A. Balakrishna, M. Laskey, K. Stone, J. E. Gonzalez, and K. Goldberg. Learning rope manipulation policies using dense object descriptors trained on synthetic depth data. In ICRA, 2020.
224
+
225
+ [25] T. D. Kulkarni, A. Gupta, C. Ionescu, S. Borgeaud, M. Reynolds, A. Zisserman, and V. Mnih. Unsupervised learning of object keypoints for perception and control. NeurIPS, 32, 2019.
226
+
227
+ [26] X. Liu, R. Jonschkowski, A. Angelova, and K. Konolige. Keypose: Multi-view 3d labeling and keypoint estimation for transparent objects. In CVPR, 2020.
228
+
229
+ [27] Y. You, L. Shao, T. Migimatsu, and J. Bohg. Omnihang: Learning to hang arbitrary objects using contact point correspondences and neural collision estimation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 5921-5927. IEEE, 2021.
230
+
231
+ [28] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. JMLR, 2016.
232
+
233
+ [29] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, and S. Levine. Scalable deep reinforcement learning for vision-based robotic manipulation. In CoRL, 2018.
234
+
235
+ [30] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv preprint arXiv:1703.09312, 2017.
236
+
237
+ [31] A. Zeng, S. Song, K.-T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo, et al. Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. In ICRA, 2018.
238
+
239
+ [32] A. ten Pas, M. Gualtieri, K. Saenko, and R. Platt. Grasp pose detection in point clouds. IJRR, 2017.
240
+
241
+ [33] A. Mousavian, C. Eppner, and D. Fox. 6-dof graspnet: Variational grasp generation for object manipulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
242
+
243
+ [34] J. Wu, X. Sun, A. Zeng, S. Song, J. Lee, S. Rusinkiewicz, and T. Funkhouser. Spatial action maps for mobile manipulation. arXiv preprint arXiv:2004.09141, 2020.
244
+
245
+ [35] S. James, K. Wada, T. Laidlow, and A. J. Davison. Coarse-to-fine q-attention: Efficient learning for visual robotic manipulation via discretisation. In CVPR, 2022.
246
+
247
+ [36] H. Huang, D. Wang, R. Walter, and R. Platt. Equivariant transporter network. In RSS, 2022.
248
+
249
+ [37] D. Wang, R. Walters, and R. Platt. So (2) equivariant reinforcement learning. In ICLR, 2022.
250
+
251
+ [38] X. Zhu, D. Wang, O. Biza, G. Su, R. Walters, and R. Platt. Sample efficient grasp learning using equivariant models. In RSS, 2022.
252
+
253
+ [39] B. Curless and M. Levoy. A volumetric method for building complex models from range images. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, 1996.
254
+
255
+ [40] T. Karras, M. Aittala, S. Laine, E. Härkönen, J. Hellsten, J. Lehtinen, and T. Aila. Alias-free generative adversarial networks. NeurIPS, 34, 2021.
256
+
257
+ [41] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In CVPR, 2019.
258
+
259
+ [42] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger. Occupancy networks: Learning 3d reconstruction in function space. In CVPR, 2019.
260
+
261
+ [43] V. Sitzmann, M. Zollhöfer, and G. Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In NeurIPS, 2019.
262
+
263
+ [44] V. Sitzmann, S. Rezchikov, B. Freeman, J. Tenenbaum, and F. Durand. Light field networks: Neural scene representations with single-evaluation rendering. NeurIPS, 34, 2021.
264
+
265
+ [45] R. Gao, Y.-Y. Chang, S. Mall, L. Fei-Fei, and J. Wu. Objectfolder: A dataset of objects with implicit visual, auditory, and tactile representations. In CoRL, 2021.
266
+
267
+ [46] V. Sitzmann, J. Martel, A. Bergman, D. Lindell, and G. Wetzstein. Implicit neural representations with periodic activation functions. In NeurIPS, 2020.
268
+
269
+ [47] A. Luo, Y. Du, M. J. Tarr, J. B. Tenenbaum, A. Torralba, and C. Gan. Learning neural acoustic fields. arXiv preprint arXiv:2204.00628, 2022.
270
+
271
+ [48] L. Yen-Chen, P. Florence, J. T. Barron, A. Rodriguez, P. Isola, and T.-Y. Lin. iNeRF: Inverting neural radiance fields for pose estimation. In IROS, 2021.
272
+
273
+ [49] A. Moreau, N. Piasco, D. Tsishkou, B. Stanciulescu, and A. de La Fortelle. Lens: Localization enhanced by nerf synthesis. In Conference on Robot Learning, 2022.
274
+
275
+ [50] E. Sucar, S. Liu, J. Ortiz, and A. Davison. iMAP: Implicit mapping and positioning in real-time. In ICCV, 2021.
276
+
277
+ [51] Z. Zhu, S. Peng, V. Larsson, W. Xu, H. Bao, Z. Cui, M. R. Oswald, and M. Pollefeys. Nice-slam: Neural implicit scalable encoding for slam. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12786-12796, 2022.
278
+
279
+ [52] J. Ortiz, A. Clegg, J. Dong, E. Sucar, D. Novotny, M. Zollhoefer, and M. Mukadam. isdf: Real-time neural signed distance fields for robot perception. In RSS, 2022.
280
+
281
+ [53] M. Adamkiewicz, T. Chen, A. Caccavale, R. Gardner, P. Culbertson, J. Bohg, and M. Schwager. Vision-only robot navigation in a neural radiance world. In RA-L, 2022.
282
+
283
+ [54] Y. Li, S. Li, V. Sitzmann, P. Agrawal, and A. Torralba. 3d neural scene representations for visuomotor control. In CoRL, 2021.
284
+
285
+ [55] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M. Toussaint. Learning multi-object dynamics with compositional neural radiance fields. arXiv preprint arXiv:2202.11855, 2022.
286
+
287
+ [56] Y. Wi, P. Florence, A. Zeng, and N. Fazeli. Virdo: Visio-tactile implicit representations of deformable objects. In ICRA, 2022.
288
+
289
+ [57] B. Shen, Z. Jiang, C. Choy, L. J. Guibas, S. Savarese, A. Anandkumar, and Y. Zhu. Acid: Action-conditional implicit visual dynamics for deformable object manipulation. In RSS, 2022.
290
+
291
+ [58] D. Driess, I. Schubert, P. Florence, Y. Li, and M. Toussaint. Reinforcement learning with neural radiance fields. arXiv preprint arXiv:2206.01634, 2022.
292
+
293
+ [59] Z. Jiang, Y. Zhu, M. Svetlik, K. Fang, and Y. Zhu. Synergies between affordance and geometry: 6-dof grasp detection via implicit representations. In RSS, 2021.
294
+
295
+ [60] J. Ichnowski*, Y. Avigal*, J. Kerr, and K. Goldberg. Dex-NeRF: Using a neural radiance field to grasp transparent objects. In CoRL, 2020.
298
+
299
+ [61] L. Yen-Chen, P. Florence, J. T. Barron, T.-Y. Lin, A. Rodriguez, and P. Isola. NeRF-Supervision: Learning dense object descriptors from neural radiance fields. In ICRA, 2022.
300
+
301
+ [62] J. T. Kajiya and B. P. V. Herzen. Ray tracing volume densities. SIGGRAPH, 1984.
302
+
303
+ [63] N. Max. Optical models for direct volume rendering. IEEE TVCG, 1995.
304
+
305
+ [64] T. Müller, A. Evans, C. Schied, and A. Keller. Instant neural graphics primitives with a multiresolution hash encoding. In SIGGRAPH, 2022.
306
+
307
+ [65] Y. LeCun, Y. Bengio, et al. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.
308
+
309
+ [66] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015.
310
+
311
+ [67] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang. A tutorial on energy-based learning. Predicting structured data, 1(0), 2006.
312
+
313
+ [68] Y. Du and I. Mordatch. Implicit generation and generalization in energy-based models. arXiv preprint arXiv:1903.08689, 2019.
314
+
315
+ [69] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Conference on Robot Learning, pages 158-168. PMLR, 2022.
316
+
317
+ [70] S. Li, Y. Du, G. M. van de Ven, and I. Mordatch. Energy-based models for continual learning. arXiv preprint arXiv:2011.12216, 2020.
318
+
319
+ [71] D. Seita, P. Florence, J. Tompson, E. Coumans, V. Sindhwani, K. Goldberg, and A. Zeng. Learning to rearrange deformable cables, fabrics, and bags with goal-conditioned transporter networks. In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021.
320
+
321
+ [72] E. Coumans and Y. Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. 2016.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/AmPeAFzU3a4/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,248 @@
1
+ § MIRA: MENTAL IMAGERY FOR ROBOTIC AFFORDANCES
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Humans form mental images of 3D scenes to support counterfactual imagination, planning, and motor control. Our abilities to predict the appearance and affordance of the scene from previously unobserved viewpoints aid us in performing manipulation tasks (e.g., 6-DoF kitting) with a level of ease that is currently out of reach for existing robot learning frameworks. In this work, we aim to build artificial systems that can analogously plan actions on top of imagined images. To this end, we introduce Mental Imagery for Robotic Affordances (MIRA), an action reasoning framework that optimizes actions with novel-view synthesis and affordance prediction in the loop. Given a set of 2D RGB images, MIRA builds a consistent 3D scene representation, through which we synthesize novel orthographic views amenable to pixel-wise affordance prediction for action optimization. We illustrate how this optimization process enables us to generalize to unseen out-of-plane rotations for 6-DoF robotic manipulation tasks given a limited number of demonstrations, paving the way toward machines that autonomously learn to understand the world around them for planning actions.
12
+
13
+ Keywords: Neural Radiance Fields, Rearrangement, Robotic Manipulation
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Suppose you are shown a small, unfamiliar object and asked if it could fit through an M-shaped slot. How might you solve this task? One approach would be to "rotate" the object in your mind's eye and see if, from some particular angle, the object’s profile fits into an $\mathrm{M}$ . To put the object through the slot would then just require orienting it to that particular imagined angle. In their famous experiments on "mental rotation", Shepard & Metzler argued that this is the approach humans use when reasoning about the relative poses of novel shapes [1]. Decades of work in psychology have documented numerous other ways that "mental images", i.e. pictures in our heads, can aid human cognition [2]. In this paper, we ask: can we give robots a similar ability, where they use mental imagery to aid their spatial reasoning?
18
+
19
+ Fortunately, the generic ability to perform imagined translations and rotations of a scene, also known as novel view synthesis, has seen a recent explosion of research in the computer vision and graphics community $\left\lbrack {3,4,5}\right\rbrack$ . Our work builds in particular upon Neural Radiance Fields (NeRFs) [6], which can render what a scene would look like from any camera pose. We treat a NeRF as a robot's "mind's eye", a virtual camera it may use to imagine how the scene would look were the robot to reposition itself. We couple this ability with an affordance model [7], which predicts, from any given view of the scene, what actions are currently afforded. Then the robot must just search, in its imagination, for the mental image that best affords the action it wishes to execute, then execute the action corresponding to that mental image.
20
+
21
+ We test this framework on 6-DoF rearrangement tasks [8], where the affordance model simply predicts, for each pixel in a given camera view, what is the action value of picking (or placing) at that pixel's coordinates. Using NeRF as a virtual camera for this task has several advantages over prior works which used physical cameras:
22
+
23
+ * Out-of-plane rotation. Prior works have applied affordance maps to 2-dimensional top-down camera views, allowing only the selection of top-down picking and placing actions $\left\lbrack {7,9,{10}}\right\rbrack$ . We instead formulate the pick and place problem as an action optimization process that searches across different novel synthesized views and their affordances of the scene. We demonstrate that this optimization process can handle the multi-modality of picking and placing while naturally supporting actions that involve out-of-plane rotations.
24
+
25
26
+
27
+ Figure 1: Overview of MIRA. (a) Given a set of multi-view RGB images as input, we optimize a neural radiance field representation of the scene via volume rendering with perspective ray casting. (b) After the NeRF is optimized, we perform volume rendering with orthographic ray casting to render the scene from $V$ viewpoints. (c) The rendered orthographic images are fed into the policy for predicting pixel-wise action-values that correlate with picking and placing success. (d) The pixel with the highest action-value is selected, and its estimated depth and associated view orientation are used to parameterize the robot's motion primitive.
28
+
29
+ * Orthographic ray casting. A NeRF trained with images from consumer cameras can be used to synthesize novel views from novel kinds of cameras that are more suitable to action reasoning. Most physical cameras use perspective projection, in which the apparent size of an object in the image plane is inversely proportional to that object's distance from the camera - a relationship that any vision algorithm must comprehend and disentangle. NeRF can instead create images under other rendering procedures; we show that orthographic ray casting is particularly useful, which corresponds to a non-physical "camera" that is infinitely large and infinitely distant from the scene. This yields images in which an object's size in the image plane is invariant to its distance from the camera, and its appearance is equivariant with respect to translation parallel to the image plane. In essence, this novel usage of NeRF allows us to generate "blueprints" for the scene that complement the inductive biases of algorithms that encode translational equivariance (such as ConvNets).
30
+
31
+ * RGB-only. Prior rearrangement methods [11, 12, 13] commonly require 3D sensors (e.g. via structured light, stereo, or time-of-flight), and these are error-prone when objects contain thin structures or are composed of specular or semi-transparent materials, a common occurrence. These limitations drastically restrict the set of tasks, objects, and surfaces these prior works can reason over.
32
+
33
+ We term our method Mental Imagery for Robotic Affordances, or MIRA. To test MIRA, we perform experiments in both simulation and the real world. For simulation, we extend the Ravens [9] benchmark to include tasks that require 6-DoF actions. Our model demonstrates superior performance to existing state-of-the-art methods for object rearrangement $\left\lbrack {{14},9}\right\rbrack$ , despite not requiring depth sensors. Importantly, the optimization process with novel view synthesis and affordance prediction in the loop enables our framework to generalize to out-of-distribution object configurations, where the baselines struggle. In summary, we contribute (i) a framework that uses NeRFs as the scene representation to perform novel view synthesis for precise object rearrangement, (ii) an orthographic ray casting procedure for NeRFs rendering that facilitates the policy's translation equivariance, (iii) an extended benchmark of 6-DoF manipulation tasks in Ravens [9], and (iv) empirical results on a broad range of manipulation tasks, validated with real-robot experiments.
34
+
35
+ § 2 RELATED WORKS
36
+
37
+ § 2.1 VISION-BASED MANIPULATION.
38
+
39
+ Object-centric. Classical methods in visual perception for robotic manipulation mainly focus on representing instances with 6-DoF poses [15, 16, 17, 18, 19, 20, 21]. However, 6-DoF poses cannot represent the states of deformable objects or granular media, and cannot capture large intra-category variations of unseen instances [11]. Alternative methods that represent objects with dense descriptors [22, 23, 24] or keypoints [11, 25, 26, 27] improve generalization, but they require a dedicated data collection procedure (e.g., configuring scenes with single objects).
40
+
41
+ Action-centric. Recent methods based on end-to-end learning directly predict actions given visual observations $\left\lbrack {{28},{29},{30},{31},{32},{33}}\right\rbrack$ . These methods can potentially work with deformable objects or granular media, and do not require any object-specific data collection procedures. However, these methods are known to be sample inefficient and challenging to debug. Recently, several works [9, 13, 34, 35, 36, 37, 38] have proposed to incorporate spatial structure into action reasoning for improved performance and better sample efficiency. Among them, the closest work to ours is Song et al. [13] which relies on view synthesis to plan 6-DoF picking. Our work differs in that it 1) uses NeRF whereas [13] uses TSDF [39], 2) does not require depth sensors, 3) uses orthographic image representation, 4) does not directly use the camera pose as actions, and 5) shows results on rearrangement tasks that require both picking and placing.
42
+
43
+ § 2.2 NEURAL FIELDS FOR ROBOTICS
44
+
45
+ Neural fields have emerged as a promising tool to represent 2D images [40], 3D geometry [41, 42], appearance [6, 43, 44], touch [45], and audio [46, 47]. They offer several advantages over classic representations (e.g., voxels, point clouds, and meshes) including reconstruction quality, and memory efficiency. Several works have explored the usage of neural fields for robotic applications including localization [48, 49], SLAM [50, 51, 52], navigation [53], dynamics modeling [54, 55, 56, 57], and reinforcement learning [58]. For robotic manipulation, GIGA [59] jointly trains a grasping network and an occupancy network for synergy. Dex-NeRF [60] infers the geometry of transparent objects with NeRF and determines the grasp poses with Dex-Net [30]. NDF [12] uses the features of occupancy networks [42] as object descriptors for few-shot imitation learning. NeRF-Supervision [61] uses NeRF as a dataset generator to learn dense object descriptors for picking.
46
+
47
+ § 3 METHOD
48
+
49
+ Our goal is to predict actions ${a}_{t}$ , given RGB-only visual observations ${o}_{t}$ , and trained from only a limited number of demonstrations. We parameterize our action space with two-pose primitives ${a}_{t} = \left( {{\mathcal{T}}_{\text{ pick }},{\mathcal{T}}_{\text{ place }}}\right)$ , which are able to flexibly parameterize rearrangement tasks [9]. This problem is challenging due to the high degrees of freedom of ${a}_{t}$ (12 degrees of freedom for two full SE(3) poses), a lack of information about the underlying object state (such as object poses), and limited data. Our method (illustrated in Fig. 1) factorizes action reasoning into two modules: 1) a continuous neural radiance field that can synthesize virtual views of the scene at novel viewpoints, and 2) an optimization procedure which optimizes actions by predicting per-pixel affordances across different synthesized virtual pixels. We discuss these two modules in Sec. 3.1 and Sec. 3.2 respectively, followed by training details in Sec. 3.3.
50
+
51
+ § 3.1 SCENE REPRESENTATION WITH NEURAL RADIANCE FIELD
52
+
53
+ To provide high-fidelity novel-view synthesis of virtual cameras, we represent the scene with a neural radiance field (NeRF) [6]. For our purposes, a key feature of NeRF is that it renders individual rays (pixels) rather than whole images, which enables flexible parameterization of rendering at inference time, including camera models that are non-physical (e.g., orthographic cameras) and not provided in the training set.
54
+
55
+ To render a pixel, NeRF casts a ray $\mathbf{r}\left( t\right) = \mathbf{o} + t\mathbf{d}$ from some origin $\mathbf{o}$ along the direction $\mathbf{d}$ passing through that pixel on an image plane. In particular, these rays are cast into a field ${F}_{\Theta }$ whose input is a 3D location $\mathbf{x} = \left( {x,y,z}\right)$ and unit-norm viewing direction $\mathbf{d}$ , and whose output is an emitted color $c = \left( {r,g,b}\right)$ and volume density $\sigma$ . Along each ray, $K$ discrete points ${\left\{ {\mathbf{x}}_{k} = \mathbf{r}\left( {t}_{k}\right) \right\} }_{k = 1}^{K}$ are sampled for use as input to ${F}_{\Theta }$ , which outputs a set of densities and colors ${\left\{ {\sigma }_{k},{\mathbf{c}}_{k}\right\} }_{k = 1}^{K} = {\left\{ {F}_{\Theta }\left( {\mathbf{x}}_{k},\mathbf{d}\right) \right\} }_{k = 1}^{K}$ . Volume rendering [62] with a numerical quadrature approximation [63] is performed using these values to produce the color $\widehat{\mathbf{C}}\left( \mathbf{r}\right)$ of that pixel:
56
+
57
+ $$
58
+ \widehat{\mathbf{C}}\left( \mathbf{r}\right) = \mathop{\sum }\limits_{{k = 1}}^{K}{T}_{k}\left( {1 - \exp \left( {-{\sigma }_{k}\left( {{t}_{k + 1} - {t}_{k}}\right) }\right) }\right) {\mathbf{c}}_{k},\;{T}_{k} = \exp \left( {-\mathop{\sum }\limits_{{{k}^{\prime } < k}}{\sigma }_{{k}^{\prime }}\left( {{t}_{{k}^{\prime } + 1} - {t}_{{k}^{\prime }}}\right) }\right) . \tag{1}
59
+ $$
60
+
61
62
+
63
+ Figure 2: Perspective vs. Orthographic Ray Casting. (a) A 3D world showing two objects, with the camera located at the top. (b) The procedure of perspective ray casting and a perspective rendering of the scene. The nearby object is large, the distant object is small, and both objects appear "tilted" according to their position. (c) The procedure of orthographic ray casting and an orthographic rendering of the scene, which does not correspond to any real consumer camera, wherein the size and appearance of both objects are invariant to their distances and equivariant to their locations. By using NeRF to synthesize these orthographic images, which correspond to non-physical cameras, we are able to construct RGB inputs that are equivariant with translation.
64
+
65
+ where ${T}_{k}$ represents the probability that the ray successfully transmits to point $\mathbf{r}\left( {t}_{k}\right)$ . At the beginning of each pick-and-place, our system takes multi-view posed RGB images as input and optimizes $\Theta$ by minimizing a photometric loss ${\mathcal{L}}_{\text{ photo }} = \mathop{\sum }\limits_{{\mathbf{r} \in \mathcal{R}}}\parallel \widehat{\mathbf{C}}\left( \mathbf{r}\right) - \mathbf{C}\left( \mathbf{r}\right) {\parallel }_{2}^{2}$ , using some sampled set of rays $\mathbf{r} \in \mathcal{R}$ , where $\mathbf{C}\left( \mathbf{r}\right)$ is the observed RGB value of the pixel corresponding to ray $\mathbf{r}$ in an input image. In practice, we use instant-NGP [64] to accelerate NeRF training and inference.
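The quadrature in Eq. (1) is simple to implement once the densities and colors along a ray have been queried from ${F}_{\Theta }$. The following is a minimal NumPy sketch of compositing one ray under that assumption; the function and variable names are ours for illustration, not the authors' code.

```python
# Minimal sketch of Eq. (1): composite per-sample densities and colors into a pixel color.
import numpy as np

def render_ray(sigmas, colors, t_vals):
    """sigmas: (K,) densities, colors: (K, 3), t_vals: (K+1,) sample distances along the ray."""
    deltas = t_vals[1:] - t_vals[:-1]                       # t_{k+1} - t_k
    alphas = 1.0 - np.exp(-sigmas * deltas)                 # per-sample opacity
    # T_k = exp(-sum_{k'<k} sigma_k' * delta_k'): probability the ray reaches sample k
    trans = np.exp(-np.concatenate([[0.0], np.cumsum(sigmas * deltas)[:-1]]))
    weights = trans * alphas                                 # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)           # composited RGB

# Toy example: two samples, the nearer one mostly opaque and red.
print(render_ray(np.array([5.0, 1.0]),
                 np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
                 np.array([0.0, 0.1, 0.2])))
```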
66
+
67
+ Orthographic Ray Casting. In Fig. 2 we illustrate the difference between perspective and orthographic cameras. Though renderings from a NeRF ${F}_{\Theta }$ are highly realistic, the perspective ray casting procedure used by default in NeRF's volume rendering, which we visualize in Fig. 2(b), may cause scene content to appear distorted or scaled depending on the viewing angle and the camera's field of view: more distant objects will appear smaller in the image plane. Specifically, given a pixel coordinate (u, v) and camera pose (R, t), NeRF forms a ray $\mathbf{r} = \left( {\mathbf{o},\mathbf{d}}\right)$ using the perspective camera model:
68
+
69
+ $$
70
+ \mathbf{o} = \mathbf{t},\;\mathbf{d} = \mathbf{R}\left\lbrack \begin{matrix} \left( {u - {c}_{x}}\right) /{f}_{x} \\ \left( {v - {c}_{y}}\right) /{f}_{y} \\ 1 \end{matrix}\right\rbrack . \tag{2}
71
+ $$
72
+
73
+ This model is a reasonable proxy for the geometry of most consumer RGB cameras, hence its use by NeRF during training and evaluation. However, the distortion and scaling effects caused by perspective ray casting degrade the performance of the downstream optimization procedure that takes as input the synthesized images rendered by NeRF, as we will demonstrate in our results (Sec. 4.1). To address this issue, we modify the rendering procedure of NeRF after it is optimized by replacing perspective ray casting with orthographic ray casting:
74
+
75
+ $$
76
+ \mathbf{o} = \mathbf{t} + \mathbf{R}\left\lbrack \begin{matrix} \left( {u - {c}_{x}}\right) /{f}_{x} \\ \left( {v - {c}_{y}}\right) /{f}_{y} \\ 0 \end{matrix}\right\rbrack ,\;\mathbf{d} = \mathbf{R}\left\lbrack \begin{array}{l} 0 \\ 0 \\ 1 \end{array}\right\rbrack . \tag{3}
77
+ $$
78
+
79
+ We visualize this procedure in Fig. 2(c). Orthographic ray casting marches parallel rays into the scene, so that each rendered pixel represents a parallel window of 3D space. This property removes the dependence between an object's appearance and its distance to the camera: an object looks the same whether it is far or nearby. Further, as all rays for a given camera rotation $\mathbf{R}$ are parallel, this provides equivariance to the in-plane camera center ${c}_{x},{c}_{y}$ . These attributes encourage downstream learning to be equivariant to objects' 3D locations and thereby promote generalization. As the open-source instant-NGP [64] does not support orthographic projection, we implement it ourselves as a CUDA kernel (see Supp.).
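The only difference between the two camera models in Eqs. (2) and (3) is where each ray's origin and direction come from. Below is a minimal sketch, assuming the paper's notation for R, t, and the intrinsics (fx, fy, cx, cy); the function names are illustrative, not part of the released code.

```python
# Sketch of Eqs. (2)-(3): ray (o, d) for pixel (u, v) under perspective vs. orthographic casting.
import numpy as np

def perspective_ray(u, v, R, t, fx, fy, cx, cy):
    d = R @ np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # direction depends on the pixel
    return t, d / np.linalg.norm(d)                          # all rays share the camera origin t

def orthographic_ray(u, v, R, t, fx, fy, cx, cy):
    o = t + R @ np.array([(u - cx) / fx, (v - cy) / fy, 0.0])  # origin shifts with the pixel
    d = R @ np.array([0.0, 0.0, 1.0])                          # all rays share the viewing axis
    return o, d

R, t = np.eye(3), np.zeros(3)
print(perspective_ray(400, 300, R, t, 450.0, 450.0, 320.0, 240.0))
print(orthographic_ray(400, 300, R, t, 450.0, 450.0, 320.0, 240.0))
```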
80
+
81
+ Our decision of choosing an orthographic view of the scene draws inspiration from previous works that have used single-view orthographic scene representations $\left\lbrack {{16},9}\right\rbrack$ , but critically differs in the following two aspects: (a) we create orthographic scene representations by casting rays into a radiance field rather than by point-cloud reprojection, thereby significantly reducing image artifacts (see Supp.); (b) we rely on multi-view RGB images to recover scene geometry, instead of depth sensors.
82
+
83
+ § 3.2 POLICY REPRESENTATION: AFFORDANCE RAYCASTS IN A RADIANCE FIELD
84
+
85
+ To address ${SE}\left( 3\right)$ -parameterized actions, we formulate action selection as an optimization problem over synthesized novel-view pixels and their affordances. Using our NeRF-based scene representation, we densely sample $V$ camera poses around the workspace and render images ${\widehat{I}}_{{v}_{t}} = {F}_{\Theta }\left( {\mathcal{T}}_{{v}_{t}}\right)$ for each pose, $\forall {v}_{t} = 0,1,\cdots ,V$ . One valid approach is to search for actions directly in the space of camera poses for the best ${\mathcal{T}}_{{v}_{t}}$ , but orders of magnitude computation may be saved by instead considering actions that correspond to each pixel within each image, and sharing computation between all pixels in the image (e.g., by processing each image with just a single pass through a ConvNet). This (i) extends the paradigm of pixel-wise affordances [7] into full 6-DOF, novel-view-enabled action spaces, and (ii) alleviates the search over poses due to translational equivariance provided by orthographic rendering (Sec. 3.1).
86
+
87
+ Accordingly, we formulate each pixel in each synthesized view as parameterizing a robot action, and we learn a dense action-value function $E$ which outputs per-pixel action values of shape ${\mathbb{R}}^{\mathrm{H} \times \mathrm{W}}$ given a novel-view image of shape ${\mathbb{R}}^{\mathrm{H} \times \mathrm{W} \times 3}$ . Actions are selected by simultaneously searching across all pixels $\mathbf{u}$ in all synthesized views ${v}_{t}$ :
88
+
89
+ $$
90
+ {\mathbf{u}}_{t}^{ * },{v}_{t}^{ * } = \mathop{\operatorname{argmin}}\limits_{{{\mathbf{u}}_{t},{v}_{t}}}E\left( {{\widehat{I}}_{{v}_{t}},{\mathbf{u}}_{t}}\right) ,\;\forall {v}_{t} = 0,1,\cdots ,V \tag{4}
91
+ $$
92
+
93
+ where the pixel ${\mathbf{u}}_{t}^{ * }$ and the associated estimated depth $d\left( {\mathbf{u}}_{t}^{ * }\right)$ from NeRF are used to determine the $3\mathrm{D}$ translation, and the orientation of ${\mathcal{T}}_{{v}_{t}^{ * }}$ is used to determine the $3\mathrm{D}$ rotation of the predicted action. Our approach employs multiple strategies for equivariance: $3\mathrm{D}$ translational equivariance is in part enabled by orthographic raycasting and synergizes well with translationally-equivariant dense model architectures for $E$ such as ConvNets $\left\lbrack {{65},{66}}\right\rbrack$ , meanwhile 3D rotational equivariance is also encouraged, as synthesized rotated views can densely cover novel orientations of objects.
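Conceptually, the search in Eq. (4) amounts to rendering each candidate view and taking a single argmin over all pixels of all views. The sketch below illustrates that loop with placeholder callables for the NeRF renderer and the dense action-value network; it is schematic, not the authors' implementation.

```python
# Schematic sketch of Eq. (4): score every pixel in every synthesized view, keep the best.
import numpy as np

def select_action(view_poses, render_view, E):
    """render_view(pose) -> (H, W, 3) image; E(image) -> (H, W) action values (lower is better)."""
    best = (np.inf, None, None)
    for v, pose in enumerate(view_poses):
        image = render_view(pose)                     # orthographic rendering of view v
        values = E(image)                             # dense per-pixel action values
        u = np.unravel_index(values.argmin(), values.shape)
        if values[u] < best[0]:
            best = (values[u], u, v)
    return best                                        # (score, pixel u*, view index v*)

# Dummy usage with random stand-ins for the renderer and the value network.
rng = np.random.default_rng(0)
score, pixel, view = select_action([None] * 4,
                                   lambda p: rng.random((64, 64, 3)),
                                   lambda img: rng.random((64, 64)))
print(score, pixel, view)
```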
94
+
95
+ While the formulation above may be used to predict the picking pose ${\mathcal{T}}_{\text{ pick }}$ and the placing pose ${\mathcal{T}}_{\text{ place }}$ independently, intuitively the prediction of ${\mathcal{T}}_{\text{ pick }}$ affects the prediction of ${\mathcal{T}}_{\text{ place }}$ due to the latter’s geometric dependence on the former. We therefore decompose the action-value function into (i) picking and (ii) pick-conditioned placing, similar to prior work [9]:
96
+
97
+ $$
98
+ {\mathbf{u}}_{\text{ pick }}^{ * },{v}_{\text{ pick }}^{ * } = \mathop{\operatorname{argmin}}\limits_{{{\mathbf{u}}_{\text{ pick }},{v}_{\text{ pick }}}}{E}_{\text{ pick }}\left( {{\widehat{I}}_{{v}_{\text{ pick }}},{\mathbf{u}}_{\text{ pick }}}\right) ,\;\forall {v}_{\text{ pick }} = 0,1,\cdots ,V \tag{5}
99
+ $$
100
+
101
+ $$
102
+ {\mathbf{u}}_{\text{ place }}^{ * },{v}_{\text{ place }}^{ * } = \mathop{\operatorname{argmin}}\limits_{{{\mathbf{u}}_{\text{ place }},{v}_{\text{ place }}}}{E}_{\text{ place }}\left( {{\widehat{I}}_{{v}_{\text{ place }}},{\mathbf{u}}_{\text{ place }} \mid {\mathbf{u}}_{\text{ pick }}^{ * },{v}_{\text{ pick }}^{ * }}\right) ,\;\forall {v}_{\text{ place }} = 0,1,\cdots ,V \tag{6}
103
+ $$
104
+
105
+ where ${E}_{\text{ place }}$ uses the Transport operation from Zeng et al. [9] to convolve the feature map of ${\widehat{I}}_{{v}_{\text{ pick }}^{ * }}$ around ${\mathbf{u}}_{\text{ pick }}^{ * }$ with the feature maps of ${\left\{ {\widehat{I}}_{{v}_{\text{ place }}}\right\} }_{{v}_{\text{ place }} = 1}^{V}$ for action-value prediction. We refer readers to [9] for details on this coupling.
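As a rough illustration of this pick-conditioned coupling, a Transport-style operation can be sketched as cross-correlating a feature crop around the chosen pick pixel with the feature maps of the candidate place views. The crop size, shapes, and sign convention below are assumptions for illustration, not the exact operation from [9].

```python
# Hedged sketch of pick-conditioned placing (Eqs. (5)-(6)): correlate a pick-centered feature
# crop against each candidate place view's feature map; higher correlation -> lower energy.
import torch
import torch.nn.functional as F

def place_values(pick_feat, pick_uv, place_feats, crop=8):
    """pick_feat: (C, H, W); place_feats: (V, C, H, W) -> (V, H', W') place action values."""
    cu, cv = pick_uv
    kernel = pick_feat[:, cu - crop:cu + crop, cv - crop:cv + crop].unsqueeze(0)  # (1, C, 2c, 2c)
    return -F.conv2d(place_feats, kernel, padding=crop)[:, 0]

vals = place_values(torch.randn(16, 64, 64), (32, 32), torch.randn(5, 16, 64, 64))
print(vals.shape)   # torch.Size([5, 65, 65]) with these toy shapes
```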
106
+
107
+ § 3.3 TRAINING
108
+
109
+ We train the action-value function as an energy-based model (EBM) [67, 68, 69]. For each expert demonstration, we construct a tuple $\mathcal{D} = \left\{ {{\widehat{I}}_{{v}_{\text{ pick }}^{ * }},{\mathbf{u}}_{\text{ pick }}^{ * },{\widehat{I}}_{{v}_{\text{ place }}^{ * }},{\mathbf{u}}_{\text{ place }}^{ * }}\right\}$ , where ${\widehat{I}}_{{v}_{\text{ pick }}^{ * }}$ and ${\widehat{I}}_{{v}_{\text{ place }}^{ * }}$ are the synthesized images whose viewing directions are aligned with the end-effector's rotations; ${\mathbf{u}}_{\text{ pick }}^{ * }$ and ${\mathbf{u}}_{\text{ place }}^{ * }$ are the best pixels in those views annotated by experts. We use a Monte-Carlo log-likelihood training objective
110
+
111
112
+
113
+ Figure 3: Simulation qualitative results. MIRA only requires RGB inputs and can solve different 6-DoF tasks: (a) hanging-disks, (b) place-red-in-green, (c) stacking-objects, and (d) block-insertion.
114
+
115
+ | Task | precise placing | multi-modal placing | distractors | unseen colors | unseen objects |
+ | --- | --- | --- | --- | --- | --- |
+ | block-insertion | ✓ | ✘ | ✘ | ✘ | ✘ |
+ | place-red-in-green | ✘ | ✓ | ✓ | ✘ | ✘ |
+ | hanging-disks | ✓ | ✘ | ✘ | ✓ | ✘ |
+ | stacking-objects | ✓ | ✘ | ✘ | ✘ | ✓ |
132
+
133
+ Table 1: Simulation tasks. We extend Ravens [9] with four new 6-DoF tasks and summarize their associated challenges.
134
+
135
+ [70], where we draw pixels ${\left\{ {\widehat{\mathbf{u}}}_{j} \mid {\widehat{\mathbf{u}}}_{j} \neq {\mathbf{u}}_{\text{ pick }}^{ * },{\widehat{\mathbf{u}}}_{j} \neq {\mathbf{u}}_{\text{ place }}^{ * }\right\} }_{j = 1}^{{N}_{\text{ neg }}}$ from randomly synthesized images ${\widehat{I}}_{\text{ neg }}$ as negative samples. For brevity, we omit the subscript for pick and place and present the loss function that is used to train both action-value functions:
136
+
137
+ $$
+ \mathcal{L}\left( \mathcal{D}\right) = - \log p\left( {\mathbf{u}}^{ * } \mid \widehat{I},{\widehat{I}}_{\text{ neg }},{\left\{ {\widehat{\mathbf{u}}}_{j}\right\} }_{j = 1}^{{N}_{\text{ neg }}}\right) ,\quad p\left( {\mathbf{u}}^{ * } \mid \widehat{I},{\widehat{I}}_{\text{ neg }},{\left\{ {\widehat{\mathbf{u}}}_{j}\right\} }_{j = 1}^{{N}_{\text{ neg }}}\right) = \frac{{e}^{-{E}_{\theta }\left( \widehat{I},{\mathbf{u}}^{ * }\right) }}{{e}^{-{E}_{\theta }\left( \widehat{I},{\mathbf{u}}^{ * }\right) } + \mathop{\sum }\limits_{j = 1}^{{N}_{\text{ neg }}}{e}^{-{E}_{\theta }\left( {\widehat{I}}_{\text{ neg }},{\widehat{\mathbf{u}}}_{j}\right) }} \tag{7}
+ $$
140
+
141
+ A key innovation of our objective function compared to previous works [9, 10, 36, 71] is the inclusion of negative samples ${\left\{ {\widehat{\mathbf{u}}}_{j}\right\} }_{j = 1}^{{N}_{\text{ neg }}}$ from imagined views ${\widehat{I}}_{\text{ neg }}$ . We study the effects of ablating negative samples in Sec. 4.1 and show that they are essential for successfully training action-value functions.
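In effect, Eq. (7) is a softmax cross-entropy over negated energies, with the expert pixel as the positive and pixels drawn from other synthesized views as negatives. A minimal PyTorch sketch under that reading (the shapes and names are illustrative assumptions):

```python
# Hedged sketch of Eq. (7): contrastive loss with one positive pixel and N_neg multi-view negatives.
import torch
import torch.nn.functional as F

def ebm_loss(pos_energy, neg_energies):
    """pos_energy: scalar E_theta(I*, u*); neg_energies: (N_neg,) energies of sampled negatives."""
    logits = -torch.cat([pos_energy.view(1), neg_energies])   # lower energy -> higher logit
    target = torch.zeros(1, dtype=torch.long)                  # index 0 is the positive
    return F.cross_entropy(logits.unsqueeze(0), target)

print(ebm_loss(torch.tensor(0.2), torch.tensor([1.5, 0.9, 2.1])))
```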
142
+
143
+ § 4 RESULTS
144
+
145
+ We execute experiments in both simulation and real-world settings to evaluate the proposed method across various tasks.
146
+
147
+ § 4.1 SIMULATION EXPERIMENTS
148
+
149
+ Environment. We propose four new 6-DoF tasks based on Ravens [9] and use them as the benchmark. We show qualitative examples of these tasks in Fig. 3 and summarize their associated challenges in Table 1. All simulated experiments are conducted in PyBullet [72] using a Universal Robot UR5e with a suction gripper. The input observations for MIRA are 30 RGB images from different cameras pointing toward the center. For all the baselines, we additionally supply the corresponding noiseless depth images. Each image has a resolution of ${640} \times {480}$ . The camera has focal length $f = {450}$ and camera center $\left( {{c}_{x},{c}_{y}}\right) = \left( {{320},{240}}\right)$ .
150
+
151
+ Evaluation. For each task, we perform evaluations under two settings: in-distribution configures objects with random rotations $\left( {{\theta }_{x},{\theta }_{y} \in \left\lbrack {-\frac{\pi }{6},\frac{\pi }{6}}\right\rbrack ,{\theta }_{z} \in \left\lbrack {-\pi ,\pi }\right\rbrack }\right)$ . This is also the distribution we used to construct the training set. out-of-distribution instead configures objects with random rotations $\left( {{\theta }_{x},{\theta }_{y} \in \left\lbrack {-\frac{\pi }{4}, - \frac{\pi }{6}}\right\rbrack \cup \left\lbrack {\frac{\pi }{6},\frac{\pi }{4}}\right\rbrack ,{\theta }_{z} \in \left\lbrack {-\pi ,\pi }\right\rbrack }\right)$ . We note that these rotations are outside the training distribution and also correspond to larger out-of-plane rotations. Thus, this setting requires stronger generalization. We use a binary score ( 0 for failure and 1 for success) and report results on 100 evaluation runs for agents trained with $n = 1,{10},{100}$ demonstrations.
152
+
153
+ Baseline Methods. Although our method only requires RGB images, we benchmark against published baselines that additionally require depth images as inputs. Form2Fit [14] predicts the placing action by estimating dense descriptors of the scene for geometric matching. Transporter-SE(2) and Transporter-SE(3) are both introduced in Zeng et al. [9]. Although Transporter-SE(2) is not designed to solve manipulation tasks that require 6-DoF actions, its inclusion helps indicate what level of task success can be achieved on the shown tasks by simply ignoring out-of-plane rotations. Transporter-SE(3) predicts 6-DoF actions
154
+
155
+ by first using Transporter- ${SE}\left( 2\right)$ to estimate ${SE}\left( 2\right)$ actions, and then feeding them into a regression model to predict the remaining rotational $\left( {{r}_{x},{r}_{y}}\right)$ and translational (z-height) degrees of freedom. Additionally, we benchmark against a baseline, GT-State MLP, that assumes perfect object poses. It takes ground truth state (object poses) as inputs and trains an MLP to regress two ${SE}\left( 3\right)$ poses for ${\mathcal{T}}_{\text{ pick }}$ and ${\mathcal{T}}_{\text{ place }}$ .
156
+
157
+ | Method | block-insertion (in-dist.) 1 / 10 / 100 | block-insertion (out-of-dist.) 1 / 10 / 100 | place-red-in-greens (in-dist.) 1 / 10 / 100 | place-red-in-greens (out-of-dist.) 1 / 10 / 100 |
+ | --- | --- | --- | --- | --- |
+ | GT-State MLP | 0 / 1 / 1 | 0 / 0 / 1 | 0 / 1 / 3 | 0 / 1 / 1 |
+ | Form2Fit [14] | 0 / 1 / 10 | 0 / 0 / 0 | 35 / 79 / 96 | 21 / 30 / 61 |
+ | Transporter-SE(2) [9] | 25 / 69 / 73 | 1 / 21 / 20 | 30 / 74 / 83 | 25 / 18 / 36 |
+ | Transporter-SE(3) [9] | 26 / 70 / 77 | 0 / 20 / 22 | 29 / 77 / 85 | 23 / 20 / 38 |
+ | Ours | 0 / 84 / 89 | 0 / 74 / 78 | 27 / 89 / 96 | 22 / 56 / 77 |
+
+ | Method | hanging-disks (in-dist.) 1 / 10 / 100 | hanging-disks (out-of-dist.) 1 / 10 / 100 | stacking-objects (in-dist.) 1 / 10 / 100 | stacking-objects (out-of-dist.) 1 / 10 / 100 |
+ | --- | --- | --- | --- | --- |
+ | GT-State MLP | 0 / 0 / 3 | 0 / 0 / 1 | 0 / 1 / 1 | 0 / 0 / 0 |
+ | Form2Fit [14] | 0 / 11 / 5 | 4 / 1 / 3 | 1 / 7 / 12 | 0 / 4 / 5 |
+ | Transporter-SE(2) [9] | 6 / 65 / 72 | 3 / 32 / 17 | 0 / 46 / 40 | 1 / 18 / 35 |
+ | Transporter-SE(3) [9] | 6 / 66 / 75 | 0 / 32 / 20 | 0 / 42 / 40 | 0 / 16 / 34 |
+ | Ours | 13 / 68 / 100 | 0 / 43 / 71 | 13 / 21 / 76 | 0 / 3 / 74 |
202
+ Table 2: Quantitative results. Task success rate (mean %) vs. # of demonstration episodes (1, 10, 100) used in training. Tasks labeled in-distribution configure objects with random rotations $\left( {{\theta }_{x},{\theta }_{y} \in \left\lbrack {-\frac{\pi }{6},\frac{\pi }{6}}\right\rbrack ,{\theta }_{z} \in \left\lbrack {-\pi ,\pi }\right\rbrack }\right)$ . This is also the rotation distribution used for creating the training set. Tasks labeled out-of-distribution configure objects with rotations $\left( {{\theta }_{x},{\theta }_{y} \in \left\lbrack {-\frac{\pi }{4}, - \frac{\pi }{6}}\right\rbrack \cup \left\lbrack {\frac{\pi }{6},\frac{\pi }{4}}\right\rbrack ,{\theta }_{z} \in \left\lbrack {-\pi ,\pi }\right\rbrack }\right)$ that (i) lie outside the training pose distribution and (ii) correspond to larger out-of-plane rotations.
203
+
204
+ Results. Fig. 4 shows the average scores of all methods trained on different numbers of demonstrations. GT-State MLP fails completely; Form2Fit cannot achieve a ${40}\%$ success rate under any setting; both Transporter-SE(2) and Transporter-SE(3) achieve a $\sim {70}\%$ success rate when the object poses are sampled from the training distribution, but the success rate drops to $\sim {30}\%$ when the object poses are outside the training distribution. MIRA outperforms all baselines by a large margin when there are enough demonstrations of the task. Its success rate is $\sim {90}\%$ in the in-distribution setting and $\sim {80}\%$ in the out-of-distribution setting. The performance improvement over the baselines demonstrates its generalization ability, thanks to the action optimization process with novel view synthesis and affordance prediction in the loop. Interestingly, we found that MIRA sometimes performs worse than the baselines when only 1 demonstration is provided. We hypothesize that this is because our action-value function needs more data to understand the subtle differences between images rendered from different views in order to select the best view. We show the full quantitative results in Table 2.
205
+
206
207
+
208
+ Figure 4: Average scores of all methods under both in-distribution and out-of-distribution settings.
209
+
210
+ Ablation studies. To understand the importance of different components within our framework, we benchmark against two variants: (i) ours w/ perspective ray casting, and (ii) ours w/o multi-view negative samples in Eq. (7). We show the quantitative results in Table 3. We find that ours w/ perspective ray casting fails to learn the action-value functions because (a) the perspective images contain distorted appearances of the objects that are challenging for CNNs to comprehend and (b) the ground-truth picking or placing locations may be occluded by the robot arms. We visualize these challenges in the supplementary materials. Our method with orthographic ray casting circumvents both challenges by controlling the near/far planes to ignore occlusions without worrying about the distortion and scaling. Ours w/o multi-view negative samples
211
+
212
213
+
214
+ Figure 5: Real-world qualitative results. We validate MIRA in the real world with a kitting task that includes objects with reflective or transparent materials. We show that our framework can successfully pick up a stainless steel ice sphere and place it into 10+ different cups configured with random translations and out-of-plane rotations.
215
+
216
+ also fails to learn reliable action-value functions, and this may be due to a distribution shift between training and test: during training, it is only supervised to choose the best pixel given an image, yet at test time it must select both the best pixel and the best view.
217
+
218
+ | Method | block-insertion (in-distribution poses) 1 / 10 / 100 | block-insertion (out-of-distribution poses) 1 / 10 / 100 |
+ | --- | --- | --- |
+ | Ours w/ perspective | 0 / 0 / 0 | 0 / 0 / 0 |
+ | Ours w/o multi-view negatives | 0 / 11 / 13 | 0 / 2 / 4 |
+ | Ours | 0 / 84 / 89 | 0 / 74 / 78 |
235
+
236
+ Table 3: Ablation studies. We study the effects of ablating orthographic ray casting or multiview negative samples from our system.
237
+
238
241
+
242
+ § 4.2 REAL-WORLD EXPERIMENTS
243
+
244
+ We validate our framework with a kitting task in the real world and show qualitative results in Fig. 5. Our system consists of a UR5 arm, a customized suction gripper, and a wrist-mounted camera. We show that our method can successfully pick up a stainless steel ice sphere and place it into 10+ different cups configured with random translations and out-of-plane rotations. This task is challenging because (i) it includes objects with reflective or transparent materials, which makes this task not amenable to existing works that require depth sensors [9, 12, 14], and (ii) it requires out-of-plane action reasoning. The action-value functions are trained with 20 demonstrations using these cups. At the beginning of each pick-and-place, our system gathers 30 ${1280} \times {720}$ RGB images of the scene with the wrist-mounted camera. Each image's camera pose is derived from the robotic manipulator's end-effector pose and a calibrated transformation between the end-effector and the camera. This data collection procedure is advantageous as industrial robotic manipulators feature sub-millimeter repeatability, which provides accurate camera poses for building NeRF. In practice, we search through $V = {121}$ virtual views that uniformly cover the workspace and predict their affordances for optimizing actions. The optimization process currently takes around 2 seconds using a single NVIDIA RTX 2080 Ti GPU. This step can be straightforwardly accelerated by parallelizing the computations with multiple GPUs.
245
+
246
+ § 5 LIMITATIONS AND CONCLUSION
247
+
248
+ In terms of limitations, our system currently requires training a NeRF of the scene for each step of the manipulation. An instant-NGP [64] requires approximately 10 seconds to converge using a single NVIDIA RTX 2080 Ti GPU, and moving the robot arms around to collect 30 multi-view RGB images of the scene takes nearly 1 minute. We believe observing the scene with multiple mounted cameras or learning a prior over instant-NGP could drastically reduce the runtime. In the future, we plan to explore the usage of mental imagery for other robotics applications such as navigation and mobile manipulation.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/Bf6on28H0Jv/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,295 @@
1
+ # Masked World Models for Visual Control
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We introduce a visual model-based reinforcement learning (RL) framework that decouples visual representation learning and dynamics learning. For visual representation learning, we train an autoencoder with convolutional layers and vision transformers (ViT) to reconstruct pixels given masked convolutional features, inspired by the recent success of self-supervised learning with ViT and masked image modeling. Moreover, in order to encode task-relevant information, we introduce an auxiliary reward prediction objective for the autoencoder. For dynamics learning, we learn a latent dynamics model that operates on visual representations from the autoencoder. Our framework continually updates both the autoencoder and dynamics model using online samples collected from environment interaction. We demonstrate that our decoupling approach achieves state-of-the-art performance on a variety of visual robotic tasks from Meta-world and RLBench, e.g., we achieve ${81.7}\%$ success rate on 50 visual robotic manipulation tasks from Meta-world, while the baseline achieves 67.9%.
12
+
13
+ Keywords: Visual model-based RL, World models
14
+
15
+ ## 1 Introduction
16
+
17
+ Model-based reinforcement learning (RL) holds the promise of sample-efficient robot learning by learning a world model and leveraging it for planning $\left\lbrack {1,2,3}\right\rbrack$ or generating imaginary states for behavior learning $\left\lbrack {4,5}\right\rbrack$ . These approaches have also previously been applied to environments with visual observations, by learning an action-conditional video prediction model [6, 7] or a latent dynamics model that predicts compact representations in an abstract latent space $\left\lbrack {8,9}\right\rbrack$ . However, learning world models on environments with complex visual observations, e.g., modeling interactions between multiple small objects, as accurately as state-based world models still remains a challenge.
18
+
19
+ We argue that this difficulty comes from the design of current approaches that typically learn a single model to excel at learning both visual representations and dynamics [9, 10]. This imposes a trade-off between visual representation learning and dynamics learning, thus making it difficult to learn world models in environments where visual representation learning alone is already a challenging problem. In contrast, Ha and Schmidhuber [11] investigated the approach that separately trains a variational autoencoder (VAE) [12] and a dynamics model on top of the VAE features. By enabling us to leverage any visual representation learning and dynamics learning scheme, this approach has the potential to be a generic approach for learning world models. However, it is also limited in that VAE representations may not be amenable to dynamics learning $\left\lbrack {8,{10}}\right\rbrack$ or may not capture task-relevant information [11].
20
+
21
+ We present MWM: Masked World Models, a visual model-based RL framework that decouples visual representation learning and dynamics learning by learning a latent dynamics model on top of a self-supervised vision transformer (ViT) [13], inspired by the recent success of self-supervised learning methods with ViT and masked image modeling [14, 15]. Specifically, we separately update visual representations and dynamics by repeating the iterative processes of (i) training an autoencoder with convolutional feature masking and reward prediction, and (ii) learning a latent dynamics model that predicts visual representations from the autoencoder (see Figure 1).
22
+
23
+ ![01963f18-f084-74df-a1ab-3cf4fb2b6034_1_318_208_1152_428_0.jpg](images/01963f18-f084-74df-a1ab-3cf4fb2b6034_1_318_208_1152_428_0.jpg)
24
+
25
+ Figure 1: Illustration of our approach. We continually update visual representations and dynamics using online samples collected from environment interaction, by repeating iterative processes of training (Left) an autoencoder with convolutional feature masking and reward prediction and (Right) a latent dynamics model in the latent space of the autoencoder. We note that autoencoder parameters are not updated during dynamics learning.
26
+
27
+ ## Contributions
+
+ We highlight the contributions of our paper below:
28
+
29
+ - We demonstrate the effectiveness of decoupling visual representation learning and dynamics learning for visual model-based RL. MWM significantly outperforms a state-of-the-art model-based baseline [16] on various visual control tasks from Meta-world [17] and RLBench [18].
30
+
31
+ - We show that a self-supervised ViT trained to reconstruct visual observations with convolutional feature masking can be effective for visual model-based RL. Interestingly, we find that masking convolutional features can be more effective than pixel patch masking [14], by allowing for capturing fine-grained details within patches. This is in contrast to the observation in Touvron et al. [19], where both perform similarly on the ImageNet classification task [20].
32
+
33
+ - We show that an auxiliary reward prediction task can significantly improve performance by encoding task-relevant information into visual representations.
34
+
35
+ ## 2 Related Work
36
+
37
+ World models from visual observations There have been several approaches to learn visual representations for model-based RL via image reconstruction [6, 7, 8, 9, 10, 11, 16, 21, 22], e.g., learning a video prediction model [6] or a latent dynamics model [8, 9, 10]. This has been followed by a series of works that demonstrated the effectiveness of model-based approaches for solving video games [16, 23, 24] and visual robot control tasks [7, 22, 25, 26]. There have also been several works that consider different objectives, including bisimulation [27] and contrastive learning [28, 29, 30]. While most prior works introduce a single model that learns both visual representations and dynamics, we instead develop a framework that decouples visual representation learning and dynamics learning.
38
+
39
+ Self-supervised vision transformers Self-supervised learning with vision transformers (ViT) [13] has been actively studied. For instance, Chen et al. [31] introduced MoCo-v3 which trains a ViT with contrastive learning. Caron et al. [32] introduced DINO which utilizes a self-distillation loss [33], and demonstrated that self-supervised ViTs contain information about the semantic layout of images. Training self-supervised ViTs with masked image modeling [14, 15, 34, 35, 36, 37, 38] has also been successful. In particular, He et al. [14] proposed a masked autoencoder (MAE) that reconstructs masked pixel patches with an asymmetric encoder-decoder architecture. Unlike MAE, we propose to randomly mask features from early convolutional layers [39] instead of pixel patches and demonstrate that self-supervised ViTs can also be effective for visual model-based RL.
40
+
41
+ We provide more discussion on related works in more detail in Appendix C.
42
+
43
+ ![01963f18-f084-74df-a1ab-3cf4fb2b6034_2_315_216_1163_268_0.jpg](images/01963f18-f084-74df-a1ab-3cf4fb2b6034_2_315_216_1163_268_0.jpg)
44
+
45
+ Figure 2: Examples of visual observations used in our experiments. We consider a variety of visual robot control tasks from Meta-world [17], RLBench [18], and DeepMind Control Suite [40].
46
+
47
+ ## 3 Preliminaries
48
+
49
+ Problem formulation We formulate a visual control task as a partially observable Markov decision process (POMDP) [41], which is defined as a tuple $\left( {\mathcal{O},\mathcal{A}, p, r,\gamma }\right)$ . $\mathcal{O}$ is the observation space, $\mathcal{A}$ is the action space, $p\left( {{o}_{t} \mid {o}_{ < t},{a}_{ < t}}\right)$ is the transition dynamics, $r$ is the reward function that maps previous observations and actions to a reward ${r}_{t} = r\left( {{o}_{ \leq t},{a}_{ < t}}\right)$ , and $\gamma \in \lbrack 0,1)$ is the discount factor.
50
+
51
+ Dreamer Dreamer [16, 22] is a visual model-based RL method that learns world models from pixels and trains an actor-critic model via latent imagination. Specifically, Dreamer learns a Recurrent State Space Model (RSSM) [9], which consists of following four components:
52
+
53
+ $$
+ \begin{aligned}
+ \text{Representation model: } & {s}_{t} \sim {q}_{\theta }\left( {s}_{t} \mid {s}_{t - 1},{a}_{t - 1},{o}_{t}\right) & \quad \text{Image decoder: } & {\widehat{o}}_{t} \sim {p}_{\theta }\left( {\widehat{o}}_{t} \mid {s}_{t}\right) \\
+ \text{Transition model: } & {\widehat{s}}_{t} \sim {p}_{\theta }\left( {\widehat{s}}_{t} \mid {s}_{t - 1},{a}_{t - 1}\right) & \quad \text{Reward predictor: } & {\widehat{r}}_{t} \sim {p}_{\theta }\left( {\widehat{r}}_{t} \mid {s}_{t}\right)
+ \end{aligned} \tag{1}
+ $$
54
+
55
+ The representation model extracts model state ${s}_{t}$ from previous model state ${s}_{t - 1}$ , previous action ${a}_{t - 1}$ , and current observation ${o}_{t}$ . The transition model predicts future state ${\widehat{s}}_{t}$ without the access to current observation ${o}_{t}$ . The image decoder reconstructs raw pixels to provide learning signal, and the reward predictor enables us to compute rewards from future model states without decoding future frames. All model parameters $\theta$ are trained to jointly learn visual representations and environment dynamics by minimizing the negative variational lower bound [12]:
56
+
57
+ $$
+ \mathcal{L}\left( \theta \right) \doteq {\mathbb{E}}_{{q}_{\theta }\left( {s}_{1 : T} \mid {a}_{1 : T},{o}_{1 : T}\right) }\left\lbrack \mathop{\sum }\limits_{t = 1}^{T}\left( -\ln {p}_{\theta }\left( {o}_{t} \mid {s}_{t}\right) - \ln {p}_{\theta }\left( {r}_{t} \mid {s}_{t}\right) + \beta \operatorname{KL}\left\lbrack {q}_{\theta }\left( {s}_{t} \mid {s}_{t - 1},{a}_{t - 1},{o}_{t}\right) \parallel {p}_{\theta }\left( {\widehat{s}}_{t} \mid {s}_{t - 1},{a}_{t - 1}\right) \right\rbrack \right) \right\rbrack , \tag{2}
+ $$
64
+
65
+ where $\beta$ is a hyperparameter that controls the tradeoff between the quality of visual representation learning and the accuracy of dynamics learning [42]. Then, the critic is learned to regress the values computed from imaginary rollouts, and the actor is trained to maximize the values by propagating analytic gradients back through the transition model (see Appendix A for the details).
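For concreteness, the per-time-step loss in Eq. (2) combines a reconstruction term, a reward term, and a β-weighted KL between the posterior and the prior. The sketch below illustrates this with PyTorch distribution objects standing in for the model heads; it is a schematic reading of the objective, not the exact Dreamer implementation.

```python
# Hedged sketch of one time step of Eq. (2): reconstruction NLL + reward NLL + beta * KL.
import torch
import torch.distributions as D

def rssm_step_loss(obs, reward, decoder_dist, reward_dist, posterior, prior, beta=1.0):
    recon_nll = -decoder_dist.log_prob(obs).sum()       # -ln p(o_t | s_t)
    reward_nll = -reward_dist.log_prob(reward).sum()    # -ln p(r_t | s_t)
    kl = D.kl_divergence(posterior, prior).sum()        # KL[q(s_t | ...) || p(s_t | ...)]
    return recon_nll + reward_nll + beta * kl

# Toy usage with unit-variance Gaussian heads and diagonal-Gaussian latents (illustrative shapes).
obs, rew = torch.zeros(3, 64, 64), torch.tensor(1.0)
loss = rssm_step_loss(
    obs, rew,
    decoder_dist=D.Independent(D.Normal(torch.zeros_like(obs), 1.0), 3),
    reward_dist=D.Normal(torch.tensor(0.5), 1.0),
    posterior=D.Independent(D.Normal(torch.zeros(32), 1.0), 1),
    prior=D.Independent(D.Normal(torch.zeros(32), 1.0), 1),
)
print(loss)
```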
66
+
67
+ Masked autoencoder Masked autoencoder (MAE) [14] is a self-supervised visual representation learning technique that trains an autoencoder to reconstruct raw pixels given an input whose pixel patches are randomly masked. Following a scheme introduced in vision transformer (ViT) [13], the observation ${o}_{t} \in {\mathbb{R}}^{H \times W \times C}$ is processed with a patchify stem that reshapes ${o}_{t}$ into a sequence of 2D patches ${h}_{t} \in {\mathbb{R}}^{N \times \left( {{P}^{2}C}\right) }$ , where $P$ is the patch size and $N = {HW}/{P}^{2}$ is the number of patches. Then a subset of patches is randomly masked with a ratio of $m$ to construct ${h}_{t}^{m} \in {\mathbb{R}}^{M \times \left( {{P}^{2}C}\right) }$ .
68
+
69
+ $$
70
+ \text{Patchify stem:}{h}_{t} = {f}_{\phi }^{\text{patch }}\left( {o}_{t}\right) \;\text{Masking:}\;{h}_{t}^{m} \sim {p}^{\text{mask }}\left( {{h}_{t}^{m} \mid {h}_{t}, m}\right) \tag{3}
71
+ $$
72
+
73
+ A ViT encoder embeds only the remaining patches ${h}_{t}^{m}$ into $D$ -dimensional vectors, concatenates the embedded tokens with a learnable CLS token, and processes them through a series of Transformer layers [43]. Finally, a ViT decoder reconstructs the observation by processing tokens from the encoder and learnable mask tokens through Transformer layers followed by a linear output head:
74
+
75
+ $$
76
+ \text{ViT encoder:}{z}_{t}^{m} \sim {p}_{\phi }\left( {{z}_{t}^{m} \mid {h}_{t}^{m}}\right) \;\text{ViT decoder:}\;{\widehat{o}}_{t} \sim {p}_{\phi }\left( {{\widehat{o}}_{t} \mid {z}_{t}^{m}}\right) \tag{4}
77
+ $$
78
+
79
+ All the components parameterized by $\phi$ are jointly optimized to minimize the mean squared error (MSE) between the reconstructed and original pixel patches. MAE computes ${z}_{t}^{0}$ without masking, and utilizes its first component (i.e., the CLS representation) for downstream tasks (e.g., image classification).
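+
+ As a concrete illustration of the patchify-and-mask step in Equation 3, the sketch below implements random patch masking in PyTorch. The patch size, masking ratio, and function names are illustrative assumptions rather than the original MAE code.
+
+ ```python
+ import torch
+
+ def patchify(obs, patch_size):
+     """Reshape images (B, C, H, W) into patch sequences (B, N, P*P*C)."""
+     B, C, H, W = obs.shape
+     P = patch_size
+     x = obs.reshape(B, C, H // P, P, W // P, P)
+     x = x.permute(0, 2, 4, 3, 5, 1)                 # (B, H/P, W/P, P, P, C)
+     return x.reshape(B, (H // P) * (W // P), P * P * C)
+
+ def random_mask(tokens, mask_ratio):
+     """Keep a random subset of tokens per sample, discarding the rest."""
+     B, N, _ = tokens.shape
+     num_keep = int(N * (1 - mask_ratio))
+     keep_idx = torch.rand(B, N).argsort(dim=1)[:, :num_keep]
+     batch_idx = torch.arange(B).unsqueeze(1)
+     return tokens[batch_idx, keep_idx], keep_idx     # (B, M, P*P*C)
+
+ obs = torch.rand(4, 3, 64, 64)
+ patches = patchify(obs, patch_size=8)                      # (4, 64, 192)
+ visible, keep_idx = random_mask(patches, mask_ratio=0.75)  # (4, 16, 192)
+ ```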
80
+
81
+ ## 4 Method
82
+
83
+ In this section, we present Masked World Models (MWM), a visual model-based RL framework for learning accurate world models by separately learning visual representations and environment dynamics. Our method repeats (i) updating an autoencoder with convolutional feature masking and an auxiliary reward prediction task (see Section 4.1), (ii) learning a dynamics model in the latent space of the autoencoder (see Section 4.2), and (iii) collecting samples from environment interaction. We provide the overview and pseudocode of MWM in Figure 1 and Appendix D, respectively.
84
+
85
+ ### 4.1 Autoencoders with Convolutional Feature Masking and Reward Prediction
86
+
87
+ It has been observed that masked image modeling with a ViT architecture [14, 15, 35] enables compute-efficient and stable self-supervised visual representation learning. This motivates us to adopt this approach for visual model-based RL, but we find that masked image modeling with commonly used pixel patch masking [14] often makes it difficult to learn fine-grained details within patches, e.g., small objects (see Appendix B for a motivating example). While one can consider small-size patches, this would increase computational costs due to the quadratic complexity of self-attention layers.
88
+
89
+ To handle this issue, we instead propose to train an autoencoder that reconstructs raw pixels given randomly masked convolutional features. Unlike previous approaches that utilize a patchify stem and randomly mask pixel patches (see Section 3), we adopt a convolution stem [13, 39] that processes ${o}_{t}$ through a series of convolutional layers followed by a flatten layer, to obtain ${h}_{t}^{c} \in {\mathbb{R}}^{{N}_{c} \times D}$, where ${N}_{c}$ is the number of convolutional features. Then ${h}_{t}^{c}$ is randomly masked with a ratio of $m$ to obtain ${h}_{t}^{c, m} \in {\mathbb{R}}^{{M}_{c} \times D}$, and the ViT encoder and decoder process ${h}_{t}^{c, m}$ to reconstruct raw pixels.
90
+
91
+ $$
92
+ \text{Convolution stem:}{h}_{t}^{c} = {f}_{\phi }^{\text{conv }}\left( {o}_{t}\right) \;\text{Masking:}\;{h}_{t}^{c, m} \sim {p}^{\text{mask }}\left( {{h}_{t}^{c, m} \mid {h}_{t}^{c}, m}\right) \tag{5}
93
+ $$
94
+
95
+ $$
96
+ \text{ViT encoder:}\;{z}_{t}^{c, m} \sim {p}_{\phi }\left( {{z}_{t}^{c, m} \mid {h}_{t}^{c, m}}\right) \;\text{ViT decoder:}\;{\widehat{o}}_{t} \sim {p}_{\phi }\left( {{\widehat{o}}_{t} \mid {z}_{t}^{c, m}}\right)
97
+ $$
98
+
99
+ Because early convolutional layers mix low-level details, we find that our autoencoder can effectively reconstruct all the details within patches by learning to extract information from nearby non-masked features (see Figure 7 for examples). This enables us to learn visual representations capturing such details while also achieving the benefits of MAE, e.g., stability and compute-efficiency.
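+
+ For illustration, the sketch below instantiates a convolution stem and the feature-level masking of Equation 5. The layer sizes follow the implementation details reported in Section 5 (three stride-2 convolutions on a $64 \times 64 \times 3$ observation, followed by a linear projection); the channel widths, embedding dimension, and class name are assumptions, and the ViT encoder and decoder are omitted.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ConvStem(nn.Module):
+     """Three stride-2 convolutions plus a linear projection, turning a
+     64x64x3 observation into a grid of N_c = 8x8 = 64 feature tokens."""
+     def __init__(self, embed_dim=256):
+         super().__init__()
+         self.convs = nn.Sequential(
+             nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
+             nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
+             nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
+         )
+         self.proj = nn.Linear(128, embed_dim)
+
+     def forward(self, obs):                  # (B, 3, 64, 64)
+         f = self.convs(obs)                  # (B, 128, 8, 8)
+         f = f.flatten(2).transpose(1, 2)     # (B, 64, 128)
+         return self.proj(f)                  # (B, 64, embed_dim)
+
+ stem = ConvStem()
+ h_c = stem(torch.rand(2, 3, 64, 64))         # (2, 64, 256)
+
+ # Random feature masking with ratio m = 0.75: keep 25% of the 64 tokens.
+ B, N, _ = h_c.shape
+ keep_idx = torch.rand(B, N).argsort(dim=1)[:, :int(N * 0.25)]
+ h_c_masked = h_c[torch.arange(B).unsqueeze(1), keep_idx]   # (2, 16, 256)
+ ```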
100
+
101
+ Reward prediction In order to encode task-relevant information that might not be captured solely by the reconstruction objective, we introduce an auxiliary objective for the autoencoder to predict rewards jointly with pixels. Specifically, we make the autoencoder predict the reward ${r}_{t}$ from ${z}_{t}^{c, m}$ in conjunction with raw pixels.
102
+
103
+ $$
104
+ \text{ViT decoder with reward prediction:}{\widehat{o}}_{t},{\widehat{r}}_{t} \sim {p}_{\phi }\left( {{\widehat{o}}_{t},{\widehat{r}}_{t} \mid {z}_{t}^{c, m}}\right) \tag{6}
105
+ $$
106
+
107
+ In practice, we concatenate one additional learnable mask token to inputs of the ViT decoder, and utilize the corresponding output representation for predicting the reward with a linear output head.
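+
+ A minimal sketch of this reward head, assuming the ViT decoder operates on a sequence of $D$-dimensional tokens; the module and attribute names are hypothetical.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class RewardToken(nn.Module):
+     """Appends one learnable token to the decoder inputs and reads the
+     predicted reward off the corresponding output position."""
+     def __init__(self, dim=256):
+         super().__init__()
+         self.token = nn.Parameter(torch.zeros(1, 1, dim))
+         self.head = nn.Linear(dim, 1)
+
+     def append(self, decoder_inputs):               # (B, N, dim)
+         tok = self.token.expand(decoder_inputs.shape[0], -1, -1)
+         return torch.cat([decoder_inputs, tok], dim=1)   # (B, N + 1, dim)
+
+     def read(self, decoder_outputs):                # (B, N + 1, dim)
+         return self.head(decoder_outputs[:, -1])    # (B, 1) predicted reward
+
+ rt = RewardToken()
+ inputs = rt.append(torch.rand(2, 64, 256))          # (2, 65, 256)
+ r_hat = rt.read(inputs)  # here the inputs stand in for the decoder outputs
+ ```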
108
+
109
+ High masking ratio Introducing early convolutional layers might impede the masked reconstruction tasks because they propagate information across patches [19], and the model can exploit this to find a shortcut to solve reconstruction tasks. However, we find that a high masking ratio (i.e., 75%) can prevent the model from finding such shortcuts and induce useful representations (see Figure 6(b) for supporting experimental results). This also aligns with the observation from Touvron et al. [19], where masked image modeling [15] with a convolution stem [44] can achieve competitive performance with the patchify stem on the ImageNet classification task [20].
110
+
111
+ ![01963f18-f084-74df-a1ab-3cf4fb2b6034_4_316_208_1158_611_0.jpg](images/01963f18-f084-74df-a1ab-3cf4fb2b6034_4_316_208_1158_611_0.jpg)
112
+
113
+ Figure 3: Learning curves on six visual robotic manipulation tasks from Meta-world as measured on the success rate. We select the tasks that require modeling interactions between small objects and robot arms. Learning curves on 50 tasks are available in Appendix G. The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across five runs.
114
+
115
+ ### 4.2 World Models in Latent Space
116
+
117
+ Once we learn visual representations, we leverage them for efficiently learning a dynamics model in the latent space of the autoencoder. Specifically, we obtain the frozen representations ${z}_{t}^{c,0}$ from the autoencoder, and then train a variant of RSSM whose inputs and reconstruction targets are ${z}_{t}^{c,0}$, by replacing the representation model and the image decoder in Equation 1 with the following components:
118
+
119
+ $$
120
+ \text{Representation model:}\;{s}_{t} \sim {q}_{\theta }\left( {{s}_{t} \mid {s}_{t - 1},{a}_{t - 1},{z}_{t}^{c,0}}\right) \tag{7}
121
+ $$
122
+
123
+ $$
124
+ \text{Visual representation decoder:}{\widehat{z}}_{t}^{c,0} \sim {p}_{\theta }\left( {{\widehat{z}}_{t}^{c,0} \mid {s}_{t}}\right)
125
+ $$
126
+
127
+ Because visual representations capture both high- and low-level information in an abstract form, the model can focus more on dynamics learning by reconstructing them instead of raw pixels (see Section 5.5 for relevant discussion). Here, we also note that we utilize all the elements of ${z}_{t}^{c,0}$, unlike MAE, which only utilizes the CLS representation for downstream tasks. We empirically find that this enables the model to receive rich learning signals from reconstructing all the representations containing spatial information (see Appendix I for supporting experiments).
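+
+ The sketch below illustrates the change from Equation 1 to Equation 7: the dynamics model's decoder regresses the frozen autoencoder representations ${z}_{t}^{c,0}$ instead of pixels. The network sizes, the MSE objective, and the module name are assumptions for illustration only.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class RepresentationDecoder(nn.Module):
+     """Predicts the frozen ViT features z_t^{c,0} from the model state s_t."""
+     def __init__(self, state_dim=230, num_tokens=64, feat_dim=256):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Linear(state_dim, 512), nn.ELU(),
+             nn.Linear(512, num_tokens * feat_dim),
+         )
+         self.num_tokens, self.feat_dim = num_tokens, feat_dim
+
+     def forward(self, s):                          # (B, state_dim)
+         z_hat = self.net(s)
+         return z_hat.view(-1, self.num_tokens, self.feat_dim)
+
+ decoder = RepresentationDecoder()
+ s_t = torch.rand(8, 230)                           # posterior / imagined model state
+ z_target = torch.rand(8, 64, 256)                  # frozen autoencoder features
+ loss = nn.functional.mse_loss(decoder(s_t), z_target)   # replaces the pixel loss
+ ```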
128
+
129
+ ## 5 Experiments
130
+
131
+ We evaluate MWM on various robotics benchmarks, including Meta-world [17] (see Section 5.1), RLBench [18] (see Section 5.2), and DeepMind Control Suite [45] (see Section 5.3). We remark that these benchmarks consist of diverse and challenging visual robotic tasks. We also analyze algorithmic design choices in-depth (see Section 5.4) and provide a qualitative analysis of how our decoupling approach works by visualizing the predictions from the latent dynamics model (see Section 5.5).
132
+
133
+ Implementation We use visual observations of ${64} \times {64} \times 3$. For the convolution stem, we stack 3 convolution layers with a kernel size of 4 and a stride of 2, followed by a linear projection layer. We use a 4-layer ViT encoder and a 3-layer ViT decoder. We find that initializing the autoencoder with a warm-up schedule at the beginning of training is helpful. Unlike MAE, we compute the loss on entire pixels because we do not apply masking to pixels. For world models, we build our implementation on top of DreamerV2 [16]. To take a sequence of autoencoder representations as inputs, we replace the CNN encoder and decoder with a 2-layer Transformer encoder and decoder. We use the same hyperparameters within the same benchmark. More details are available in Appendix E.
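+
+ As a small sketch of the last point, a 2-layer Transformer can aggregate the per-frame autoencoder tokens into a single embedding for the RSSM in place of DreamerV2's CNN encoder. The dimensions and the mean-pooling step are assumptions, not the paper's exact design.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # Hypothetical stand-in for the CNN encoder of DreamerV2.
+ layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, dim_feedforward=512,
+                                    batch_first=True)
+ encoder = nn.TransformerEncoder(layer, num_layers=2)
+
+ z_tokens = torch.rand(8, 64, 256)          # frozen autoencoder features for one frame
+ embedding = encoder(z_tokens).mean(dim=1)  # (8, 256), fed to the representation model
+ ```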
134
+
135
+ ![01963f18-f084-74df-a1ab-3cf4fb2b6034_5_325_225_1146_345_0.jpg](images/01963f18-f084-74df-a1ab-3cf4fb2b6034_5_325_225_1146_345_0.jpg)
136
+
137
+ Figure 4: (a) Aggregate performance on all 50 Meta-world tasks. We normalize environment steps by maximum steps in each task. The solid line and shaded regions represent the mean and stratified bootstrap confidence intervals, respectively, across 250 runs. We report the learning curves on (b) Reach Target and (c) Push Button from RLBench. Performances are not directly comparable to previous results [46, 47] due to the difference in setups (see Section 5.2). The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across eight runs.
138
+
139
+ ### 5.1 Meta-world Experiments
140
+
141
+ Environment details In order to use a single camera viewpoint consistently over all 50 tasks, we use the modified corner2 camera viewpoint for all tasks. In our experiments, we classify the 50 tasks into easy, medium, hard, and very hard tasks, where experiments are run over ${500}\mathrm{\;K},1\mathrm{M},2\mathrm{M},3\mathrm{M}$ environment steps with an action repeat of 2, respectively. More details are available in Appendix F.
142
+
143
+ Results In Figure 3, we report the performance on a set of selected six challenging tasks that require agents to control robot arms to interact with small objects. We find that MWM significantly outperforms DreamerV2 in terms of both sample-efficiency and final performance. In particular, MWM achieves $> {80}\%$ success rate on Pick Place while DreamerV2 struggles to solve the task. These results show that our approach of separating visual representation learning and dynamics learning can learn accurate world models on challenging domains. Figure 4(a) shows the aggregate performance over all the 50 tasks from the benchmark, demonstrating that our method consistently outperforms DreamerV2 overall. We also provide learning curves on all individual tasks in Appendix G, where MWM consistently achieves similar or better performance on most tasks.
144
+
145
+ ### 5.2 RLBench Experiments
146
+
147
+ Environment details In order to evaluate our method on more challenging visual robotic manipulation tasks, we consider RLBench [18], which has previously acted as an effective proxy for real-robot performance [47]. Since RLBench consists of sparse-reward and challenging tasks, solving them typically requires expert demonstrations, specialized network architectures, additional inputs (e.g., point cloud and proprioceptive states), and an action mode that requires path planning [46, 47, 48, 49]. While we could utilize some of these components, we instead leave this as future work in order to maintain a consistent evaluation setup across multiple domains. In our experiments, we instead consider two relatively easy tasks with dense rewards, and utilize an action mode that specifies the delta of joint positions. We provide more details in Appendix F.
148
+
149
+ Results As shown in Figure 4(b) and Figure 4(c), we observe that our approach can also be effective on RLBench tasks, significantly outperforming DreamerV2. In particular, DreamerV2 achieves $< {20}\%$ success rate on Reach Target, while our approach can solve the tasks with $> {80}\%$ success rates. We find that this is because DreamerV2 fails to capture target positions in visual observations, while our method can capture such details (see Section 5.5 for relevant discussion and visualizations). However, we also note that these results are preliminary because they are still too sample-inefficient to be used for real-world scenarios. We provide more discussion in Section 6.
150
+
151
+ ![01963f18-f084-74df-a1ab-3cf4fb2b6034_6_316_208_1155_304_0.jpg](images/01963f18-f084-74df-a1ab-3cf4fb2b6034_6_316_208_1155_304_0.jpg)
152
+
153
+ Figure 5: Learning curves on three visual robot control tasks from DeepMind Control Suite as measured on the episode return. The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across eight runs.
154
+
155
+ ![01963f18-f084-74df-a1ab-3cf4fb2b6034_6_322_621_1146_336_0.jpg](images/01963f18-f084-74df-a1ab-3cf4fb2b6034_6_322_621_1146_336_0.jpg)
156
+
157
+ Figure 6: Learning curves on three manipulation tasks from Meta-world that investigate the effect of (a) convolutional feature masking, (b) masking ratio, and (c) reward prediction. The solid line and shaded regions represent the mean and stratified bootstrap confidence interval across 12 runs.
158
+
159
+ ### 5.3 DeepMind Control Suite Experiments
160
+
161
+ Environment details In order to demonstrate that our approach is generally applicable to diverse visual control tasks, we also evaluate our method on visual locomotion tasks from the widely used DeepMind Control Suite benchmark. Following a standard setup in Hafner et al. [22], we use an action repeat of 2 and default camera configurations. We provide more details in Appendix F.
162
+
163
+ Results Figure 5 shows that our method achieves performance competitive with DreamerV2 on visual locomotion tasks (i.e., Quadruped tasks), demonstrating the generality of our approach across diverse visual control tasks. We also observe that our method outperforms DreamerV2 on Reach Duplo, which is one of the few manipulation tasks in the benchmark (see Figure 2(e) for an example). This implies that our method is effective in environments where the model should capture fine-grained details such as object positions. More results are available in Appendix H, where trends are similar.
164
+
165
+ ### 5.4 Ablation Study
166
+
167
+ Convolutional feature masking We compare convolutional feature masking with pixel masking (i.e., MAE) in Figure 6(a), which shows that convolutional feature masking significantly outperforms pixel masking. This demonstrates that enabling the model to capture fine-grained details within patches can be important for visual control. We also report the performance with varying masking ratio $m \in \{ {0.25},{0.5},{0.75},{0.9}\}$ in Figure 6(b). As we discussed in Section 4.1, we find that $m = {0.75}$ achieves better performance than $m \in \{ {0.25},{0.5}\}$ because strong regularization can prevent the model from finding a shortcut from input pixels. However, we also find that too strong regularization (i.e., $m = {0.9}$ ) degrades the performance.
168
+
169
+ Reward prediction In Figure 6(c), we find that performance significantly degrades without reward prediction, which shows that the reconstruction objective might not be sufficient for learning task-relevant information. It would be an interesting future direction to develop a representation learning scheme that learns task-relevant information without rewards because they might not be available in practice. We provide more ablation studies and learning curves on individual tasks in Appendix I.
170
+
171
+ ![01963f18-f084-74df-a1ab-3cf4fb2b6034_7_313_206_1161_443_0.jpg](images/01963f18-f084-74df-a1ab-3cf4fb2b6034_7_313_206_1161_443_0.jpg)
172
+
173
+ Figure 7: Future frames reconstructed with the autoencoder (i.e., Recon) and predicted by latent dynamics models (i.e., Predicted). Predictions from our model capture the position of a red block, which is the target position a robot arm should reach, while predictions from Dreamer do not capture such details. In our predictions, the components that are not task-relevant are abstracted away (i.e., blue and orange blocks), though the autoencoder reconstructs them. This shows how our decoupling approach works: it encourages the autoencoder to capture all the details, and the dynamics model to focus on modeling task-relevant components. Best viewed as the video provided in Appendix B.
174
+
175
+ ### 5.5 Qualitative Analysis
176
+
177
+ We visually investigate how our world model works compared to the world model of DreamerV2. Specifically, we visualize the future frames predicted by latent dynamics models on Reach Target from RLBench in Figure 7. In this task, a robot arm should reach a target position specified by a red block in visual observations (see Figure 2(c)), which changes every trial. Thus it is crucial for the model to accurately predict the position of red blocks to solve the task. We find that our world model effectively captures the position of red blocks, while DreamerV2 fails. Interestingly, we also observe that our latent dynamics model ignores components that are not task-relevant, such as blue and orange blocks, even though the reconstructions from the autoencoder capture all the details. This shows how our decoupling approach works: it encourages the autoencoder to focus on learning representations capturing the details and the dynamics model to focus on modeling task-relevant components of environments. We provide more examples in Appendix B.
178
+
179
+ ## 6 Discussion
180
+
181
+ We have presented Masked World Models (MWM), which is a visual model-based RL framework that decouples visual representation learning and dynamics learning. By learning a latent dynamics model operating in the latent space of a self-supervised ViT, we find that our approach allows for solving a variety of visual control tasks from Meta-world, RLBench, and DeepMind Control Suite.
182
+
183
+ Limitation Despite the results, there are a number of areas for improvement. As we have shown in Figure 6(c), the performance of our approach heavily depends on the auxiliary reward prediction task. This might be because our autoencoder is not learning temporal information, which is crucial for learning task-relevant information. It would be interesting to investigate the performance of video representation learning with ViTs $\left\lbrack {{35},{50}}\right\rbrack$ . It would also be interesting to study introducing auxiliary prediction for other modalities, such as audio. Another weakness is that our model operates only on RGB pixels from a single camera viewpoint; we look forward to future work that incorporates different input modalities such as proprioceptive states and point clouds, building on top of recent multi-modal learning approaches $\left\lbrack {{51},{52}}\right\rbrack$ . Finally, our approach trains behaviors from scratch, which makes it still too sample-inefficient to be used in real-world scenarios. Leveraging a small number of demonstrations, incorporating the action mode with path planning [46], or pre-training a world model on video datasets [53] are directions we hope to investigate in future work.
184
+
185
+ References
186
+
187
+ [1] K. Chua, R. Calandra, R. McAllister, and S. Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in neural information processing systems, 2018.
188
+
189
+ [2] M. Deisenroth and C. E. Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In International Conference on Machine Learning, 2011.
190
+
191
+ [3] I. Lenz, R. A. Knepper, and A. Saxena. Deepmpc: Learning deep latent features for model predictive control. In Robotics: Science and Systems, 2015.
192
+
193
+ [4] T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-ensemble trust-region policy optimization. In International Conference on Learning Representations, 2018.
194
+
195
+ [5] M. Janner, J. Fu, M. Zhang, and S. Levine. When to trust your model: Model-based policy optimization. Advances in Neural Information Processing Systems, 2019.
196
+
197
+ [6] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017.
198
+
199
+ [7] F. Ebert, C. Finn, S. Dasari, A. Xie, A. Lee, and S. Levine. Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018.
200
+
201
+ [8] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in neural information processing systems, 2015.
202
+
203
+ [9] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, 2019.
204
+
205
+ [10] M. Zhang, S. Vikram, L. Smith, P. Abbeel, M. Johnson, and S. Levine. Solar: Deep structured representations for model-based reinforcement learning. In International Conference on Machine Learning, 2019.
206
+
207
+ [11] D. Ha and J. Schmidhuber. World models. In Advances in Neural Information Processing Systems, 2018.
208
+
209
+ [12] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.
210
+
211
+ [13] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representaitons, 2021.
212
+
213
+ [14] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021.
214
+
215
+ [15] H. Bao, L. Dong, and F. Wei. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021.
216
+
217
+ [16] D. Hafner, T. Lillicrap, M. Norouzi, and J. Ba. Mastering atari with discrete world models. In International Conference on Learning Representations, 2021.
218
+
219
+ [17] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, 2020.
220
+
221
+ [18] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. RLBench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 2020.
222
+
223
+ [19] H. Touvron, M. Cord, A. El-Nouby, J. Verbeek, and H. Jégou. Three things everyone should know about vision transformers. arXiv preprint arXiv:2203.09795, 2022.
224
+
225
+ [20] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, 2009.
226
+
227
+ [21] C. Finn, X. Y. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel. Deep spatial autoencoders for visuomotor learning. In 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016.
228
+
229
+ [22] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. In International Conference on Learning Representations, 2020.
230
+
231
+ [23] W. Ye, S. Liu, T. Kurutach, P. Abbeel, and Y. Gao. Mastering atari games with limited data. Advances in Neural Information Processing Systems, 2021.
232
+
233
+ [24] L. Kaiser, M. Babaeizadeh, P. Milos, B. Osinski, R. H. Campbell, K. Czechowski, D. Erhan, C. Finn, P. Kozakowski, S. Levine, et al. Model-based reinforcement learning for atari. In International Conference on Learning Representations, 2019.
234
+
235
+ [25] T. Seyde, W. Schwarting, S. Karaman, and D. Rus. Learning to plan optimistically: Uncertainty-guided deep exploration via latent model ensembles. In Conference on Robot Learning, 2021.
236
+
237
+ [26] O. Rybkin, C. Zhu, A. Nagabandi, K. Daniilidis, I. Mordatch, and S. Levine. Model-based reinforcement learning via latent-space collocation. In International Conference on Machine Learning, 2021.
238
+
239
+ [27] C. Gelada, S. Kumar, J. Buckman, O. Nachum, and M. G. Bellemare. Deepmdp: Learning continuous latent space models for representation learning. In International Conference on Machine Learning, 2019.
240
+
241
+ [28] T. D. Nguyen, R. Shu, T. Pham, H. Bui, and S. Ermon. Temporal predictive coding for model-based planning in latent space. In International Conference on Machine Learning, 2021.
242
+
243
+ [29] M. Okada and T. Taniguchi. Dreaming: Model-based reinforcement learning by latent imagination without reconstruction. In 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021.
244
+
245
+ [30] F. Deng, I. Jang, and S. Ahn. Dreamerpro: Reconstruction-free model-based reinforcement learning with prototypical representations. arXiv preprint arXiv:2110.14565, 2021.
246
+
247
+ [31] X. Chen, S. Xie, and K. He. An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
248
+
249
+ [32] M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski, and A. Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
250
+
251
+ [33] G. Hinton, O. Vinyals, J. Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
252
+
253
+ [34] Z. Li, Z. Chen, F. Yang, W. Li, Y. Zhu, C. Zhao, R. Deng, L. Wu, R. Zhao, M. Tang, et al. Mst: Masked self-supervised transformer for visual representation. In Advances in Neural Information Processing Systems, 2021.
254
+
255
+ [35] C. Feichtenhofer, H. Fan, Y. Li, and K. He. Masked autoencoders as spatiotemporal learners. arXiv preprint arXiv:2205.09113, 2022.
256
+
257
+ [36] Z. Xie, Z. Zhang, Y. Cao, Y. Lin, J. Bao, Z. Yao, Q. Dai, and H. Hu. Simmim: A simple framework for masked image modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
258
+
259
+ [37] C. Wei, H. Fan, S. Xie, C.-Y. Wu, A. Yuille, and C. Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
260
+
261
+ [38] J. Zhou, C. Wei, H. Wang, W. Shen, C. Xie, A. Yuille, and T. Kong. ibot: Image bert pre-training with online tokenizer. arXiv preprint arXiv:2111.07832, 2021.
262
+
263
+ [39] T. Xiao, M. Singh, E. Mintun, T. Darrell, P. Dollár, and R. Girshick. Early convolutions help transformers see better. In Advances in Neural Information Processing Systems, 2021.
264
+
265
+ [40] Y. Tassa, S. Tunyasuvunakool, A. Muldal, Y. Doron, S. Liu, S. Bohez, J. Merel, T. Erez, T. Lillicrap, and N. Heess. dm_control: Software and tasks for continuous control. arXiv preprint arXiv:2006.12983, 2020.
266
+
267
+ [41] R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT Press, 2018.
268
+
269
+ [42] A. Alemi, B. Poole, I. Fischer, J. Dillon, R. A. Saurous, and K. Murphy. Fixing a broken elbo. In International Conference on Machine Learning, 2018.
270
+
271
+ [43] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
272
+
273
+ [44] B. Graham, A. El-Nouby, H. Touvron, P. Stock, A. Joulin, H. Jégou, and M. Douze. Levit: a vision transformer in convnet's clothing for faster inference. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
274
+
275
+ [45] Y. Tassa, T. Erez, and E. Todorov. Synthesis and stabilization of complex behaviors through online trajectory optimization. In International Conference on Intelligent Robots and Systems, 2012.
276
+
277
+ [46] S. James and A. J. Davison. Q-attention: Enabling efficient learning for vision-based robotic manipulation. IEEE Robotics and Automation Letters, 2022.
278
+
279
+ [47] S. James, K. Wada, T. Laidlow, and A. J. Davison. Coarse-to-Fine Q-attention: Efficient learning for visual robotic manipulation via discretisation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
280
+
281
+ [48] S. James and P. Abbeel. Coarse-to-Fine Q-attention with Learned Path Ranking. arXiv preprint arXiv:2204.01571, 2022.
282
+
283
+ [49] S. James and P. Abbeel. Coarse-to-Fine Q-attention with Tree Expansion. arXiv preprint arXiv:2204.12471, 2022.
284
+
285
+ [50] A. Arnab, M. Dehghani, G. Heigold, C. Sun, M. Lučić, and C. Schmid. Vivit: A video vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
286
+
287
+ [51] X. Geng, H. Liu, L. Lee, D. Schuurmans, S. Levine, and P. Abbeel. Multimodal masked autoencoders learn transferable representations. arXiv preprint arXiv:2205.14204, 2022.
288
+
289
+ [52] R. Bachmann, D. Mizrahi, A. Atanov, and A. Zamir. Multimae: Multi-modal multi-task masked autoencoders. arXiv preprint arXiv:2204.01678, 2022.
290
+
291
+ [53] Y. Seo, K. Lee, S. James, and P. Abbeel. Reinforcement learning with action-free pre-training from videos. In International Conference on Machine Learning, 2022.
292
+
293
+ [54] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
294
+
295
+ [55] Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/Bf6on28H0Jv/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,183 @@
1
+ § MASKED WORLD MODELS FOR VISUAL CONTROL
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We introduce a visual model-based reinforcement learning (RL) framework that decouples visual representation learning and dynamics learning. For visual representation learning, we train an autoencoder with convolutional layers and vision transformers (ViT) to reconstruct pixels given masked convolutional features, inspired by the recent success of self-supervised learning with ViT and masked image modeling. Moreover, in order to encode task-relevant information, we introduce an auxiliary reward prediction objective for the autoencoder. For dynamics learning, we learn a latent dynamics model that operates on visual representations from the autoencoder. Our framework continually updates both the autoencoder and dynamics model using online samples collected from environment interaction. We demonstrate that our decoupling approach achieves state-of-the-art performance on a variety of visual robotic tasks from Meta-world and RLBench, e.g., we achieve ${81.7}\%$ success rate on 50 visual robotic manipulation tasks from Meta-world, while the baseline achieves 67.9%.
12
+
13
+ Keywords: Visual model-based RL, World models
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Model-based reinforcement learning (RL) holds the promise of sample-efficient robot learning by learning a world model and leveraging it for planning $\left\lbrack {1,2,3}\right\rbrack$ or generating imaginary states for behavior learning $\left\lbrack {4,5}\right\rbrack$ . These approaches have also previously been applied to environments with visual observations, by learning an action-conditional video prediction model [6, 7] or a latent dynamics model that predicts compact representations in an abstract latent space $\left\lbrack {8,9}\right\rbrack$ . However, learning world models on environments with complex visual observations, e.g., modeling interactions between multiple small objects, as accurately as state-based world models still remains a challenge.
18
+
19
+ We argue that this difficulty comes from the design of current approaches that typically learn a single model to excel at learning both visual representations and dynamics [9, 10]. This imposes a trade-off between visual representation learning and dynamics learning, thus making it difficult to learn world models in environments where visual representation learning alone is already a challenging problem. In contrast, Ha and Schmidhuber [11] investigated the approach that separately trains a variational autoencoder (VAE) [12] and a dynamics model on top of the VAE features. By enabling us to leverage any visual representation learning and dynamics learning scheme, this approach has the potential to be a generic approach for learning world models. However, it is also limited in that VAE representations may not be amenable to dynamics learning $\left\lbrack {8,{10}}\right\rbrack$ or may not capture task-relevant information [11].
20
+
21
+ We present MWM: Masked World Models, a visual model-based RL framework that decouples visual representation learning and dynamics learning by learning a latent dynamics model on top of a self-supervised vision transformer (ViT) [13], inspired by the recent success of self-supervised learning methods with ViT and masked image modeling [14, 15]. Specifically, we separately update visual representations and dynamics by repeating the iterative processes of (i) training an autoencoder with convolutional feature masking and reward prediction, and (ii) learning a latent dynamics model that predicts visual representations from the autoencoder (see Figure 1).
22
+
23
+ < g r a p h i c s >
24
+
25
+ Figure 1: Illustration of our approach. We continually update visual representations and dynamics using online samples collected from environment interaction, by repeating iterative processes of training (Left) an autoencoder with convolutional feature masking and reward prediction and (Right) a latent dynamics model in the latent space of the autoencoder. We note that autoencoder parameters are not updated during dynamics learning.
26
+
27
+ Contributions We highlight the contributions of our paper below:
28
+
29
+ * We demonstrate the effectiveness of decoupling visual representation learning and dynamics learning for visual model-based RL. MWM significantly outperforms a state-of-the-art model-based baseline [16] on various visual control tasks from Meta-world [17] and RLBench [18].
30
+
31
+ * We show that a self-supervised ViT trained to reconstruct visual observations with convolutional feature masking can be effective for visual model-based RL. Interestingly, we find that masking convolutional features can be more effective than pixel patch masking [14], by allowing for capturing fine-grained details within patches. This is in contrast to the observation in Touvron et al. [19], where both perform similarly on the ImageNet classification task [20].
32
+
33
+ * We show that an auxiliary reward prediction task can significantly improve performance by encoding task-relevant information into visual representations.
34
+
35
+ § 2 RELATED WORK
36
+
37
+ World models from visual observations There have been several approaches to learning visual representations for model-based RL via image reconstruction [6,7,8,9,10,11,16,21,22], e.g., learning a video prediction model [6] or a latent dynamics model $\left\lbrack {8,9,{10}}\right\rbrack$ . This has been followed by a series of works that demonstrated the effectiveness of model-based approaches for solving video games $\left\lbrack {{16},{23},{24}}\right\rbrack$ and visual robot control tasks $\left\lbrack {7,{22},{25},{26}}\right\rbrack$ . There have also been several works that considered different objectives, including bisimulation [27] and contrastive learning [28, 29,30]. While most prior works introduce a single model that learns both visual representations and dynamics, we instead develop a framework that decouples visual representation learning and dynamics learning.
38
+
39
+ Self-supervised vision transformers Self-supervised learning with vision transformers (ViT) [13] has been actively studied. For instance, Chen et al. [31] introduced MoCo-v3 which trains a ViT with contrastive learning. Caron et al. [32] introduced DINO which utilizes a self-distillation loss [33], and demonstrated that self-supervised ViTs contain information about the semantic layout of images. Training self-supervised ViTs with masked image modeling [14, 15, 34, 35, 36, 37, 38] has also been successful. In particular, He et al. [14] proposed a masked autoencoder (MAE) that reconstructs masked pixel patches with an asymmetric encoder-decoder architecture. Unlike MAE, we propose to randomly mask features from early convolutional layers [39] instead of pixel patches and demonstrate that self-supervised ViTs can also be effective for visual model-based RL.
40
+
41
+ We discuss related work in more detail in Appendix C.
42
+
43
+ < g r a p h i c s >
44
+
45
+ Figure 2: Examples of visual observations used in our experiments. We consider a variety of visual robot control tasks from Meta-world [17], RLBench [18], and DeepMind Control Suite [40].
46
+
47
+ § 3 PRELIMINARIES
48
+
49
+ Problem formulation We formulate a visual control task as a partially observable Markov decision process (POMDP) [41], which is defined as a tuple $\left( {\mathcal{O},\mathcal{A},p,r,\gamma }\right)$. $\mathcal{O}$ is the observation space, $\mathcal{A}$ is the action space, $p\left( {{o}_{t} \mid {o}_{ < t},{a}_{ < t}}\right)$ is the transition dynamics, $r$ is the reward function that maps previous observations and actions to a reward ${r}_{t} = r\left( {{o}_{ \leq t},{a}_{ < t}}\right)$, and $\gamma \in \lbrack 0,1)$ is the discount factor.
50
+
51
+ Dreamer Dreamer [16, 22] is a visual model-based RL method that learns world models from pixels and trains an actor-critic model via latent imagination. Specifically, Dreamer learns a Recurrent State Space Model (RSSM) [9], which consists of following four components:
52
+
53
+ Representation model: ${s}_{t} \sim {q}_{\theta }\left( {{s}_{t} \mid {s}_{t - 1},{a}_{t - 1},{o}_{t}}\right) \;$ Image decoder: $\;{\widehat{o}}_{t} \sim {p}_{\theta }\left( {{\widehat{o}}_{t} \mid {s}_{t}}\right) \;$ Transition model: $\;{\widehat{s}}_{t} \sim {p}_{\theta }\left( {{\widehat{s}}_{t} \mid {s}_{t - 1},{a}_{t - 1}}\right) \;$ Reward predictor: $\;{\widehat{r}}_{t} \sim {p}_{\theta }\left( {{\widehat{r}}_{t} \mid {s}_{t}}\right)$ (1)
54
+
55
+ The representation model extracts the model state ${s}_{t}$ from the previous model state ${s}_{t - 1}$, previous action ${a}_{t - 1}$, and current observation ${o}_{t}$. The transition model predicts the future state ${\widehat{s}}_{t}$ without access to the current observation ${o}_{t}$. The image decoder reconstructs raw pixels to provide a learning signal, and the reward predictor enables us to compute rewards from future model states without decoding future frames. All model parameters $\theta$ are trained to jointly learn visual representations and environment dynamics by minimizing the negative variational lower bound [12]:
56
+
57
+ $$
+ \mathcal{L}\left( \theta \right) \doteq {\mathbb{E}}_{{q}_{\theta }\left( {{s}_{1 : T} \mid {a}_{1 : T},{o}_{1 : T}}\right) }\left\lbrack {\mathop{\sum }\limits_{{t = 1}}^{T}\left( {-\ln {p}_{\theta }\left( {{o}_{t} \mid {s}_{t}}\right) - \ln {p}_{\theta }\left( {{r}_{t} \mid {s}_{t}}\right) + \beta \operatorname{KL}\left\lbrack {{q}_{\theta }\left( {{s}_{t} \mid {s}_{t - 1},{a}_{t - 1},{o}_{t}}\right) \parallel {p}_{\theta }\left( {{\widehat{s}}_{t} \mid {s}_{t - 1},{a}_{t - 1}}\right) }\right\rbrack }\right) }\right\rbrack \tag{2}
+ $$
64
+
65
+ where $\beta$ is a hyperparameter that controls the tradeoff between the quality of visual representation learning and the accuracy of dynamics learning [42]. Then, the critic is trained to regress the values computed from imaginary rollouts, and the actor is trained to maximize the values by propagating analytic gradients back through the transition model (see Appendix A for details).
66
+
67
+ Masked autoencoder Masked autoencoder (MAE) [14] is a self-supervised visual representation learning technique that trains an autoencoder to reconstruct raw pixels from randomly masked pixel patches. Following the scheme introduced in the vision transformer (ViT) [13], the observation ${o}_{t} \in {\mathbb{R}}^{H \times W \times C}$ is processed with a patchify stem that reshapes ${o}_{t}$ into a sequence of 2D patches ${h}_{t} \in {\mathbb{R}}^{N \times \left( {{P}^{2}C}\right) }$, where $P$ is the patch size and $N = {HW}/{P}^{2}$ is the number of patches. Then a subset of patches is randomly masked with a ratio of $m$ to construct ${h}_{t}^{m} \in {\mathbb{R}}^{M \times \left( {{P}^{2}C}\right) }$.
68
+
69
+ $$
70
+ \text{ Patchify stem: }{h}_{t} = {f}_{\phi }^{\text{ patch }}\left( {o}_{t}\right) \;\text{ Masking: }\;{h}_{t}^{m} \sim {p}^{\text{ mask }}\left( {{h}_{t}^{m} \mid {h}_{t},m}\right) \tag{3}
71
+ $$
72
+
73
+ A ViT encoder embeds only the remaining patches ${h}_{t}^{m}$ into $D$ -dimensional vectors, concatenates the embedded tokens with a learnable CLS token, and processes them through a series of Transformer layers [43]. Finally, a ViT decoder reconstructs the observation by processing tokens from the encoder and learnable mask tokens through Transformer layers followed by a linear output head:
74
+
75
+ $$
76
+ \text{ ViT encoder: }{z}_{t}^{m} \sim {p}_{\phi }\left( {{z}_{t}^{m} \mid {h}_{t}^{m}}\right) \;\text{ ViT decoder: }\;{\widehat{o}}_{t} \sim {p}_{\phi }\left( {{\widehat{o}}_{t} \mid {z}_{t}^{m}}\right) \tag{4}
77
+ $$
78
+
79
+ All the components parameterized by $\phi$ are jointly optimized to minimize the mean squared error (MSE) between the reconstructed and original pixel patches. MAE computes ${z}_{t}^{0}$ without masking, and utilizes its first component (i.e., the CLS representation) for downstream tasks (e.g., image classification).
80
+
81
+ § 4 METHOD
82
+
83
+ In this section, we present Masked World Models (MWM), a visual model-based RL framework for learning accurate world models by separately learning visual representations and environment dynamics. Our method repeats (i) updating an autoencoder with convolutional feature masking and an auxiliary reward prediction task (see Section 4.1), (ii) learning a dynamics model in the latent space of the autoencoder (see Section 4.2), and (iii) collecting samples from environment interaction. We provide the overview and pseudocode of MWM in Figure 1 and Appendix D, respectively.
84
+
85
+ § 4.1 AUTOENCODERS WITH CONVOLUTIONAL FEATURE MASKING AND REWARD PREDICTION
86
+
87
+ It has been observed that masked image modeling with a ViT architecture [14, 15, 35] enables compute-efficient and stable self-supervised visual representation learning. This motivates us to adopt this approach for visual model-based RL, but we find that masked image modeling with commonly used pixel patch masking [14] often makes it difficult to learn fine-grained details within patches, e.g., small objects (see Appendix B for a motivating example). While one can consider small-size patches, this would increase computational costs due to the quadratic complexity of self-attention layers.
88
+
89
+ To handle this issue, we instead propose to train an autoencoder that reconstructs raw pixels given randomly masked convolutional features. Unlike previous approaches that utilize a patchify stem and randomly mask pixel patches (see Section 3), we adopt a convolution stem [13, 39] that processes ${o}_{t}$ through a series of convolutional layers followed by a flatten layer, to obtain ${h}_{t}^{c} \in {\mathbb{R}}^{{N}_{c} \times D}$ where ${N}_{c}$ is the number of convolutional features. Then ${h}_{t}^{c}$ is randomly masked with a ratio of $m$ to obtain ${h}_{t}^{c,m} \in {\mathbb{R}}^{{M}_{c} \times D}$ , and ViT encoder and decoder process ${h}_{t}^{c,m}$ to reconstruct raw pixels.
90
+
91
+ $$
92
+ \text{ Convolution stem: }{h}_{t}^{c} = {f}_{\phi }^{\text{ conv }}\left( {o}_{t}\right) \;\text{ Masking: }\;{h}_{t}^{c,m} \sim {p}^{\text{ mask }}\left( {{h}_{t}^{c,m} \mid {h}_{t}^{c},m}\right) \tag{5}
93
+ $$
94
+
95
+ $$
96
+ \text{ ViT encoder: }\;{z}_{t}^{c,m} \sim {p}_{\phi }\left( {{z}_{t}^{c,m} \mid {h}_{t}^{c,m}}\right) \;\text{ ViT decoder: }\;{\widehat{o}}_{t} \sim {p}_{\phi }\left( {{\widehat{o}}_{t} \mid {z}_{t}^{c,m}}\right)
97
+ $$
98
+
99
+ Because early convolutional layers mix low-level details, we find that our autoencoder can effectively reconstruct all the details within patches by learning to extract information from nearby non-masked features (see Figure 7 for examples). This enables us to learn visual representations capturing such details while also achieving the benefits of MAE, e.g., stability and compute-efficiency.
100
+
101
+ Reward prediction In order to encode task-relevant information that might not be captured solely by the reconstruction objective, we introduce an auxiliary objective for the autoencoder to predict rewards jointly with pixels. Specifically, we make the autoencoder predict the reward ${r}_{t}$ from ${z}_{t}^{c,m}$ in conjunction with raw pixels.
102
+
103
+ $$
104
+ \text{ ViT decoder with reward prediction: }{\widehat{o}}_{t},{\widehat{r}}_{t} \sim {p}_{\phi }\left( {{\widehat{o}}_{t},{\widehat{r}}_{t} \mid {z}_{t}^{c,m}}\right) \tag{6}
105
+ $$
106
+
107
+ In practice, we concatenate one additional learnable mask token to inputs of the ViT decoder, and utilize the corresponding output representation for predicting the reward with a linear output head.
108
+
109
+ High masking ratio Introducing early convolutional layers might impede the masked reconstruction tasks because they propagate information across patches [19], and the model can exploit this to find a shortcut to solve reconstruction tasks. However, we find that a high masking ratio (i.e., 75%) can prevent the model from finding such shortcuts and induce useful representations (see Figure 6(b) for supporting experimental results). This also aligns with the observation from Touvron et al. [19], where masked image modeling [15] with a convolution stem [44] can achieve competitive performance with the patchify stem on the ImageNet classification task [20].
110
+
111
+ < g r a p h i c s >
112
+
113
+ Figure 3: Learning curves on six visual robotic manipulation tasks from Meta-world as measured on the success rate. We select the tasks that require modeling interactions between small objects and robot arms. Learning curves on 50 tasks are available in Appendix G. The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across five runs.
114
+
115
+ § 4.2 WORLD MODELS IN LATENT SPACE
116
+
117
+ Once we learn visual representations, we leverage them for efficiently learning a dynamics model in the latent space of the autoencoder. Specifically, we obtain the frozen representations ${z}_{t}^{c,0}$ from the autoencoder, and then train a variant of RSSM whose inputs and reconstruction targets are ${z}_{t}^{c,0}$, by replacing the representation model and the image decoder in Equation 1 with the following components:
118
+
119
+ $$
120
+ \text{ Representation model: }\;{s}_{t} \sim {q}_{\theta }\left( {{s}_{t} \mid {s}_{t - 1},{a}_{t - 1},{z}_{t}^{c,0}}\right) \tag{7}
121
+ $$
122
+
123
+ $$
124
+ \text{ Visual representation decoder: }{\widehat{z}}_{t}^{c,0} \sim {p}_{\theta }\left( {{\widehat{z}}_{t}^{c,0} \mid {s}_{t}}\right)
125
+ $$
126
+
127
+ Because visual representations capture both high- and low-level information in an abstract form, the model can focus more on dynamics learning by reconstructing them instead of raw pixels (see Section 5.5 for relevant discussion). Here, we also note that we utilize all the elements of ${z}_{t}^{c,0}$ unlike MAE that only utilizes CLS representation for downstream tasks. We empirically find this enables the model to receive rich learning signals from reconstructing all the representations containing spatial information (see Appendix I for supporting experiments).
128
+
129
+ § 5 EXPERIMENTS
130
+
131
+ We evaluate MWM on various robotics benchmarks, including Meta-world [17] (see Section 5.1), RLBench [18] (see Section 5.2), and DeepMind Control Suite [45] (see Section 5.3). We remark that these benchmarks consist of diverse and challenging visual robotic tasks. We also analyze algorithmic design choices in-depth (see Section 5.4) and provide a qualitative analysis of how our decoupling approach works by visualizing the predictions from the latent dynamics model (see Section 5.5).
132
+
133
+ Implementation We use visual observations of ${64} \times {64} \times 3$. For the convolution stem, we stack 3 convolution layers with a kernel size of 4 and a stride of 2, followed by a linear projection layer. We use a 4-layer ViT encoder and a 3-layer ViT decoder. We find that initializing the autoencoder with a warm-up schedule at the beginning of training is helpful. Unlike MAE, we compute the loss on entire pixels because we do not apply masking to pixels. For world models, we build our implementation on top of DreamerV2 [16]. To take a sequence of autoencoder representations as inputs, we replace the CNN encoder and decoder with a 2-layer Transformer encoder and decoder. We use the same hyperparameters within the same benchmark. More details are available in Appendix E.
134
+
135
+ < g r a p h i c s >
136
+
137
+ Figure 4: (a) Aggregate performance on all 50 Meta-world tasks. We normalize environment steps by maximum steps in each task. The solid line and shaded regions represent the mean and stratified bootstrap confidence intervals, respectively, across 250 runs. We report the learning curves on (b) Reach Target and (c) Push Button from RLBench. Performances are not directly comparable to previous results [46, 47] due to the difference in setups (see Section 5.2). The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across eight runs.
138
+
139
+ § 5.1 META-WORLD EXPERIMENTS
140
+
141
+ Environment details In order to use a single camera viewpoint consistently over all 50 tasks, we use the modified corner2 camera viewpoint for all tasks. In our experiments, we classify the 50 tasks into easy, medium, hard, and very hard tasks, where experiments are run over ${500}\mathrm{\;K},1\mathrm{M},2\mathrm{M},3\mathrm{M}$ environment steps with an action repeat of 2, respectively. More details are available in Appendix F.
142
+
143
+ Results In Figure 3, we report the performance on a set of selected six challenging tasks that require agents to control robot arms to interact with small objects. We find that MWM significantly outperforms DreamerV2 in terms of both sample-efficiency and final performance. In particular, MWM achieves $> {80}\%$ success rate on Pick Place while DreamerV2 struggles to solve the task. These results show that our approach of separating visual representation learning and dynamics learning can learn accurate world models on challenging domains. Figure 4(a) shows the aggregate performance over all the 50 tasks from the benchmark, demonstrating that our method consistently outperforms DreamerV2 overall. We also provide learning curves on all individual tasks in Appendix G, where MWM consistently achieves similar or better performance on most tasks.
144
+
145
+ § 5.2 RLBENCH EXPERIMENTS
146
+
147
+ Environment details In order to evaluate our method on more challenging visual robotic manipulation tasks, we consider RLBench [18], which has previously acted as an effective proxy for real-robot performance [47]. Since RLBench consists of sparse-reward and challenging tasks, solving them typically requires expert demonstrations, specialized network architectures, additional inputs (e.g., point cloud and proprioceptive states), and an action mode that requires path planning [46, 47, 48, 49]. While we could utilize some of these components, we instead leave this as future work in order to maintain a consistent evaluation setup across multiple domains. In our experiments, we instead consider two relatively easy tasks with dense rewards, and utilize an action mode that specifies the delta of joint positions. We provide more details in Appendix F.
148
+
149
+ Results As shown in Figure 4(b) and Figure 4(c), we observe that our approach can also be effective on RLBench tasks, significantly outperforming DreamerV2. In particular, DreamerV2 achieves $< {20}\%$ success rate on Reach Target, while our approach can solve the tasks with $> {80}\%$ success rates. We find that this is because DreamerV2 fails to capture target positions in visual observations, while our method can capture such details (see Section 5.5 for relevant discussion and visualizations). However, we also note that these results are preliminary because they are still too sample-inefficient to be used for real-world scenarios. We provide more discussion in Section 6.
150
+
151
+ < g r a p h i c s >
152
+
153
+ Figure 5: Learning curves on three visual robot control tasks from DeepMind Control Suite as measured on the episode return. The solid line and shaded regions represent the mean and bootstrap confidence intervals, respectively, across eight runs.
154
+
155
+ < g r a p h i c s >
156
+
157
+ Figure 6: Learning curves on three manipulation tasks from Meta-world that investigate the effect of (a) convolutional feature masking, (b) masking ratio, and (c) reward prediction. The solid line and shaded regions represent the mean and stratified bootstrap confidence interval across 12 runs.
158
+
159
+ § 5.3 DEEPMIND CONTROL SUITE EXPERIMENTS
160
+
161
+ Environment details In order to demonstrate that our approach is generally applicable to diverse visual control tasks, we also evaluate our method on visual locomotion tasks from the widely used DeepMind Control Suite benchmark. Following a standard setup in Hafner et al. [22], we use an action repeat of 2 and default camera configurations. We provide more details in Appendix F.
162
+
163
+ Results Figure 5 shows that our method achieves competitive performance to DreamerV2 on visual locomotion tasks (i.e., Quadruped tasks), demonstrating the generality of our approach across diverse visual control tasks. We also observe that our method outperforms DreamerV2 on Reach Duplo, which is one of a few manipulation tasks in the benchmark (see Figure 2(e) for an example). This implies that our method is effective on environments where the model should capture fine-grained details like object positions. More results are available in Appendix H, where trends are similar.
164
+
165
+ § 5.4 ABLATION STUDY
166
+
167
+ Convolutional feature masking We compare convolutional feature masking with pixel masking (i.e., MAE) in Figure 6(a), which shows that convolutional feature masking significantly outperforms pixel masking. This demonstrates that enabling the model to capture fine-grained details within patches can be important for visual control. We also report the performance with varying masking ratio $m \in \{ {0.25},{0.5},{0.75},{0.9}\}$ in Figure 6(b). As we discussed in Section 4.1, we find that $m = {0.75}$ achieves better performance than $m \in \{ {0.25},{0.5}\}$ because strong regularization can prevent the model from finding a shortcut from input pixels. However, we also find that too strong regularization (i.e., $m = {0.9}$ ) degrades the performance.
168
+
169
+ Reward prediction In Figure 6(c), we find that performance significantly degrades without reward prediction, which shows that the reconstruction objective might not be sufficient for learning task-relevant information. It would be an interesting future direction to develop a representation learning scheme that learns task-relevant information without rewards because they might not be available in practice. We provide more ablation studies and learning curves on individual tasks in Appendix I.
170
+
171
+ < g r a p h i c s >
172
+
173
+ Figure 7: Future frames reconstructed with the autoencoder (i.e., Recon) and predicted by latent dynamics models (i.e., Predicted). Predictions from our model capture the position of a red block, which is a target position a robot arm should reach, but predictions from Dreamer are not capturing such details. In our predictions, the components that are not task-relevant are abstracted away (i.e., blue and orange blocks), though the autoencoder reconstructs them. This shows how our decoupling approach works: it encourages the autoencoder to capture all the details, and the dynamics model to focus on modeling task-relevant components. Best viewed as video provided in Appendix B.
174
+
175
+ § 5.5 QUALITATIVE ANALYSIS
176
+
177
+ We visually investigate how our world model works compared to the world model of DreamerV2. Specifically, we visualize the future frames predicted by latent dynamics models on Reach Target from RLBench in Figure 7. In this task, a robot arm should reach a target position specified by a red block in visual observations (see Figure 2(c)), which changes every trial. Thus it is crucial for the model to accurately predict the position of the red block for solving the task. We find that our world model effectively captures the position of red blocks, while DreamerV2 fails. Interestingly, we also observe that our latent dynamics model ignores components that are not task-relevant, such as blue and orange blocks, even though the reconstructions from the autoencoder capture all the details. This shows how our decoupling approach works: it encourages the autoencoder to focus on learning representations that capture the details, and the dynamics model to focus on modeling task-relevant components of environments. We provide more examples in Appendix B.
178
+
179
+ § 6 DISCUSSION
180
+
181
+ We have presented Masked World Models (MWM), which is a visual model-based RL framework that decouples visual representation learning and dynamics learning. By learning a latent dynamics model operating in the latent space of a self-supervised ViT, we find that our approach allows for solving a variety of visual control tasks from Meta-world, RLBench, and DeepMind Control Suite.
182
+
183
+ Limitation Despite the results, there are a number of areas for improvement. As we have shown in Figure 6(c), the performance of our approach heavily depends on the auxiliary reward prediction task. This might be because our autoencoder does not learn temporal information, which is crucial for learning task-relevant information. It would be interesting to investigate the performance of video representation learning with ViTs $\left\lbrack {{35},{50}}\right\rbrack$ . It would also be interesting to study auxiliary prediction for other modalities, such as audio. Another weakness is that our model operates only on RGB pixels from a single camera viewpoint; we look forward to future work that incorporates different input modalities, such as proprioceptive states and point clouds, building on top of recent multi-modal learning approaches $\left\lbrack {{51},{52}}\right\rbrack$ . Finally, our approach trains behaviors from scratch, which makes it still too sample-inefficient to be used in real-world scenarios. Leveraging a small number of demonstrations, incorporating the action mode with path planning [46], or pre-training a world model on video datasets [53] are directions we hope to investigate in future work.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/BxHcg_Zlpxj/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,245 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Rethinking Sim2Real: Lower Fidelity Simulation Leads to Higher Sim2Real Transfer in Navigation
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: If we want to train robots in simulation before deploying them in reality, it seems natural and almost self-evident to presume that reducing the sim2real gap involves creating simulators of increasing fidelity (since reality is what it is). We challenge this assumption and present a contrary hypothesis - sim2real transfer of robots may be improved with lower (not higher) fidelity simulation. We conduct a systematic large-scale evaluation of this hypothesis on the problem of visual navigation - in the real world, and on 2 different simulators (Habitat and iGibson) using 3 different robots (A1, AlienGo, Spot). Our results show that, contrary to expectation, adding fidelity does not help with learning; performance is poor due to slow simulation speed (preventing large-scale learning) and overfitting to inaccuracies in simulation physics. Instead, building simple models of the robot motion using real-world data can improve learning and generalization.
12
+
13
+ Keywords: Sim2Real, Deep Reinforcement Learning, Visual-Based Navigation
14
+
15
+ ## 1 Introduction
16
+
17
+ The sim2real paradigm consists of training robots in simulation (potentially for billions of simulation steps corresponding to decades of experience [1]) before deploying them in reality. The last few years have seen significant investments - the development of new simulators [2-12], curation and annotation of 3D scans and assets [13-15], and development of techniques for overcoming the sim2real gap [16-19] - resulting in a number of successful demonstrations of sim2real transfer [20- 25]. However, no simulator is a perfect replica of reality and the main challenge in this paradigm is overcoming the sim2real gap, defined as the drop in a robot's performance in the real-world (compared to simulation). It seems natural and almost self-evident to presume that reducing this sim2real gap involves creating simulators of increasing physics fidelity, and this sometimes forms the default operating hypothesis of the field.
18
+
19
+ We challenge this convention and present a counter-intuitive idea - sim2real transfer of robots may be improved not by increasing but by decreasing simulation fidelity. Specifically, we propose that instead of training robots entirely in simulation, we use classical ideas from hierarchical robot control [26] to decompose the policy into a 'high-level policy' (that is trained solely in simulation) and a 'low-level controller' (that is designed entirely on hardware and may even be a black-box controller shipped by a manufacturer). This decomposition means that the simulator does not need to model low-level dynamics, which can save both simulation time (since there is no need to simulate expensive low-level controllers), and developer time spent building and designing these controllers.
20
+
21
+ We conduct a systematic large-scale evaluation of our hypothesis on the task of PointGoal (visual) Navigation [27] in unknown environments - using 2 simulators (Habitat and Gibson) and 3 different robots (A1, AlienGo, Spot). We train policies using two physics fidelities - kinematic and dynamic. Kinematic simulation uses abstracted physics and 'teleports' the robot to the next state using Euler integration; kinematic policies command robot center-of-mass (CoM) linear and angular velocities.
22
+
23
+ ![01963f33-b2e5-7c40-b731-07ee4fc5575a_1_371_205_1055_745_0.jpg](images/01963f33-b2e5-7c40-b731-07ee4fc5575a_1_371_205_1055_745_0.jpg)
24
+
25
+ Figure 1: Left: We train visual navigation policies at two levels of fidelity - kinematic and dynamic. In kinematic control (top), the robot is 'teleported' to the next state using Euler integration. In dynamic control (bottom), the robot's velocity commands are converted to leg joint-torques and rigid-body physics is simulated at 240Hz. Right: We evaluate the kinematic and dynamic trained policies in simulation (top) and the real-world (bottom) across 5 identical episodes. The kinematic policy achieves a ${100}\%$ success rate in all 5 episodes, and the robot takes similar paths in both simulation and the real-world. On the other hand, the dynamic policy achieves a 20-60% success rate, and the trajectories taken in simulation and the real-world do not correlate, pointing towards a larger sim2real gap.
26
+
27
+ Dynamic simulation consists of rigid-body mechanics and simulates contact dynamics (via Bullet [12]); dynamic policies command CoM linear and angular velocities, which are converted to robot joint-torques by a low-level controller operating at ${240}\mathrm{\;{Hz}}$ . We find that across all robots, a kinematically trained policy outperforms dynamic policies, even when evaluated using dynamic simulation and control. Additionally, we show that the trained kinematic policy can be transferred to a real Spot robot, which ships with manufacturer-provided 'black-box' low-level controllers that cannot be accurately simulated. In contrast, dynamic policies fail to achieve efficient navigation behavior on Spot, due to the sim2real gap and less simulation experience.
28
+
29
+ The reasons for these improvements are perhaps unsurprising in hindsight - learning-based methods overfit to simulators, and present-day physics simulators have approximations and imperfections that do not transfer to the real-world. A second equally significant mechanism is also in play - lower fidelity simulation is typically faster, enabling policies to be trained with more experience under a fixed wall-clock budget; although the kinematic and dynamic policies were trained with the same compute for the same amount of time, the kinematic policy was able to learn from ${20} \times$ as much data as the dynamic policy. While our results are presented on legged locomotion and visual navigation, the underlying principle - of architecting hierarchical policies and only training the high-level policy in an abstracted simulation - is broadly applicable. We hope that our work leads to a rethink in how the research community pursues sim2real and in how we develop the simulators of tomorrow. Specifically, our findings suggest that instead of investing in higher-fidelity physics, the field should prioritize simulation speed for tasks that can be represented with abstract action spaces.
30
+
31
+ ## 2 Related Work
32
+
33
+ Visual Navigation. Recent works have shown that large-scale indoor environments and simulators like Habitat $\left\lbrack {2,3}\right\rbrack$ and iGibson $\left\lbrack 4\right\rbrack$ can enable end-to-end learning of navigation policies from large amounts of agent- or expert-generated data [28-30] on simple, wheeled systems. This is in contrast to the typical mapping and planning paradigm used in classical robotics, which can suffer when the quality of maps is low [31] or requires expensive equipment like LiDAR [32]. In this work, we show that such end-to-end learning is also possible for complex, legged robots.
34
+
35
+ Sim2real for Legged Robots. Sim2real quadrupedal locomotion has been widely studied in the past several decades [22, 33-36], with most learning low-level skills in simulation and transferring them to hardware [37], or adapting them online to reduce the sim2real gap [38, 39]. However, these policies are typically blind, and use only proprioceptive sensors on the robot to determine actions $\left\lbrack {{23},{25},{40}}\right\rbrack$ . In contrast, an autonomous robot needs to respond to its environment, and take visual input into account. Some works have proposed learning visual policies in simulation and applying them to the real-world $\left\lbrack {{22},{24},{41}}\right\rbrack$ . These works use learned or hand-designed physically simulated low-level controllers; we show that physics simulations can be detrimental to learning high-performing sim2real policies, even for complex legged robots.
36
+
37
+ Abstracted Task-space Learning. Abstracted (hierarchical, high-level) action spaces are common in robotics literature. Examples include task and motion planning for manipulation [42-44], legged locomotion [45, 46], navigation [20], etc. Several works reason over symbolic actions like pick and place, or hierarchical policies with discrete/continuous attributes [33-35, 47, 48], or even abstracted dynamics models [36]. While the ideas of abstracted/hierarchical policies are fairly common, typically both the high- and low-level policies are learned in simulation and transferred to reality $\left\lbrack {{33},{34},{36}}\right\rbrack$ , often augmented with techniques like domain randomization [37] and real-world adaptation [18]. Instead, we use an abstracted simulator, which does not model low-level physics, and learn high-level policies that are transferred to the real-world in a zero-shot manner.
38
+
39
+ ## 3 Experimental Setup
40
+
41
+ Task: PointGoal Navigation. In the task of PointGoal Navigation [27], a robot is initialized in an unknown environment and is tasked with navigating to a goal coordinate without access to a pre-built map of the environment. The goal is specified relative to the robot's starting location for the episode (i.e.,"go to $\Delta \mathrm{x},\Delta \mathrm{y}$ "). The robot has access to an egocentric depth sensor and an egomotion sensor (sometimes referred to as GPS+Compass in this literature) from which the robot derives the goal location relative to its current pose. An episode is considered successful when the robot reaches the goal position within a success radius (typically half of the robot's body length). The robot operates within constraints of a maximum number of steps per episode (150 for Spot) and velocity limits ( $\pm {0.5}$ $\mathrm{m}/\mathrm{s}$ for linear and $\pm {0.3}\mathrm{{rad}}/\mathrm{s}$ for angular velocities on Spot). We linearly scale the linear and angular velocity limits for A1 and Aliengo to be proportional to the length of each robot's leg, and inversely scale the maximum number of steps allowed. In effect, smaller robots have a smaller maximum allowed velocity to improve stability during execution, but are allowed more steps to reach the goal. The exact parameters used for each robot are described in the appendix. For evaluation, we report the success rate (SR) and Success weighted by Path Length (SPL) [27], which measures the efficiency of the trajectory taken with respect to the ground-truth shortest path.
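+
+ For reference, a minimal sketch of the two reported metrics, following the definitions in [27], is shown below; the variable names are illustrative, and the per-episode success flags, shortest-path lengths, and taken-path lengths are assumed to be collected by the evaluation harness.
+
+ ```python
+ def success_rate(successes):
+     """Fraction of episodes in which the robot stops within the success radius."""
+     return sum(successes) / len(successes)
+
+ def spl(successes, shortest_paths, taken_paths):
+     """Success weighted by Path Length [27]: mean of S_i * l_i / max(p_i, l_i)."""
+     terms = [float(s) * l / max(p, l)
+              for s, l, p in zip(successes, shortest_paths, taken_paths)]
+     return sum(terms) / len(terms)
+ ```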
42
+
43
+ Robot Platforms. We study visual navigation for 3 quadrupedal robots - A1 and Aliengo from Unitree [49], and Spot from Boston Dynamics (BD) [50] in simulation. In the real-world, we show sim2real transfer of the learned navigation policies to Spot. To have a consistent camera setup across all the robots, we attach an Intel RealSense D435 camera to Spot in the real-world, and use this camera for visual inputs to the policy. In our hardware experiments, we want to measure how often our sim2real policies lead to collisions without jeopardizing safety. We achieve this balance as follows: the BD collision-avoidance capability is kept turned on, set to trigger at a tight threshold of ${0.10}\mathrm{\;m}$ . Next, we track the number of times the robot comes within ${0.20}\mathrm{\;m}$ of any obstacle (as measured by any of the 5 onboard depth cameras). This gap (between ${0.20}\mathrm{\;m}$ and ${0.10}\mathrm{\;m}$ ) allows us to record possible collisions while preventing actual ones. While the BD API allows for high-level navigation without access to a map, it cannot navigate around obstacles autonomously without a map.
44
+
45
+ In our work, we consider complex, long-range navigation paths (up to ${30}\mathrm{\;m}$ ) in cluttered environments with many obstacles; the goals are unreachable with just the BD navigation API.
46
+
47
+ ![01963f33-b2e5-7c40-b731-07ee4fc5575a_3_299_304_1186_248_0.jpg](images/01963f33-b2e5-7c40-b731-07ee4fc5575a_3_299_304_1186_248_0.jpg)
48
+
49
+ Figure 2: Robots used for training and evaluation.
50
+
51
+ Simulation Environments. We use two simulation platforms - Habitat [2,3] and iGibson [4] for training and evaluation. Both simulators support rendering of photorealistic environments; Habitat uses a low-level (C++) integration with the Bullet physics engine [12], while Gibson leverages Py-Bullet, the Python-based integration of Bullet. Thus, while the underlying physics engines between the two are the same, Habitat runs $\sim {1200}\%$ faster than Gibson [3]. This allows us to train policies faster with Habitat than with Gibson even when using identical policies and compute.
52
+
53
+ Dataset. For training and evaluation, we use a combination of the Habitat-Matterport (HM3D) [13] and Gibson [51] 3D datasets. The two datasets combined consist of over 1,000 high-resolution 3D scans of real-world indoor environments and contain realistic clutter. We generate training and evaluation episodes compatible with our robots for the HM3D and Gibson scenes following the procedure described in [2]. Specifically, we restrict the geodesic distance between the start and goal positions to be between 1 and ${30}\mathrm{\;m}$ , and increase navigation complexity by rejecting paths that are near-straight lines with few obstacles. As described in [2], both of these heuristics result in complex but navigable paths. Additionally, we check for collisions along the sampled paths using the URDF of the largest robot (Spot) to ensure that all paths are navigable.
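+
+ The episode-filtering heuristics can be sketched as below; the exact straightness test is not spelled out in this section, so the geodesic-to-Euclidean ratio check and the 1.1 threshold are illustrative assumptions rather than the values we used.
+
+ ```python
+ import math
+
+ def keep_episode(geodesic_dist, start_xy, goal_xy,
+                  min_d=1.0, max_d=30.0, min_ratio=1.1):
+     """Return True if a sampled (start, goal) pair yields a valid training episode."""
+     if not (min_d <= geodesic_dist <= max_d):
+         return False                          # enforce 1-30 m geodesic distance
+     euclid = math.dist(start_xy, goal_xy)
+     if euclid > 0 and geodesic_dist / euclid < min_ratio:
+         return False                          # reject near-straight, obstacle-free paths
+     return True                               # collision check along the path is done separately
+ ```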
54
+
55
+ Real-World Test Environment. The real-world evaluation environment, LAB, is a ${325}{\mathrm{\;m}}^{2}$ lobby in a commercial office building. The lobby contains furniture such as couches, cushions, bookshelves and tables. We specify a set of 5 waypoints as the start and end locations for the navigation episodes in LAB with an average episode length of ${10}\mathrm{\;m}$ . We match the furniture layout to the position captured in the 3D scan (Figure 3) to run identical evaluation experiments in both simulation and the real-world. The scan of LAB is not part of training.
56
+
57
+ ![01963f33-b2e5-7c40-b731-07ee4fc5575a_3_938_1158_502_321_0.jpg](images/01963f33-b2e5-7c40-b731-07ee4fc5575a_3_938_1158_502_321_0.jpg)
58
+
59
+ Figure 3: The real-world testing environment is a part of a large commercial building and contains clutter from furniture such as tables, bookshelves, and couches.
60
+
61
+ ## 4 Kinematic and Dynamic Control for Visual Navigation
62
+
63
+ As illustrated in Figure 4, our proposed approach is hierarchical, with (1) a high-level visual navigation policy that commands desired center of mass (CoM) motion at $1\mathrm{\;{Hz}}$ , and (2) a low-level controller that follows this desired motion. We consider controllers at two levels of abstraction - 'kinematic' and 'dynamic'. The kinematic controller simply integrates the desired velocity and outputs a CoM position at $1\mathrm{\;{Hz}}$ ; kinematic simulation then teleports the robot to the desired state. The dynamic controller uses a low-level controller that commands joint torques at ${240}\mathrm{\;{Hz}}$ ; dynamic simulation models rigid-body and contact dynamics via Bullet (with a physics step-size of $1/{240}\mathrm{{sec}}$ ). We provide details of all three of these pieces (high-level policy, kinematic and dynamic controllers) next.
64
+
65
+ High-level Visual Navigation Policies. The high-level policy takes as input an egocentric depth image, and the goal location relative to the robot's current pose. The output of the policy is a 3-dimensional vector, representing the desired CoM forward, lateral, and angular velocities $\left( {{V}_{x},{V}_{y},\omega }\right)$ . The neural network architecture consists of a ResNet-18 visual encoder and a 2-layer LSTM policy. Using a recurrent policy allows the policy to learn temporal dependencies through the hidden state. The final layer of the policy parameterizes a Gaussian action distribution from which the action is sampled. The policy is trained using DD-PPO [1], a distributed reinforcement learning method, in both the Habitat and Gibson simulators. Our reward function is derived from [22], with an added penalty for backward velocities, which can lead to collisions and hurt performance.
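+
+ A minimal PyTorch sketch of this architecture is given below; the hidden sizes, the way the goal vector is fused, and the single-timestep rollout interface are our illustrative assumptions, not the exact network.
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torchvision
+ from torch.distributions import Normal
+
+ class NavPolicy(nn.Module):
+     def __init__(self, hidden=512):
+         super().__init__()
+         backbone = torchvision.models.resnet18()
+         backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel depth
+         backbone.fc = nn.Identity()                       # expose the 512-d visual feature
+         self.encoder = backbone
+         self.lstm = nn.LSTM(512 + 2, hidden, num_layers=2, batch_first=True)
+         self.mu = nn.Linear(hidden, 3)                    # mean of (Vx, Vy, omega)
+         self.log_std = nn.Parameter(torch.zeros(3))       # state-independent log std
+
+     def forward(self, depth, goal, state=None):
+         # depth: [B, 1, H, W] egocentric depth; goal: [B, 2] goal relative to current pose
+         feat = self.encoder(depth)                        # [B, 512]
+         x = torch.cat([feat, goal], dim=-1).unsqueeze(1)  # single timestep: [B, 1, 514]
+         out, state = self.lstm(x, state)
+         dist = Normal(self.mu(out[:, -1]), self.log_std.exp())
+         return dist, state                                # sample an action with dist.sample()
+ ```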
66
+
67
+ ![01963f33-b2e5-7c40-b731-07ee4fc5575a_4_368_200_1067_386_0.jpg](images/01963f33-b2e5-7c40-b731-07ee4fc5575a_4_368_200_1067_386_0.jpg)
68
+
69
+ Figure 4: Our architecture for PointGoal Navigation on a legged robot. A high-level visual navigation policy predicts CoM linear and angular velocities. The velocities are passed into either a kinematic or dynamic low-level controller to step the robot in simulation. In the real-world, we directly send the velocity commands from the high-level policy to the robot, which uses the low-level controller from Boston Dynamics for movement.
70
+
71
+ Kinematic Control and Simulation. In kinematic control, the final state of the robot is calculated by integrating the desired CoM velocity commanded by the high-level navigation policy at $1\mathrm{\;{Hz}}$ . The robot is directly moved to the desired pose, without running a physics simulation. In both Habitat and iGibson, the robot is kept in place if moving to the new desired state would result in a collision.
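+
+ Concretely, a kinematic step can be sketched as below; the planar SE(2) state and the simulator-provided collision check are illustrative assumptions rather than the exact simulator interface.
+
+ ```python
+ import math
+
+ def kinematic_step(x, y, yaw, vx, vy, omega, dt=1.0, collides=lambda x, y, yaw: False):
+     """Euler-integrate body-frame CoM velocities (vx forward, vy lateral, omega yaw rate) for dt seconds."""
+     nx = x + (vx * math.cos(yaw) - vy * math.sin(yaw)) * dt
+     ny = y + (vx * math.sin(yaw) + vy * math.cos(yaw)) * dt
+     nyaw = yaw + omega * dt
+     if collides(nx, ny, nyaw):
+         return x, y, yaw       # keep the robot in place if the target pose collides
+     return nx, ny, nyaw        # otherwise 'teleport' the robot to the integrated pose
+ ```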
72
+
73
+ The objective of kinematic control is to abstract away the low-level physics interactions between the robot and its environment. This has two advantages: (1) it avoids the need to accurately model low-level controllers, especially for closed-source robots like Spot; (2) it enables faster simulation by avoiding high-frequency physics integration, which is conducive to model-free RL that requires large amounts of experience. On the other hand, teleporting the robot to the desired state might remove necessary dynamics, such as poor tracking by low-level controllers. In Section 5, we propose how to incorporate such low-level characteristics into a kinematic simulation using real-world data.
74
+
75
+ Dynamic Control in Simulation and Hardware. We experiment with two different low-level dynamic controllers for quadruped robots. The first is an expert-designed Raibert-style controller from [22], which consists of a footstep generator and an inverse kinematics solver that commands desired joint angles from CoM velocities. The joint angles are converted to joint torques using a linear feedback controller, and applied to the simulation. This controller was shown to achieve sim2real transfer for A1 [22]. However, on other robots in our experiments, it shows relatively poor tracking of high-level commands. Thus, we also experiment with another model-predictive control (MPC) dynamic controller from [38], which commands joint torques directly. This controller has been applied to the real-world A1 robot [52, 53] and shows better tracking of desired velocities for our test robots, as compared to the Raibert controller from [22]. However, MPC is prohibitively slow and cannot be used for training RL policies. Thus, we use Raibert for training dynamic policies, but evaluate using MPC. ${}^{1}$ This difference in training and evaluation dynamic controllers has multiple purposes: (1) the evaluation using MPC improves performance of most policies, including dynamic policies, due to its better ability to track high-level commands; (2) the difference between the two dynamic controllers in simulation is also a proxy for the difference between our low-level controllers and closed-source controllers from Spot. If a dynamic policy cannot transfer from Raibert to MPC, it has a low chance of transferring to Spot, which has black-box BD controllers, or to other robots in the real-world.
76
+
77
+ Both dynamic controllers model the low-level physics interactions between the robot and the environment. This makes them considerably slower than the kinematic controller, making training RL policies challenging.
78
+
79
+ ---
80
+
81
+ ${}^{1}$ Evaluation using Raibert [22] can be found in the appendix.
82
+
83
+ ---
84
+
85
+ Moreover, for Spot, the low-level controller implementation is not openly available, making it hard to reproduce. Our experiments in Section 5 show that the added fidelity of dynamic controllers does not benefit policy learning or sim2real transfer.
86
+
87
+ ## 5 Results and Analysis
88
+
89
+ In this section, we first study generalization of visual navigation policies across simulators (trained in one sim, tested in another) and across controllers (trained with one controller, tested with another). This shows the importance of fast simulation for learning high-level policies by comparing performance of kinematic and dynamic policies trained for the same wall-clock time. Next, we examine the performance of the different policies at zero-shot sim2real transfer on the Spot robot.
90
+
91
+ How large is the sim2sim gap? High for dynamic, and low for kinematic policies. We exhaustively study the combinatorial space of experiments - policies trained under 2 training conditions (with kinematic and dynamic simulation) $\times 2$ evaluation conditions (kinematic and dynamic simulation) $\times 2$ simulators (Habitat and Gibson) $\times 3$ robots (A1, Aliengo, Spot). For each condition, we train and report results with 3 random seeds. Each policy is trained using 8 GPUs for 3 days, resulting in a cumulative training budget of 6,912 GPU-hours (288 GPU-days). The average success rates are presented in Figure 5. Rows represent the evaluation conditions as tuples (simulator, fidelity), while columns represent the training conditions. We evaluate all policies across 1,100 episodes from 110 unique scenes in the HM3D + Gibson validation split.
92
+
93
+ ![01963f33-b2e5-7c40-b731-07ee4fc5575a_5_294_988_1190_347_0.jpg](images/01963f33-b2e5-7c40-b731-07ee4fc5575a_5_294_988_1190_347_0.jpg)
94
+
95
+ Figure 5: Average success rates for sim2sim and kinematic2dynamic transfer for A1, Aliengo and Spot. We see that the kinematic trained policies perform the best overall (red quadrants), and also often outperform the dynamic trained policies, even when evaluated using dynamic control (green quadrants vs. orange quadrants).
96
+
97
+ We make two key observations here:
98
+
99
+ 1. Kinematic-trained policies perform best overall, for all robots. In all cases, kinematic policies outperform the dynamic trained policies, even when evaluated using dynamic control, e.g. 62.1% SR for A1 in (Gibson, Kinematic) vs. 24.2% SR in (Gibson, Dynamic), Fig. 5, left. This is a surprising result because the kinematic policies are being evaluated in an out-of-distribution setting, which was never seen or accounted for during training. On the other hand, the dynamic policies are being evaluated in the domain that they were trained in, hence do not require control-related generalization.
100
+
101
+ 2. Dynamic policies are not robust to different dynamic simulations. The dynamic policies from the two simulations observe significant performance drops when evaluated in the other dynamic simulation. This points to the dynamic policies overfitting to the simulator dynamics during training and failing to generalize to a new setting; see e.g. column 3, rows 3 and 4: 49.8% SR for A1 in (Habitat, Dynamic) vs. 22.3% SR in (Gibson, Dynamic). (Gibson, Dynamic) shows poor performance in both Gibson and Habitat, with slightly poorer performance in Habitat. This sensitivity to simulation makes training dynamic policies difficult, especially when the controller for the real-world robot is unknown. Even if the real-world controller is known, simulation physics and the real world differ, and sim2real transfer of the learned policy can suffer (as evidenced by low sim2sim transfer). On the other hand, kinematic policies, which have been trained with no physics, can generalize to the different dynamic controllers. Both of these results show that kinematic-trained policies not only learn the task well, but also learn to reason without overfitting to simulation physics, making their chances of successful sim2real transfer high.
102
+
103
+ Why do kinematic-trained policies outperform dynamic ones? Scale. We plot the evaluation performance of both policies in Habitat kinematic and dynamic simulation in Figure 6. While both policies are trained for the same amount of wall-clock time (3 days, 8 GPUs), we see that kinematic training is much faster than training dynamically (right, Fig. 6); with kinematic training, the robot is able to learn from approximately ${20} \times$ more steps of experience ( ${500}\mathrm{M}$ steps vs. ${25}\mathrm{M}$ steps). This increased experience allows the kinematic policies to learn intelligent high-level reasoning.
104
+
105
+ ![01963f33-b2e5-7c40-b731-07ee4fc5575a_6_295_543_1191_395_0.jpg](images/01963f33-b2e5-7c40-b731-07ee4fc5575a_6_295_543_1191_395_0.jpg)
106
+
107
+ Figure 6: Success rate of PointNav policies on the A1 robot trained and evaluated in Habitat with kinematic or dynamic control. Left: Kinematic policies outperform the dynamic trained policies ( $+ {20}\% \mathrm{{SR}}$ ), even when evaluated using dynamic control. Right: Using kinematic control, we can train our robot for ${20} \times$ more steps of experience than with dynamic control under identical compute budgets.
108
+
109
+ How large is the sim2real gap for kinematic and dynamic trained policies? We evaluate the kinematic and dynamic policies on a Spot robot in the novel LAB environment described in Section 3. Note that scans of LAB were not part of training. We evaluate 3 seeds of each policy over 5 episodes in the real-world and report the average success rate (SR) and Success weighted by Path Length (SPL) [27] in Table 1 (reported as a percentage for readability). Each control type is tested in 15 real-world episodes; one run of the Spot robot navigating LAB is shown in Figure 7. Success in the real-world is measured by computing final distance from the goal position using egomotion estimates provided by the Boston Dynamics SDK.
110
+
111
+ As reported in Table 1, all kinematic policies achieve a high success rate of ${100}\%$ and SPL of 82-83% (rows 3 and 4). On the other hand, the success rate drops to 40-67% for the dynamic policies (rows 1 and 2). We notice that the dynamic policies typically command lower velocities and often get stuck around obstacles (Figure 8). This is reflected in the higher number of actions commanded and higher collision count for both dynamic policies; on average, a dynamic policy trained in Habitat took 107.9 actions and collided 41.2 times (row 1, columns 8 and 9), whereas a kinematic policy also trained in Habitat took 26.4 actions and collided 3.1 times (row 3, columns 8 and 9). We attribute this to the impoverished experience of the dynamic policies; they did not learn robust navigation behaviors that could avoid obstacles during navigation. Additionally, they overfit to the low-level behavior, which can be unstable at high velocities in sim, but not on hardware. Figure 8 (left) shows that the kinematic policy commands higher forward velocities, while the dynamic policy commands slower velocities (right), which the robot often fails to achieve, likely due to obstacles. ${}^{2}$ Successfully executed commands appear on the diagonal.
112
+
113
+ <table><tr><td colspan="3">Train</td><td colspan="2">Simulation</td><td colspan="4">Reality</td><td colspan="2">Sim2Real Gap</td></tr><tr><td>Simulator</td><td>Control</td><td>Noise</td><td>SR</td><td>SPL</td><td>SR</td><td>SPL</td><td>#Act.</td><td>#Coll.</td><td>SR</td><td>SPL</td></tr><tr><td>Habitat</td><td>Dynamic</td><td>-</td><td>60.0</td><td>38.7</td><td>40.0</td><td>28.2</td><td>107.9</td><td>41.2</td><td>20.0</td><td>10.5</td></tr><tr><td>Gibson</td><td>Dynamic</td><td>-</td><td>20.0</td><td>14.0</td><td>67.7</td><td>46.6</td><td>76.8</td><td>12.9</td><td>47.7</td><td>32.6</td></tr><tr><td>Habitat</td><td>Kinematic</td><td>-</td><td>93.3</td><td>76.9</td><td>100.0</td><td>82.7</td><td>26.4</td><td>3.1</td><td>-6.7</td><td>-5.8</td></tr><tr><td>Gibson</td><td>Kinematic</td><td>-</td><td>100.0</td><td>90.6</td><td>100.0</td><td>83.2</td><td>33.1</td><td>4.5</td><td>0.0</td><td>7.4</td></tr><tr><td>Habitat</td><td>Kinematic</td><td>Decoupled</td><td>80.0</td><td>72.0</td><td>100.0</td><td>87.8</td><td>27.1</td><td>2.5</td><td>-20.0</td><td>-15.8</td></tr><tr><td>Habitat</td><td>Kinematic</td><td>Coupled</td><td>80.0</td><td>74.1</td><td>100.0</td><td>88.8</td><td>22.7</td><td>2.8</td><td>-20.0</td><td>-14.7</td></tr></table>
114
+
115
+ Table 1: Zero-shot sim2real transfer performance for the visual navigation policies. Success rate (SR) and path efficiency (SPL) are high for kinematic policies, while dynamic policies have lower performance due to the dynamics gap between the low-level control in training and the controller on the robot in the real-world.
116
+
117
+ ![01963f33-b2e5-7c40-b731-07ee4fc5575a_7_355_200_1085_413_0.jpg](images/01963f33-b2e5-7c40-b731-07ee4fc5575a_7_355_200_1085_413_0.jpg)
118
+
119
+ Figure 7: One run of the Spot robot navigating the real-world LAB environment using a kinematically trained policy from AI Habitat. The robot successfully navigates a hallway, moves around furniture and turns into the next hallway before stopping. In contrast, the native BD controllers without a map can only reach visible goals.
120
+
121
+ ![01963f33-b2e5-7c40-b731-07ee4fc5575a_7_909_714_559_247_0.jpg](images/01963f33-b2e5-7c40-b731-07ee4fc5575a_7_909_714_559_247_0.jpg)
122
+
123
+ Figure 8: Commanded vs. resultant velocities during real-world trajectory rollouts for dynamically and kinematically trained policies.
124
+
125
+ To improve kinematic simulation fidelity, we model actuation noise (difference between commanded and true velocity) on Spot and use it during kinematic training, similar to [16]. We collect 6,000 samples of decoupled (linear and angular velocities are actuated separately) and coupled (linear and angular velocities are actuated together) actuation noise. The parameters for noise in each dimension, and details about data collection and modeling, can be found in the appendix. During training, we sample from the Gaussian distribution for each dimension and add it to the policy's predicted velocity. We see that policies trained with noise (Table 1, rows 5 and 6) also achieve 100% success in the real-world, and increase path efficiency (SPL) by 4.6% using decoupled (row 4 vs. row 5) and by 5.6% using coupled actuation noise (row 4 vs. row 6). The number of collisions and commanded actions are also lower for these policies, compared to kinematic policies trained with no noise (22.7 actions and 2.8 collisions vs. 26.4 actions and 3.1 collisions). These improvements are due to the added robustness that training with noise provides - uncertainty during training forces the policy to take less risky actions, resulting in fewer collisions in the real-world.
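+
+ A minimal sketch of this noise model is shown below; the per-dimension means and standard deviations are placeholders standing in for the values fitted from the real-world velocity samples.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ noise_mu = np.zeros(3)                        # placeholder for the fitted per-dimension means
+ noise_sigma = np.array([0.05, 0.05, 0.03])    # placeholder for the fitted per-dimension std-devs
+
+ def apply_actuation_noise(cmd_vel):
+     """cmd_vel = [Vx, Vy, omega] predicted by the policy; returns the noisy command used to step the sim."""
+     return np.asarray(cmd_vel) + rng.normal(noise_mu, noise_sigma)
+ ```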
126
+
127
+ ## 6 Conclusion, Limitations, and Future Work
128
+
129
+ In this work, we study the role of simulation fidelity for sim2real of visual navigation policies on three simulated and one real legged robot. Contrary to expectations, we find that higher simulation fidelity does not enable learning better high-level visual navigation policies. Dynamic policies tend to overfit to low-level simulation details, resulting in poor transfer to the real-world. On the other hand, kinematic policies are able to generalize well. These results raise important questions about the need for simulation fidelity for sim2real, especially in abstracted action spaces.
130
+
131
+ One limitation of this work is that we assume access to a robust 'black box' controller on hardware. While most robots come shipped with manufacturer-provided controllers, the level of accuracy may differ between robots, and more robust noise modeling may be needed to better characterize the actuation noise. In the future, we plan to improve the modeling of real-world actuation noise by using a neural network conditioned on previous states and actions of the robot. We would also like to experiment with other tasks to verify if our findings still hold true.
132
+
133
+ ---
134
+
135
+ ${}^{2}$ Actual velocity is measured using the Boston Dynamics SDK.
136
+
137
+ ---
138
+
139
+ References
140
+
141
+ [1] E. Wijmans, A. Kadian, A. Morcos, S. Lee, I. Essa, D. Parikh, M. Savva, and D. Batra. DD-PPO: Learning near-perfect pointgoal navigators from 2.5 billion frames. In International Conference on Learning Representations (ICLR), 2020.
142
+
143
+ [2] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh, and D. Batra. Habitat: A Platform for Embodied AI Research. In International Conference on Computer Vision (ICCV), 2019.
144
+
145
+ [3] A. Szot, A. Clegg, E. Undersander, E. Wijmans, Y. Zhao, J. Turner, N. Maestre, M. Mukadam, D. Chaplot, O. Maksymets, A. Gokaslan, V. Vondrus, S. Dharur, F. Meier, W. Galuba, A. Chang, Z. Kira, V. Koltun, J. Malik, M. Savva, and D. Batra. Habitat 2.0: Training home assistants to rearrange their habitat. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
146
+
147
+ [4] B. Shen, F. Xia, C. Li, R. Martín-Martín, L. Fan, G. Wang, C. Pérez-D'Arpino, S. Buch, S. Srivastava, L. P. Tchapmi, M. E. Tchapmi, K. Vainio, J. Wong, L. Fei-Fei, and S. Savarese. igibson 1.0: a simulation environment for interactive tasks in large realistic scenes. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021.
148
+
149
+ [5] Nvidia. Isaac Sim. https://developer.nvidia.com/isaac-sim, 2020.
150
+
151
+ [6] M. Deitke, W. Han, A. Herrasti, A. Kembhavi, E. Kolve, R. Mottaghi, J. Salvador, D. Schwenk, E. VanderBilt, M. Wallingford, et al. Robothor: An open simulation-to-real embodied AI platform. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3164-3174, 2020.
152
+
153
+ [7] C. Gan, J. Schwartz, S. Alter, M. Schrimpf, J. Traer, J. De Freitas, J. Kubilius, A. Bhandwaldar, N. Haber, M. Sano, et al. ThreeDWorld: A platform for interactive multi-modal physical simulation. arXiv preprint arXiv:2007.04954, 2020.
154
+
155
+ [8] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019-3026, 2020.
156
+
157
+ [9] F. Xiang, Y. Qin, K. Mo, Y. Xia, H. Zhu, F. Liu, M. Liu, H. Jiang, Y. Yuan, H. Wang, L. Yi, A. X. Chang, L. J. Guibas, and H. Su. SAPIEN: A simulated part-based interactive environment. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
158
+
159
+ [10] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033. IEEE, 2012.
160
+
161
+ [11] C. D. Freeman, E. Frey, A. Raichuk, S. Girgin, I. Mordatch, and O. Bachem. Brax-a differentiable physics engine for large scale rigid body simulation. arXiv preprint arXiv:2106.13281, 2021.
162
+
163
+ [12] E. Coumans and Y. Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. 2016.
164
+
165
+ [13] S. K. Ramakrishnan, A. Gokaslan, E. Wijmans, O. Maksymets, A. Clegg, J. Turner, E. Un-dersander, W. Galuba, A. Westbury, A. X. Chang, et al. Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai. arXiv preprint arXiv:2109.08238, 2021.
166
+
167
+ [14] A. Chang, A. Dai, T. Funkhouser, M. Halber, M. Niessner, M. Savva, S. Song, A. Zeng, and Y. Zhang. Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158, 2017.
168
+
169
+ [15] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.
170
+
171
+ [16] J. Truong, S. Chernova, and D. Batra. Bi-directional domain adaptation for sim2real transfer of embodied navigation agents. IEEE Robotics and Automation Letters (RA-L), 6(2):2634-2641, 2021.
172
+
173
+ [17] Y. Chebotar, A. Handa, V. Makoviychuk, M. Macklin, J. Issac, N. Ratliff, and D. Fox. Closing the sim-to-real loop: Adapting simulation randomization with real world experience. In 2019 International Conference on Robotics and Automation (ICRA), pages 8973-8979. IEEE, 2019.
174
+
175
+ [18] X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE international conference on robotics and automation (ICRA), 2018.
176
+
177
+ [19] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2017.
178
+
179
+ [20] A. Kadian, J. Truong, A. Gokaslan, A. Clegg, E. Wijmans, S. Lee, M. Savva, S. Chernova, and D. Batra. Sim2real predictivity: Does evaluation in simulation predict real-world performance? IEEE Robotics and Automation Letters (RA-L), 2020.
180
+
181
+ [21] N. Yokoyama, S. Ha, and D. Batra. Success weighted by completion time: A dynamics-aware evaluation criteria for embodied navigation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
182
+
183
+ [22] J. Truong, D. Yarats, T. Li, F. Meier, S. Chernova, D. Batra, and A. Rai. Learning navigation skills for legged robots with learned robot embeddings. 2020.
184
+
185
+ [23] J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning quadrupedal locomotion over challenging terrain. Science Robotics, 5(47):eabc5986, 2020. doi: 10.1126/scirobotics.abc5986. URL https://www.science.org/doi/abs/10.1126/scirobotics.abc5986.
186
+
187
+ [24] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning robust perceptive locomotion for quadrupedal robots in the wild. Science Robotics, 7(62):eabk2822, 2022. doi:10.1126/scirobotics.abk2822. URL https://www.science.org/doi/abs/10.1126/scirobotics.abk2822.
188
+
189
+ [25] A. Kumar, Z. Fu, D. Pathak, and J. Malik. Rma: Rapid motor adaptation for legged robots. Robotics: Science and Systems (RSS), 2021.
190
+
191
+ [26] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated task and motion planning. arXiv preprint arXiv:2010.01083, 2020.
192
+
193
+ [27] P. Anderson, A. Chang, D. S. Chaplot, A. Dosovitskiy, S. Gupta, V. Koltun, J. Kosecka, J. Malik, R. Mottaghi, M. Savva, et al. On Evaluation of Embodied Navigation Agents. arXiv preprint arXiv:1807.06757, 2018.
194
+
195
+ [28] R. Ramrakhya, E. Undersander, D. Batra, and A. Das. Habitat-web: Learning embodied object-search strategies from human demonstrations at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5173-5183, 2022.
196
+
197
+ [29] T. Chen, S. Gupta, and A. Gupta. Learning exploration policies for navigation. arXiv preprint arXiv:1903.01959, 2019.
198
+
199
+ [30] D. S. Chaplot, D. Gandhi, S. Gupta, A. Gupta, and R. Salakhutdinov. Learning to explore using active neural slam. arXiv preprint arXiv:2004.05155, 2020.
200
+
201
+ [31] S. Bansal, V. Tolani, S. Gupta, J. Malik, and C. Tomlin. Combining optimal control and learning for visual navigation in novel environments. In Conference on Robot Learning, pages 420-429. PMLR, 2020.
202
+
203
+ [32] W. Hess, D. Kohler, H. Rapp, and D. Andor. Real-time loop closure in 2d lidar slam. In ICRA, 2016.
204
+
205
+ [33] O. Nachum, M. Ahn, H. Ponte, S. Gu, and V. Kumar. Multi-agent manipulation via locomotion using hierarchical sim2real. arXiv preprint arXiv:1908.05224, 2019.
206
+
207
+ [34] W. Yu, J. Tan, Y. Bai, E. Coumans, and S. Ha. Learning fast adaptation with meta strategy optimization. arXiv preprint arXiv:1909.12995, 2019.
208
+
209
+ [35] T. Li, N. Lambert, R. Calandra, F. Meier, and A. Rai. Learning generalizable locomotion skills with hierarchical reinforcement learning. arXiv preprint arXiv:1909.12324, 2019.
210
+
211
+ [36] T. Li, R. Calandra, D. Pathak, Y. Tian, F. Meier, and A. Rai. Planning in learned latent action spaces for generalizable legged locomotion. IEEE Robotics and Automation Letters, 6(2): 2682-2689, 2021.
212
+
213
+ [37] J. Tan, T. Zhang, E. Coumans, A. Iscen, Y. Bai, D. Hafner, S. Bohez, and V. Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. arXiv preprint arXiv:1804.10332, 2018.
214
+
215
+ [38] X. B. Peng, E. Coumans, T. Zhang, T.-W. Lee, J. Tan, and S. Levine. Learning agile robotic locomotion skills by imitating animals. Robotics: Science and Systems (RSS), 2020.
216
+
217
+ [39] A. Rai, R. Antonova, S. Song, W. Martin, H. Geyer, and C. Atkeson. Bayesian optimization using domain knowledge on the atrias biped. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 1771-1778. IEEE, 2018.
218
+
219
+ [40] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massively parallel deep reinforcement learning. In Conference on Robot Learning, pages 91-100. PMLR, 2022.
220
+
221
+ [41] Z. Fu, A. Kumar, A. Agarwal, H. Qi, J. Malik, and D. Pathak. Coupling vision and proprioception for navigation of legged robots. CVPR, 2022.
222
+
223
+ [42] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated task and motion planning. Annual review of control, robotics, and autonomous systems, 4:265-293, 2021.
224
+
225
+ [43] L. P. Kaelbling and T. Lozano-Pérez. Hierarchical task and motion planning in the now. In 2011 IEEE International Conference on Robotics and Automation, pages 1470-1477. IEEE, 2011.
226
+
227
+ [44] Y. Lin, A. S. Wang, E. Undersander, and A. Rai. Efficient and interpretable robot manipulation with graph neural networks. IEEE Robotics and Automation Letters, 2022.
228
+
229
+ [45] S. Kuindersma, R. Deits, M. Fallon, A. Valenzuela, H. Dai, F. Permenter, T. Koolen, P. Marion, and R. Tedrake. Optimization-based locomotion planning, estimation, and control design for the atlas humanoid robot. Autonomous robots, 40(3):429-455, 2016.
230
+
231
+ [46] T. Li, H. Geyer, C. G. Atkeson, and A. Rai. Using deep reinforcement learning to learn high-level policies on the atrias biped. In ICRA, pages 263-269. IEEE, 2019.
232
+
233
+ [47] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin, D. Duong, V. Sindhwani, and J. Lee. Transporter networks: Rearranging the visual world for robotic manipulation. Conference on Robot Learning (CoRL), 2020.
234
+
235
+ [48] W. Yuan, C. Paxton, K. Desingh, and D. Fox. SORNet: Spatial object-centric representations for sequential manipulation. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=mOLu2rODIJF.
236
+
237
+ [49] Unitree robotics. https://www.unitree.com/.
238
+
239
+ [50] Boston dynamics. https://www.bostondynamics.com/spot.
240
+
241
+ [51] F. Xia, A. R. Zamir, Z. He, A. Sax, J. Malik, and S. Savarese. Gibson env: Real-world perception for embodied agents. In CVPR, 2018.
242
+
243
+ [52] Y. Yang, T. Zhang, E. Coumans, J. Tan, and B. Boots. Fast and efficient locomotion via learned gait transitions. In Conference on Robot Learning, pages 773-783. PMLR, 2022.
244
+
245
+ [53] T. Li, J. Won, S. Ha, and A. Rai. Model-based motion imitation for agile, diverse and generalizable quadrupedal locomotion. arXiv preprint arXiv:2109.13362, 2021.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/BxHcg_Zlpxj/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,154 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § RETHINKING SIM2REAL: LOWER FIDELITY SIMULATION LEADS TO HIGHER SIM2REAL TRANSFER IN NAVIGATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: If we want to train robots in simulation before deploying them in reality, it seems natural and almost self-evident to presume that reducing the sim2real gap involves creating simulators of increasing fidelity (since reality is what it is). We challenge this assumption and present a contrary hypothesis - sim2real transfer of robots may be improved with lower (not higher) fidelity simulation. We conduct a systematic large-scale evaluation of this hypothesis on the problem of visual navigation - in the real world, and on 2 different simulators (Habitat and iGibson) using 3 different robots (A1, AlienGo, Spot). Our results show that, contrary to expectation, adding fidelity does not help with learning; performance is poor due to slow simulation speed (preventing large-scale learning) and overfitting to inaccuracies in simulation physics. Instead, building simple models of the robot motion using real-world data can improve learning and generalization.
12
+
13
+ Keywords: Sim2Real, Deep Reinforcement Learning, Visual-Based Navigation
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ The sim2real paradigm consists of training robots in simulation (potentially for billions of simulation steps corresponding to decades of experience [1]) before deploying them in reality. The last few years have seen significant investments - the development of new simulators [2-12], curation and annotation of 3D scans and assets [13-15], and development of techniques for overcoming the sim2real gap [16-19] - resulting in a number of successful demonstrations of sim2real transfer [20- 25]. However, no simulator is a perfect replica of reality and the main challenge in this paradigm is overcoming the sim2real gap, defined as the drop in a robot's performance in the real-world (compared to simulation). It seems natural and almost self-evident to presume that reducing this sim2real gap involves creating simulators of increasing physics fidelity, and this sometimes forms the default operating hypothesis of the field.
18
+
19
+ We challenge this convention and present a counter-intuitive idea - sim2real transfer of robots may be improved not by increasing but by decreasing simulation fidelity. Specifically, we propose that instead of training robots entirely in simulation, we use classical ideas from hierarchical robot control [26] to decompose the policy into a 'high-level policy' (that is trained solely in simulation) and a 'low-level controller' (that is designed entirely on hardware and may even be a black-box controller shipped by a manufacturer). This decomposition means that the simulator does not need to model low-level dynamics, which can save both simulation time (since there is no need to simulate expensive low-level controllers), and developer time spent building and designing these controllers.
20
+
21
+ We conduct a systematic large-scale evaluation of our hypothesis on the task of PointGoal (visual) Navigation [27] in unknown environments - using 2 simulators (Habitat and Gibson) and 3 different robots (A1, AlienGo, Spot). We train policies using two physics fidelities - kinematic and dynamic. Kinematic simulation uses abstracted physics and 'teleports' the robot to the next state using Euler integration; kinematic policies command robot center-of-mass (CoM) linear and angular velocities.
22
+
23
+ <graphics>
24
+
25
+ Figure 1: Left: We train visual navigation policies at two levels of fidelity - kinematic and dynamic. In kinematic control (top), the robot is 'teleported' to the next state using Euler integration. In dynamic control (bottom), the robot's velocity commands are converted to leg joint-torques and rigid-body physics is simulated at 240Hz. Right: We evaluate the kinematic and dynamic trained policies in simulation (top) and the real-world (bottom) across 5 identical episodes. The kinematic policy achieves a ${100}\%$ success rate in all 5 episodes, and the robot takes similar paths in both simulation and the real-world. On the other hand, the dynamic policy achieves a 20-60% success rate, and the trajectories taken in simulation and the real-world do not correlate, pointing towards a larger sim2real gap.
26
+
27
+ Dynamic simulation consists of rigid-body mechanics and simulates contact dynamics (via Bullet [12]); dynamic policies command CoM linear and angular velocities, which are converted to robot joint-torques by a low-level controller operating at ${240}\mathrm{\;{Hz}}$ . We find that across all robots, a kinematically trained policy outperforms dynamic policies, even when evaluated using dynamic simulation and control. Additionally, we show that the trained kinematic policy can be transferred to a real Spot robot, which ships with manufacturer-provided 'black-box' low-level controllers that cannot be accurately simulated. In contrast, dynamic policies fail to achieve efficient navigation behavior on Spot, due to the sim2real gap and less simulation experience.
28
+
29
+ The reasons for these improvements are perhaps unsurprising in hindsight - learning-based methods overfit to simulators, and present-day physics simulators have approximations and imperfections that do not transfer to the real-world. A second equally significant mechanism is also in play - lower fidelity simulation is typically faster, enabling policies to be trained with more experience under a fixed wall-clock budget; although the kinematic and dynamic policies were trained with the same compute for the same amount of time, the kinematic policy was able to learn from ${20} \times$ as much data as the dynamic policy. While our results are presented on legged locomotion and visual navigation, the underlying principle - of architecting hierarchical policies and only training the high-level policy in an abstracted simulation - is broadly applicable. We hope that our work leads to a rethink in how the research community pursues sim2real and in how we develop the simulators of tomorrow. Specifically, our findings suggest that instead of investing in higher-fidelity physics, the field should prioritize simulation speed for tasks that can be represented with abstract action spaces.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ Visual Navigation. Recent works have shown that large-scale indoor environments and simulators like Habitat $\left\lbrack {2,3}\right\rbrack$ and iGibson $\left\lbrack 4\right\rbrack$ can enable end-to-end learning of navigation policies from large amounts of agent- or expert-generated data [28-30] on simple, wheeled systems. This is in contrast to the typical mapping and planning paradigm used in classical robotics, which can suffer when the quality of maps is low [31] or requires expensive equipment like LiDAR [32]. In this work, we show that such end-to-end learning is also possible for complex, legged robots.
34
+
35
+ Sim2real for Legged Robots. Sim2real quadrupedal locomotion has been widely studied in the past several decades [22, 33-36], with most learning low-level skills in simulation and transferring them to hardware [37], or adapting them online to reduce the sim2real gap [38, 39]. However, these policies are typically blind, and use only proprioceptive sensors on the robot to determine actions $\left\lbrack {{23},{25},{40}}\right\rbrack$ . In contrast, an autonomous robot needs to respond to its environment, and take visual input into account. Some works have proposed learning visual policies in simulation and applying them to the real-world $\left\lbrack {{22},{24},{41}}\right\rbrack$ . These works use learned or hand-designed physically simulated low-level controllers; we show that physics simulations can be detrimental to learning high-performing sim2real policies, even for complex legged robots.
36
+
37
+ Abstracted Task-space Learning. Abstracted (hierarchical, high-level) action spaces are common in robotics literature. Examples include task and motion planning for manipulation [42-44], legged locomotion [45, 46], navigation [20], etc. Several works reason over symbolic actions like pick and place, or hierarchical policies with discrete/continuous attributes [33-35, 47, 48], or even abstracted dynamics models [36]. While the ideas of abstracted/hierarchical policies are fairly common, typically both the high- and low-level policies are learned in simulation and transferred to reality $\left\lbrack {{33},{34},{36}}\right\rbrack$ , often augmented with techniques like domain randomization [37] and real-world adaptation [18]. Instead, we use an abstracted simulator, which does not model low-level physics, and learn high-level policies that are transferred to the real-world in a zero-shot manner.
38
+
39
+ § 3 EXPERIMENTAL SETUP
40
+
41
+ Task: PointGoal Navigation. In the task of PointGoal Navigation [27], a robot is initialized in an unknown environment and is tasked with navigating to a goal coordinate without access to a pre-built map of the environment. The goal is specified relative to the robot's starting location for the episode (i.e.,"go to $\Delta \mathrm{x},\Delta \mathrm{y}$ "). The robot has access to an egocentric depth sensor and an egomotion sensor (sometimes referred to as GPS+Compass in this literature) from which the robot derives the goal location relative to its current pose. An episode is considered successful when the robot reaches the goal position within a success radius (typically half of the robot's body length). The robot operates within constraints of a maximum number of steps per episode (150 for Spot) and velocity limits ( $\pm {0.5}$ $\mathrm{m}/\mathrm{s}$ for linear and $\pm {0.3}\mathrm{{rad}}/\mathrm{s}$ for angular velocities on Spot). We linearly scale the linear and angular velocity limits for A1 and Aliengo to be proportional to the length of each robot's leg, and inversely scale the maximum number of steps allowed. In effect, smaller robots have a smaller maximum allowed velocity to improve stability during execution, but are allowed more steps to reach the goal. The exact parameters used for each robot are described in the appendix. For evaluation, we report the success rate (SR) and Success weighted by Path Length (SPL) [27], which measures the efficiency of the trajectory taken with respect to the ground-truth shortest path.
42
+
43
+ Robot Platforms. We study visual navigation for 3 quadrupedal robots - A1 and Aliengo from Unitree [49], and Spot from Boston Dynamics (BD) [50] - in simulation. In the real world, we show sim2real transfer of the learned navigation policies to Spot. To have a consistent camera setup across all the robots, we attach an Intel RealSense D435 camera to Spot in the real world, and use this camera for visual inputs to the policy. In our hardware experiments, we want to measure how often our sim2real policies lead to collisions without jeopardizing safety. We achieve this balance as follows: the BD collision-avoidance capability is kept turned on, set to trigger at a tight threshold of 0.10 m. Next, we track the number of times the robot comes within 0.20 m of any obstacle (as measured by any of the 5 onboard depth cameras). This gap (between 0.20 m and 0.10 m) allows us to record possible collisions while preventing actual ones. While the BD API allows for high-level navigation without access to a map, it cannot autonomously navigate around obstacles without
44
+
45
+ a map. In our work, we consider complex, long-range navigation paths (up to 30 m) in cluttered environments with many obstacles; the goals are unreachable with just the BD navigation API.
46
+
47
+ <graphics>
48
+
49
+ Figure 2: Robots used for training and evaluation.
50
+
51
+ Simulation Environments. We use two simulation platforms - Habitat [2, 3] and iGibson [4] - for training and evaluation. Both simulators support rendering of photorealistic environments; Habitat uses a low-level (C++) integration with the Bullet physics engine [12], while iGibson leverages PyBullet, the Python-based integration of Bullet. Thus, while the underlying physics engines are the same, Habitat runs ~1200% faster than iGibson [3]. This allows us to train policies faster with Habitat than with iGibson even when using identical policies and compute.
52
+
53
+ Dataset. For training and evaluation, we use a combination of the Habitat-Matterport (HM3D) [13] and Gibson [51] 3D datasets. Combined, the two datasets consist of over 1000 high-resolution 3D scans of real-world indoor environments and contain realistic clutter. We generate training and evaluation episodes compatible with our robots for the HM3D and Gibson scenes following the procedure described in [2]. Specifically, we restrict the geodesic distance between the start and goal positions to be between 1 and 30 m, and increase navigation complexity by rejecting paths that are near-straight lines with few obstacles. As described in [2], both of these heuristics result in complex but navigable paths. Additionally, we check for collisions along the sampled paths using the URDF of the largest robot (Spot) to ensure that all paths are navigable.
54
+
55
+ Real-World Test Environment. The real-world evaluation environment, LAB, is a 325 m² lobby in a commercial office building. The lobby contains furniture such as couches, cushions, bookshelves, and tables. We specify a set of 5 waypoints as the start and end locations for the navigation episodes in LAB, with an average episode length of 10 m. We match the furniture layout to the configuration captured in the 3D scan (Figure 3) so that we can run identical evaluation experiments in both simulation and the real world. The scan of LAB is not part of training.
56
+
57
+ <graphics>
58
+
59
+ Figure 3: The real-world testing environment is a part of a large commercial building and contains clutter from furniture such as tables, bookshelves, and couches.
60
+
61
+ § 4 KINEMATIC AND DYNAMIC CONTROL FOR VISUAL NAVIGATION
62
+
63
+ As illustrated in Figure 4, our proposed approach is hierarchical, with (1) a high-level visual navigation policy that commands desired center of mass (CoM) motion at 1 Hz, and (2) a low-level controller that follows this desired motion. We consider controllers at two levels of abstraction - 'kinematic' and 'dynamic'. The kinematic controller simply integrates the desired velocity and outputs a CoM position at 1 Hz; kinematic simulation then teleports the robot to the desired state. The dynamic controller commands joint torques at 240 Hz; dynamic simulation models rigid-body and contact dynamics via Bullet (with a physics step size of 1/240 sec). We provide details of all three of these pieces (high-level policy, kinematic and dynamic controllers) next.
64
+
65
+ High-level Visual Navigation Policies. The high-level policy takes as input an egocentric depth image and the goal location relative to the robot's current pose. The output of the policy is a 3-dimensional vector representing the desired CoM forward, lateral, and angular velocities $(V_x, V_y, \omega)$. The network architecture consists of a ResNet-18 visual encoder and a 2-layer LSTM policy; the recurrence allows the policy to capture temporal dependencies through its hidden state. The final layer parameterizes a Gaussian action distribution from which the action is sampled. The policy is trained using DD-PPO [1], a distributed reinforcement learning method, in both the Habitat and Gibson simulators. Our reward function is derived from [22], with an added penalty for backward velocities, which can lead to collisions and hurt performance.
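For illustration, a minimal PyTorch sketch of a policy with this structure is shown below; the hidden sizes and the 2-D goal encoding (distance and heading) are illustrative assumptions, not the exact training code:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class NavPolicy(nn.Module):
    """Sketch: ResNet-18 depth encoder + 2-layer LSTM + Gaussian head over (v_x, v_y, omega)."""
    def __init__(self, hidden_size=512):
        super().__init__()
        self.encoder = resnet18()
        # single-channel depth input instead of RGB
        self.encoder.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.encoder.fc = nn.Linear(self.encoder.fc.in_features, hidden_size)
        self.goal_fc = nn.Linear(2, 32)          # relative goal as (distance, heading)
        self.lstm = nn.LSTM(hidden_size + 32, hidden_size, num_layers=2, batch_first=True)
        self.mu = nn.Linear(hidden_size, 3)      # mean of (v_x, v_y, omega)
        self.log_std = nn.Parameter(torch.zeros(3))

    def forward(self, depth, goal, hidden=None):
        # depth: (B, 1, H, W); goal: (B, 2)
        feat = torch.cat([self.encoder(depth), self.goal_fc(goal)], dim=-1)
        out, hidden = self.lstm(feat.unsqueeze(1), hidden)
        mu = self.mu(out.squeeze(1))
        dist = torch.distributions.Normal(mu, self.log_std.exp())
        return dist, hidden
```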
66
+
67
+ <graphics>
68
+
69
+ Figure 4: Our architecture for PointGoal Navigation on a legged robot. A high-level visual navigation policy predicts CoM linear and angular velocities. The velocities are passed to either a kinematic or a dynamic low-level controller to step the robot in simulation. In the real world, we directly send the velocity commands from the high-level policy to the robot, which uses the Boston Dynamics low-level controller for movement.
70
+
71
+ Kinematic Control and Simulation. In kinematic control, the final state of the robot is calculated by integrating the desired CoM velocity commanded by the high-level navigation policy at 1 Hz. The robot is directly moved to the desired pose, without running a physics simulation. In both Habitat and iGibson, the robot is kept in place if the new desired state would result in a collision.
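A minimal sketch of this kinematic step is shown below; `check_collision` and `set_robot_pose` are placeholder callables standing in for simulator-specific calls:

```python
import numpy as np

DT = 1.0  # the high-level policy commands CoM velocities at 1 Hz

def kinematic_step(pose, v_x, v_y, omega, check_collision, set_robot_pose):
    """Integrate the commanded CoM velocity for one step and teleport the robot
    there, unless the new pose would collide (then keep the robot in place).
    `pose` is (x, y, yaw) in the world frame."""
    x, y, yaw = pose
    # rotate the body-frame velocity command into the world frame
    new_x = x + DT * (v_x * np.cos(yaw) - v_y * np.sin(yaw))
    new_y = y + DT * (v_x * np.sin(yaw) + v_y * np.cos(yaw))
    new_yaw = yaw + DT * omega
    new_pose = (new_x, new_y, new_yaw)
    if check_collision(new_pose):
        return pose           # collision: stay in place
    set_robot_pose(new_pose)  # "teleport" without running physics
    return new_pose
```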
72
+
73
+ The objective of kinematic control is to abstract away the low-level physics interactions between the robot and its environment. This has two advantages: (1) it avoids the need to accurately model low-level controllers, especially for closed-source robots like Spot; (2) it enables faster simulation by avoiding high-frequency physics integration, which is conducive to model-free RL that requires large amounts of experience. On the other hand, teleporting the robot to the desired state may discard relevant dynamics, such as imperfect tracking by low-level controllers. In Section 5, we propose how to incorporate such low-level characteristics into a kinematic simulation using real-world data.
74
+
75
+ Dynamic Control in Simulation and Hardware. We experiment with two different low-level dynamic controllers for quadruped robots. The first is an expert-designed Raibert-style controller from [22], which consists of a footstep generator and an inverse kinematics solver that converts CoM velocities into desired joint angles. The joint angles are converted to joint torques using a linear feedback controller and applied in simulation. This controller was shown to achieve sim2real transfer for A1 [22]; however, on the other robots in our experiments, it tracks high-level commands relatively poorly. Thus, we also experiment with a model-predictive control (MPC) dynamic controller from [38], which commands joint torques directly. This controller has been applied to the real-world A1 robot [52, 53] and tracks desired velocities better on our test robots than the Raibert controller from [22]. However, MPC is prohibitively slow and cannot be used for training RL policies. Thus, we use the Raibert controller for training dynamic policies, but evaluate using MPC.${}^{1}$ This difference in training and evaluation dynamics controllers serves multiple purposes: (1) evaluating with MPC improves the performance of most policies, including dynamic policies, due to its better tracking of high-level commands; (2) the difference between the two dynamic controllers in simulation is also a proxy for the difference between our low-level controllers and the closed-source controllers on Spot. If a dynamic policy cannot transfer from Raibert to MPC, it has a low chance of transferring to Spot, which uses black-box BD controllers, or to other robots in the real world.
76
+
77
+ Both dynamic controllers model the low-level physics interactions between the robot and the environment. This makes them considerably slower than the kinematic controller, making training RL policies challenging. Moreover, for Spot, the low-level controller implementation is not openly available, making it hard to reproduce. Our experiments in Section 5 show that the added fidelity of dynamic controllers does not benefit policy learning or sim2real transfer.
78
+
79
+ ${}^{1}$ Evaluation using the Raibert controller [22] can be found in the appendix.
80
+
81
+
82
+
83
+ § 5 RESULTS AND ANALYSIS
84
+
85
+ In this section, we first study how visual navigation policies generalize across simulators (trained in one simulator, tested in another) and across controllers (trained with one controller, tested with another). By comparing kinematic and dynamic policies trained for the same wall-clock time, these experiments also show the importance of fast simulation for learning high-level policies. Next, we examine the performance of the different policies at zero-shot sim2real transfer on the Spot robot.
86
+
87
+ How large is the sim2sim gap? High for dynamic, and low for kinematic policies. We exhaustively study the combinatorial space of experiments: policies trained under 2 training conditions (kinematic and dynamic simulation) × 2 evaluation conditions (kinematic and dynamic simulation) × 2 simulators (Habitat and Gibson) × 3 robots (A1, Aliengo, Spot). For each condition, we train and report results with 3 random seeds. Each policy is trained using 8 GPUs for 3 days, resulting in a cumulative training budget of 6,912 GPU-hours (288 GPU-days). The average success rates are presented in Figure 5. Rows represent the evaluation conditions as (simulator, fidelity) tuples, while columns represent the training conditions. We evaluate all policies across 1,100 episodes from 110 unique scenes in the HM3D + Gibson validation split.
88
+
89
+ <graphics>
90
+
91
+ Figure 5: Average success rates for sim2sim and kinematic2dynamic transfer for A1, Aliengo and Spot. We see that the kinematic trained policies perform the best overall (red quadrants), and also often outperform the dynamic trained policies, even when evaluated using dynamic control (green quadrants vs. orange quadrants).
92
+
93
+ We make two key observations here:
94
+
95
+ 1. Kinematic-trained policies perform best overall, for all robots. In all cases, kinematic policies outperform the dynamic-trained policies, even when evaluated using dynamic control, e.g., 62.1% SR for A1 in (Gibson, Kinematic) vs. 24.2% SR in (Gibson, Dynamic) (Fig. 5, left). This is a surprising result because the kinematic policies are evaluated in an out-of-distribution setting that was never seen or accounted for during training, whereas the dynamic policies are evaluated in the domain they were trained in and hence require no control-related generalization.
96
+
97
+ 2. Dynamic policies are not robust to different dynamic simulations. The dynamic policies from the two simulators suffer significant performance drops when evaluated in the other dynamic simulation. This points to the dynamic policies overfitting to the simulator dynamics during training and failing to generalize to a new setting; see, e.g., column 3, rows 3 and 4: 49.8% SR for A1 in (Habitat, Dynamic) vs. 22.3% SR in (Gibson, Dynamic). The (Gibson, Dynamic) policy performs poorly in both Gibson and Habitat, with slightly worse performance in Habitat. This sensitivity to simulation makes training dynamic policies difficult, especially when the controller for the real-world robot is unknown. Even if the real-world controller is known, simulation physics and real-world physics differ, and sim2real transfer of the learned policy can suffer (as evidenced by low sim2sim transfer). Kinematic policies, in contrast, trained with no physics, generalize to the different dynamic controllers. Together, these results show that kinematic-trained policies not only learn the task well, but do so without overfitting to simulation physics, which raises their chances of successful sim2real transfer.
98
+
99
+ Why do kinematic-trained policies outperform dynamic ones? Scale. We plot the evaluation performance of both policies in Habitat kinematic and dynamic simulation in Figure 6. While both policies are trained for the same amount of wall-clock time (3 days, 8 GPUs), we see that kinematic training is much faster than training dynamically (right, Fig. 6); with kinematic training, the robot is able to learn from approximately 20× more steps of experience (500M steps vs. 25M steps). This increased experience allows the kinematic policies to learn intelligent high-level reasoning.
100
+
101
+ <graphics>
102
+
103
+ Figure 6: Success rate of PointNav policies on the A1 robot trained and evaluated in Habitat with kinematic or dynamic control. Left: Kinematic policies outperform the dynamic-trained policies (+20% SR), even when evaluated using dynamic control. Right: Using kinematic control, we can train our robot for 20× more steps of experience than with dynamic control under identical compute budgets.
104
+
105
+ How large is the sim2real gap for kinematic- and dynamic-trained policies? We evaluate the kinematic and dynamic policies on a Spot robot in the novel LAB environment described in Section 3. Note that scans of LAB were not part of training. We evaluate 3 seeds of each policy over 5 episodes in the real world and report the average success rate (SR) and Success weighted by Path Length (SPL) [27] in Table 1 (reported as percentages for readability). Each control type is tested in 15 real-world episodes; one run of the Spot robot navigating LAB is shown in Figure 7. Success in the real world is measured by computing the final distance from the goal position using egomotion estimates provided by the Boston Dynamics SDK.
106
+
107
+ As reported in Table 1, all kinematic policies achieve a high success rate of 100% and SPL of 82-83% (rows 3 and 4). On the other hand, the success rate drops to 40-67% for the dynamic policies (rows 1 and 2). We notice that the dynamic policies typically commanded lower velocities, and often get stuck around obstacles (Figure 8). This is reflected in the higher number of commanded actions and the higher collision count for both dynamic policies; on average, a dynamic policy trained in Habitat took 107.9 actions and collided 41.2 times (row 1, columns 8 and 9), whereas a kinematic policy also trained in Habitat took 26.4 actions and collided 3.1 times (row 3, columns 8 and 9). We attribute this to the impoverished experience of the dynamic policies; they did not learn robust navigation behaviors that avoid obstacles. Additionally, they overfit to the low-level behavior, which can be unstable at high velocities in simulation, but not on hardware. Figure 8 (left) shows that the kinematic policy commands higher forward velocities, while the dynamic policy commands slower velocities (right), which are often not achieved by the robot, likely due to an obstacle.${}^{2}$ Successfully executed commands appear on the diagonal.
108
+
109
+ | Train: Simulator | Train: Control | Train: Noise | Sim SR | Sim SPL | Real SR | Real SPL | Real #Act. | Real #Coll. | Sim2Real Gap SR | Sim2Real Gap SPL |
+ |---|---|---|---|---|---|---|---|---|---|---|
+ | Habitat | Dynamic | - | 60.0 | 38.7 | 40.0 | 28.2 | 107.9 | 41.2 | 20.0 | 10.5 |
+ | Gibson | Dynamic | - | 20.0 | 14.0 | 67.7 | 46.6 | 76.8 | 12.9 | 47.7 | 32.6 |
+ | Habitat | Kinematic | - | 93.3 | 76.9 | 100.0 | 82.7 | 26.4 | 3.1 | -6.7 | -5.8 |
+ | Gibson | Kinematic | - | 100.0 | 90.6 | 100.0 | 83.2 | 33.1 | 4.5 | 0.0 | 7.4 |
+ | Habitat | Kinematic | Decoupled | 80.0 | 72.0 | 100.0 | 87.8 | 27.1 | 2.5 | -20.0 | -15.8 |
+ | Habitat | Kinematic | Coupled | 80.0 | 74.1 | 100.0 | 88.8 | 22.7 | 2.8 | -20.0 | -14.7 |
+
136
+ Table 1: Zero-shot sim2real transfer performance for the visual navigation policies. Success rate (SR) and path efficiency (SPL) are high for kinematic policies, while dynamic policies perform worse due to the dynamics gap between the low-level control used in training and the controller on the robot in the real world.
137
+
138
+ <graphics>
139
+
140
+ Figure 7: One run of the Spot robot navigating the real-world LAB environment using a kinematically trained policy from AI Habitat. The robot successfully navigates a hallway, moves around furniture and turns into the next hallway before stopping. In contrast, the native BD controllers without a map can only reach visible goals.
141
+
142
+ <graphics>
143
+
144
+ Figure 8: Commanded vs. resultant velocities during real-world trajectory rollouts for dynamically and kinematically trained policies.
145
+
146
+ To improve kinematic simulation fidelity, we model actuation noise (the difference between commanded and realized velocity) on Spot and use it during kinematic training, similar to [16]. We collect 6,000 samples of decoupled (linear and angular velocities actuated separately) and coupled (linear and angular velocities actuated together) actuation noise. The noise parameters for each dimension, and details about data collection and modeling, can be found in the appendix. During training, we sample from the Gaussian distribution for each dimension and add the sample to the policy's predicted velocity, as sketched below. Policies trained with noise (Table 1, rows 5 and 6) also achieve 100% success in the real world, and improve path efficiency (SPL) by 4.6% with decoupled (row 4 vs. row 5) and 5.6% with coupled actuation noise (row 4 vs. row 6). The number of collisions and commanded actions is also lower for these policies compared to kinematic policies trained without noise (22.7 actions and 2.8 collisions vs. 26.4 actions and 3.1 collisions). These improvements come from the added robustness that training with noise provides: uncertainty during training forces the policy to take less risky actions, resulting in fewer collisions in the real world.
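A minimal sketch of injecting such noise during kinematic training; the Gaussian parameters below are placeholders, not the values fitted from the Spot data (those are in the appendix):

```python
import numpy as np

# Illustrative per-dimension Gaussian actuation noise over (v_x, v_y, omega).
NOISE_MU = np.array([0.0, 0.0, 0.0])
NOISE_SIGMA = np.array([0.05, 0.05, 0.05])   # m/s, m/s, rad/s (assumed values)

def add_actuation_noise(cmd_vel, rng=None):
    """Perturb the policy's commanded velocity before the kinematic step."""
    if rng is None:
        rng = np.random.default_rng()
    return cmd_vel + rng.normal(NOISE_MU, NOISE_SIGMA)
```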
147
+
148
+ § 6 CONCLUSION, LIMITATIONS, AND FUTURE WORK
149
+
150
+ In this work, we study the role of simulation fidelity for sim2real of visual navigation policies on three simulated and one real legged robot. Contrary to expectations, we find that higher simulation fidelity does not enable learning better high-level visual navigation policies. Dynamic policies tend to overfit to low-level simulation details, resulting in poor transfer to the real-world. On the other hand, kinematic policies are able to generalize well. These results raise important questions about the need for simulation fidelity for sim2real, especially in abstracted action spaces.
151
+
152
+ One limitation of this work is that we assume access to a robust 'black box' controller on hardware. While most robots come shipped with manufacturer-provided controllers, the level of accuracy may differ between robots, and more robust noise modeling may be needed to better characterize the actuation noise. In the future, we plan to improve the modeling of real-world actuation noise by using a neural network conditioned on previous states and actions of the robot. We would also like to experiment with other tasks to verify if our findings still hold true.
153
+
154
+ ${}^{2}$ Actual velocity is measured using the Boston Dynamics SDK.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/Bxr45keYrf/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,235 @@
1
+ # Evo-NeRF: Evolving NeRF for Sequential Robot Grasping
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Sequential robot grasping of transparent objects is important in many industrial and household scenarios. We leverage recent speedups in NeRF training and extend them to progressively train on images as they are captured. We propose terminating training early once sufficient task confidence is achieved, and reusing the NeRF weights from grasp to grasp to rapidly adapt to object removal. We propose Evo-NeRF with additional geometry regularizations that improve performance in rapid-capture settings. Because NeRF can produce unreliable geometry, general-purpose grasp planners such as Dex-Net struggle. To mitigate this distribution shift, we propose Rad-Net, a grasping network adapted to NeRF's characteristics, and train it on NeRF models trained on synthetic photo-realistic scenes. In experiments, a physical YuMi robot using Evo-NeRF and Rad-Net achieves an 89% grasp success rate over 27 trials on single objects, with early capture termination providing a 41% speed improvement with no loss in reliability. In the sequential grasping task on 6 scenes, Evo-NeRF with weight reuse clears 72% of the objects, retaining performance comparable to reconstructing the NeRF from scratch (76%) while taking 61% of the capture time.
12
+
13
+ ## 1 Introduction
14
+
15
+ Sequentially grasping transparent objects is a highly useful but difficult task for robots due to the complexity of sensing the objects to grasp. It has applications in industry, pharmaceuticals, and households; but since camera-based sensors see through transparent objects from most angles, assumptions underlying traditional disparity- and structure-from-motion-based methods break. ClearGrasp [1] addresses this problem by training a CNN to infer local surface normals on transparent objects from RGBD images based on synthetic examples from Blender. It shows impressive results on 3-5 transparent objects separated by 2 cm; however, the system has difficulty with thin walls of transparent containers and background distractors. To this end, Dex-NeRF [2] introduced a method that uses Neural Radiance Fields (NeRF) [3] to grasp transparent objects, but at the cost of hours of computation per grasp. In this paper, we propose Evo-NeRF, a method for progressively updating and rapidly training NeRF models for grasping, and Rad-Net, a neural network for robust grasping on NeRF models.
16
+
17
+ NeRFs are a powerful 3D representation that can reconstruct traditionally challenging-to-model scenes, including those with transparent objects. NeRFs were originally designed for novel view synthesis, but their volumetric representation enables the extraction of scene geometry. Geometry extracted from NeRFs has been explored in robotic applications such as dynamics modeling [4, 5], localization [6], motion planning [7], and computing grasps on transparent objects [2, 8].
18
+
19
+ In the last year, dramatic advancements in NeRF training speed have opened the door for real-world usage [9, 10, 11]. In this paper, we apply NeRF in a purely online setting to practical robotics applications that require sequentially grasping transparent objects in clutter, for example in dishwasher unloading, table clearing, and other household tasks. To make NeRF practical for robotic grasping, we build on Instant-NGP [11], a fast variant of NeRF. Rather than training on a fixed set of images, we incrementally optimize over a stream of images as they are captured during a robot motion. Because NeRF converges at different speeds on scenes of different difficulty, we also propose a method to terminate image capture once sufficient task confidence is achieved. We further adapt NeRF to sequential grasping by reusing NeRF weights from grasp to grasp and demonstrate rapid adaptation to object removal.
20
+
21
+ ![01963f84-6b4f-7dfd-ab50-0b047e3ccad7_1_305_203_1188_474_0.jpg](images/01963f84-6b4f-7dfd-ab50-0b047e3ccad7_1_305_203_1188_474_0.jpg)
22
+
23
+ Figure 1: Sequential object removal. (a) The YuMi moves a camera on its left hand across the full hemisphere trajectory (red arrow) to capture a scene of 5 glass objects (b) The robot immediately plans and executes a grasp from the NeRF (c) Short trajectories are used to update the NeRF between grasps (d) Evo-NeRF first reconstructs the whole scene (1) with the camera trajectory shown in (a), then progressively updates the scene with small camera captures shown in (c) as objects are removed.
24
+
25
+ Since we propose that the robot captures images and trains a NeRF as it moves, motion blur, kinematic limitations, and speed considerations reduce the quality of the recovered geometry and introduce prominent spurious geometry known as floaters. We propose adding geometry regularization to the training objective, which improves the recovered geometry, but out-of-the-box grasp planners still struggle to find quality grasps due to the remaining artifacts. Dex-NeRF [2], on the other hand, could use an out-of-the-box grasp sampler because it used diverse, high-quality, calibrated still images captured in an offline process.
26
+
27
+ To mitigate the effect of lower-quality NeRF reconstructions, we propose a novel Sim2Real training pipeline for a NeRF-based grasping network that transfers well to real-world NeRF reconstructions. This pipeline takes advantage of the training speed of Instant-NGP; without it, the pipeline would be computationally infeasible. Experiments on a real robot with an actuatable camera suggest that Evo-NeRF reconstructs graspable scene geometry rapidly and reliably and, when combined with Rad-Net, achieves an 89% success rate on single objects within 9.5 seconds of image capture.
28
+
29
+ The contributions of this paper are: (1) novel usage of NeRF in a sequential setting, rapidly evolving the NeRF representation between grasps, (2) improvements in scene geometry reconstruction speed built on existing methods, (3) an active sensing approach for efficiently stopping capture early, (4) a novel Sim2Real training pipeline to acclimate a grasp planner to NeRF geometry characteristics, (5) a dataset of 8667 Blender-rendered scenes of transparent objects with robust grasps, and (6) experimental data suggesting that Evo-NeRF enables rapid grasping on NeRF.
30
+
31
+ ## 2 Related Work
32
+
33
+ Neural Radiance Fields (NeRF) NeRF [3] is a neural-network scene representation that enables photorealistic synthesis of novel views of a scene given a set of images and camera matrices. The representation is a function of location and view angle, and returns a density and a view-dependent color. Densities and colors sampled along a camera ray are aggregated using volumetric rendering to produce a pixel color. NeRF is popular in the computer vision and graphics communities, with applications in dynamic scene reconstruction [12, 13], image synthesis [14, 15, 16], pose estimation [17, 18, 19], and more. Optimizing NeRF to reconstruct a single scene can take hours or days, making it impractical for many robotics applications. Instant-NGP [11] and others [9, 20] speed up NeRF by using voxel feature grids instead of multi-layer perceptrons to simplify or remove [10] a computational bottleneck. We build on Instant-NGP [11], which uses a learnable hash encoding and a highly optimized CUDA implementation to speed up NeRF training from the order of hours to seconds. Others have sped up NeRF by reusing computation between scenes through learned priors. Existing methods [21, 22, 23, 24, 25] use convolutional neural networks (CNNs) to extract image features as input to a shared network that predicts the NeRF. Tancik et al. [26] and Gao et al. [27] speed up NeRF training using meta-learning to initialize network weights to ones that converge faster for likely scenes. In this work, we propose using past reconstructions of a scene as an initialization for the current reconstruction, allowing rapid adaptation to changes in the scene.
34
+
35
+ ![01963f84-6b4f-7dfd-ab50-0b047e3ccad7_2_308_200_1183_368_0.jpg](images/01963f84-6b4f-7dfd-ab50-0b047e3ccad7_2_308_200_1183_368_0.jpg)
36
+
37
+ Figure 2: Evo-NeRF for rapid grasping: (a) The robot begins moving along a hemisphere capture trajectory (b) Evo-NeRF progressively trains NeRF during arm motion, quickly building graspable geometry of the wineglass. Grasp confidence from Rad-Net builds as NeRF learns geometry, reaching the stopping threshold at (3). (c) Robot execution of the grasp given by Rad-Net at the early stop point.
38
+
39
+ NeRFs in Robotics Recent research has shown NeRFs to be a promising scene representation for downstream robotics tasks such as navigation and SLAM [6, 7, 28] and manipulation [5, 2, 8]. Yen-Chen et al. [17] and Tseng et al. [29] use a trained NeRF model to estimate an object's 6-DOF pose by minimizing the residuals between a rendered image and a given observed image. Driess et al. [4] train a graph neural network to learn a dynamics model in a multi-object scene represented through a NeRF model, while Li et al. [5] condition a NeRF model on a learned latent dynamics model to plan to visual goals in simulated environments. We propose building on advances in NeRF and its applications to robotics to speed up NeRF-based grasping for practical uses.
40
+
41
+ Grasping Transparent Objects: Most closely related to this paper are two recent works leveraging NeRFs to manipulate objects that cannot be detected by commodity RGBD sensors. Yen-Chen et al. [8] use a NeRF model offline to train dense object descriptors and manipulate thin and reflective objects. Ichnowski et al. [2] show that manually constructing an offline dataset of a given scene then training NeRF allows off-the-shelf grasp planners [30] to compute successful grasps on transparent objects. ClearGrasp [1] trains a Sim2Real depth prediction network on RGB images, then uses this network in real environments to estimate surface geometry for grasps. This idea has been extended to pointclouds and with more efficient real-world data collection [31, 32]. Using NeRF removes the requirement for RGB datasets which make Sim2Real challenging, and has superior performance on thin surfaces, occlusions, and complex backgrounds.
42
+
43
+ ## 3 Problem Statement
44
+
45
+ Given a set of transparent objects resting on a planar workspace, the objective is for the robot to find, grasp, and remove each object quickly. Objects are placed close to each other(2.5cm)and the robot has an actuatable camera and a parallel jaw gripper (Fig. 1). The focus is on finding robust grasps rapidly, with grasp success measured as transporting an object without dropping. We assume (1) objects rest in graspable stable poses on a flat surface, (2) objects are in the reachable workspace of the robot with a known forward kinematic model, (3) the camera-to-arm transform is known and stable, and (5) the robot can follow a known obstacle-free trajectory to capture images.
46
+
47
+ ![01963f84-6b4f-7dfd-ab50-0b047e3ccad7_3_417_197_959_374_0.jpg](images/01963f84-6b4f-7dfd-ab50-0b047e3ccad7_3_417_197_959_374_0.jpg)
48
+
49
+ Figure 3: Comparison of Evo-NeRF's training over time vs. NGP on the exact same camera trajectory. Added geometry regularization improves the speed and smoothness of geometry reconstruction.
50
+
51
+ ## 4 Method
52
+
53
+ To rapidly compute robust grasps, we propose Evo-NeRF and Rad-Net. Evo-NeRF, or Evolving NeRF, builds on Instant-NGP [11], a fast implementation of NeRF, and adds a series of modifications to make it practical. Rad-Net, or Radiance-adjusted grasp Network, is a network trained to compute grasps from geometry reconstructed from a NeRF.
54
+
55
+ ### 4.1 Evo-NeRF
56
+
57
+ To shorten the time to get a trained NeRF, we propose Evo-NeRF, a method that pipelines image capture with NeRF training, reuses weights in sequential grasping, adds regularization to counter effects from rapid capture, and includes an early stopping condition to start a grasp when the grasp network has high confidence.
58
+
59
+ Image capture: The Evo-NeRF method starts with the robot moving a camera around its workspace to capture images. Heuristically, hemispherical captures are ideal for NeRF since they maximally vary the view angles of the scene. The Evo-NeRF capture trajectory sweeps the camera through a discretized hemisphere centered at a location of interest while pointing at the center. First, the camera sweeps around the z-axis to maximize the variance of viewing angle early in the capture sequence. In experiments, we capture images every 3 cm while moving at 20 cm/s; a full capture trajectory takes 16 seconds and includes 80 images, with the trajectory shown in Fig. 1.
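A minimal sketch of generating such a discretized hemisphere of camera viewpoints; the pitch range and discretization counts are illustrative assumptions, and converting positions and look directions into full end-effector poses for a specific robot is omitted:

```python
import numpy as np

def hemisphere_poses(center, radius, n_yaw=20, n_pitch=4):
    """Camera positions on a hemisphere around `center`, each paired with a
    unit look-at direction toward the center; yaw is swept at each pitch."""
    poses = []
    for pitch in np.linspace(np.deg2rad(20), np.deg2rad(70), n_pitch):
        for yaw in np.linspace(0, 2 * np.pi, n_yaw, endpoint=False):
            offset = radius * np.array([np.cos(pitch) * np.cos(yaw),
                                        np.cos(pitch) * np.sin(yaw),
                                        np.sin(pitch)])
            position = np.asarray(center) + offset
            look_dir = (np.asarray(center) - position) / radius
            poses.append((position, look_dir))
    return poses
```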
60
+
61
+ Continual NeRF training: NeRF training, even sped up by Instant-NGP, is a bottleneck. We propose continually training NeRF from the moment the first image is captured, and incrementally adding images to the NeRF training dataset as the camera moves to new viewpoints. This effectively pipelines the image-capture and NeRF-training processes, and allows for usable NeRF representations quickly after (and sometimes before) the capture process finishes.
62
+
63
+ During each capture motion we train NeRF in batches of 48 steps, adding new images between batches as they become available. This is akin to other online neural implicit methods like iMAP [6] and NICE-SLAM [33], which also update the image set between training batches. We compute the camera pose for each image using forward kinematics. In practice, this yields pose errors of around 1 cm, which NeRF accounts for by optimizing the camera extrinsics.
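A minimal sketch of this pipelined capture-and-train loop is shown below; `trainer`, `camera`, and `arm` are assumed interfaces standing in for an Instant-NGP training wrapper, the robot camera, and the arm executing the capture trajectory, not the actual implementation:

```python
def capture_and_train(trainer, camera, arm, steps_per_batch=48):
    """Interleave NeRF training batches with images arriving from the moving camera."""
    arm.start_capture_trajectory()               # non-blocking motion
    while not arm.trajectory_done():
        for image, cam_pose in camera.new_frames():
            trainer.add_image(image, cam_pose)   # grow the training set online
        trainer.train(steps_per_batch)           # one batch of NeRF updates
    return trainer
```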
64
+
65
+ Reusing NeRF weights: In sequential grasping scenarios, scenes often change only by the removal of the last object grasped. To take advantage of the information already learned, we initialize the NeRF for each grasp with the network weights from the previous grasp. In implementation, we remove the old images from the training dataset and start capturing and training on images for the next grasp.
66
+
67
+ ![01963f84-6b4f-7dfd-ab50-0b047e3ccad7_4_421_198_955_374_0.jpg](images/01963f84-6b4f-7dfd-ab50-0b047e3ccad7_4_421_198_955_374_0.jpg)
68
+
69
+ Figure 4: Sim2Real Dataset Generation. Each scene includes a subset of the training objects in simulation (Fig. 5). Top: Grasp generation starts by sampling grasps on the object meshes and projecting them to a top-down view. Bottom: We render multiple views of each scene using Blender, then train Instant-NGP and render a top-down depth image. We accumulate NeRF depth renderings and projected grasps into a dataset.
70
+
71
+ Geometry regularization: A well-known artifact of NeRF's volumetric rendering loss is floaters: spurious regions of density floating in space. When using NeRF for view synthesis, floaters can go unnoticed, but in grasping, floaters can lead to grasp failures. We apply two regularizations that increase the speed and smoothness of geometry reconstruction while sacrificing some RGB quality.
72
+
73
+ First, we adapt the total-variation regularization loss (TV-loss) from Plenoxels [34] to discourage floaters and encourage smooth scene geometry. At each training step, Evo-NeRF samples $N$ random points $p_i$ using rejection sampling to constrain samples to locations with non-trivial density values. Evo-NeRF then queries the density at all 8 connected neighbors $n_j^i$ at a radius $r$. The final TV-loss is $L_{\mathrm{tv}} = \sum_{i=1}^{N}\sum_{j=1}^{8} \lambda_{\mathrm{tv}} \left( \sigma(p_i) - \sigma(n_j^i) \right)^2$, where $\sigma$ is the raw, pre-activation output of the density network and $\lambda_{\mathrm{tv}}$ is a loss scaling factor.
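A minimal PyTorch sketch of this regularizer is shown below; the specific choice of the 8 neighbor directions (cube-corner directions scaled to radius $r$) is an assumption, and `density_fn` stands in for the raw (pre-activation) density query of the NeRF:

```python
import torch

def tv_loss(density_fn, points, radius, lam_tv):
    """TV regularizer: squared difference between the raw density at sampled
    points p_i and at 8 neighboring offsets n_j^i a distance `radius` away.
    `density_fn` maps (M, 3) positions to raw densities of shape (M,)."""
    signs = torch.tensor([[sx, sy, sz]
                          for sx in (-1.0, 1.0)
                          for sy in (-1.0, 1.0)
                          for sz in (-1.0, 1.0)], device=points.device)
    dirs = signs / signs.norm(dim=-1, keepdim=True)               # 8 unit directions
    sigma_p = density_fn(points)                                   # (N,)
    neighbors = points[:, None, :] + radius * dirs[None, :, :]     # (N, 8, 3)
    sigma_n = density_fn(neighbors.reshape(-1, 3)).reshape(-1, 8)  # (N, 8)
    return lam_tv * ((sigma_p[:, None] - sigma_n) ** 2).sum()
```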
74
+
75
+ Second, sampling along each ray more coarsely during training reduces floaters and quickly acquires meaningful geometry. In Instant-NGP the distance between samples grows proportionally to distance along the ray, and we scale this distance to be 10× larger. By training with coarse samples, the NeRF is incentivized to learn a low-frequency representation of the scene to minimize reconstruction error.
76
+
77
+ Efficient perception stopping: In scenes where NeRF recovers usable geometry before the full camera trajectory has terminated, Evo-NeRF can terminate the capture phase early to speed task completion. In Sec. 5.2 we present experiments showing this by querying Rad-Net's grasp confidence in a closed loop while the robot moves the camera and trains NeRF. When the confidence exceeds a threshold, the capture stops early and the robot executes the grasp.
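A minimal sketch of this confidence-based early stopping, reusing the assumed interfaces from the capture loop above plus placeholder `render_depth` and `rad_net` handles:

```python
CONF_THRESHOLD = 0.7  # stopping threshold used in our single-object experiments

def capture_until_confident(trainer, camera, arm, render_depth, rad_net):
    """Stop the capture trajectory as soon as Rad-Net's best grasp is confident."""
    arm.start_capture_trajectory()
    best_grasp = None
    while not arm.trajectory_done():
        for image, cam_pose in camera.new_frames():
            trainer.add_image(image, cam_pose)
        trainer.train(48)
        depth = render_depth(trainer)            # top-down depth image from the NeRF
        grasp, confidence = rad_net.plan(depth)  # highest-confidence grasp so far
        if confidence >= CONF_THRESHOLD:
            arm.stop_trajectory()                # end the capture early
            best_grasp = grasp
            break
    return best_grasp
```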
78
+
79
+ ### 4.2 Grasp Planning Network
80
+
81
+ When NeRF is trained to completion with dense camera viewpoints, grasp planners trained on ground-truth depth, like Dex-Net [35], produce usable grasps. However, in an online setting where viewpoints are of lower quality, depth images rendered from NeRF appear significantly different from depth images from standard RGBD cameras. To mitigate this test-time distribution shift and enable more reliable grasping from online NeRFs, we train a network directly on NeRF-rendered depth maps to predict grasps in a Sim2Real fashion.
82
+
83
+ Network Architecture: We train a location network to predict the center of the grasp location given a NeRF-rendered depth image, and a rotation network to predict the discretized grasp angle given a cropped patch around the grasp location. We adapt the grasping architecture proposed by Zhu et al. [36], which suggests that an equivariant convolutional neural network learns to perform top-down grasps from fewer samples than standard networks. We train the location and rotation networks on a static grasp dataset, in contrast to the online setting in Zhu et al. [36].
84
+
85
+ Dataset Generation: We generate the training dataset in simulation. We use 7 object meshes that are representative of common household transparent objects graspable by the YuMi robot, shown in Fig. 5a. We model all objects with the same density as glass (2500 kg/m³). Then, we assemble a set of scenes with labeled grasp qualities: we randomly place objects in stable poses
86
+
87
+ ![01963f84-6b4f-7dfd-ab50-0b047e3ccad7_5_369_223_1063_271_0.jpg](images/01963f84-6b4f-7dfd-ab50-0b047e3ccad7_5_369_223_1063_271_0.jpg)
88
+
89
+ Figure 5: Training and testing objects. The Blender rendering in (a) shows the 7 objects we use in data generation for computing grasps and rendering in various stable poses. Objects in (b) are real objects that are in-distribution with the training objects. We also test on out-of-distribution objects shown in (c). To test grasping in clutter, we set up various testing scenes with objects in and out of distribution, with examples shown in (d).
90
+
91
+ on a planar surface and analytically sample antipodal grasp closure axes based on mesh surface normals as in Dex-Net [37]. We use a soft point-contact model [38], and evaluate the probability of grasp success using wrench resistance [39], a common analytic measure of grasp success. This method is inexpensive to compute (0.02 s per grasp) and has high precision for measuring grasp success [40]. We densely sample 1000 collision-free grasps for each stable pose and use Blender to render the scenes.
92
+
93
+ Training: To train Rad-Net, we project sampled grasps onto the depth images and store at each pixel the maximum grasp confidence over all rotations, resulting in confidence heatmaps. We dilate and blur the masks with a 3×3 kernel to smooth the predictions, and randomly augment both the depth images and the confidence heatmaps with translation, shear, and scale transformations. To train the rotation network, we sample crops from grasps with quality above 0.7, and use a cross-entropy loss on the output rotation probabilities.
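A minimal sketch of building one such confidence-heatmap target (the OpenCV-based smoothing is an illustrative choice, and the per-pixel max is taken over all projected grasps):

```python
import numpy as np
import cv2

def make_confidence_heatmap(grasp_pixels, grasp_quality, image_shape=(144, 256)):
    """Per-pixel grasp-confidence target from projected grasps.

    grasp_pixels:  (M, 2) array of (row, col) projections of grasp centers
    grasp_quality: (M,) analytic grasp qualities in [0, 1]
    """
    heatmap = np.zeros(image_shape, dtype=np.float32)
    for (r, c), q in zip(grasp_pixels, grasp_quality):
        heatmap[r, c] = max(heatmap[r, c], q)     # keep the best quality per pixel
    kernel = np.ones((3, 3), np.uint8)
    heatmap = cv2.dilate(heatmap, kernel)          # spread labels to neighbors
    heatmap = cv2.GaussianBlur(heatmap, (3, 3), 0) # smooth the targets
    return heatmap
```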
94
+
95
+ Grasp Planning: To execute a grasp from Rad-Net, we render a 144×256 depth image from the NeRF at the camera pose used during dataset generation, using the ray-transmittance truncation of Dex-NeRF [2]. We query the location network on this depth image to obtain a heatmap of grasp confidence over the image, then crop a patch of the depth image centered at the argmax of this heatmap. The rotation network takes this crop and outputs 8 grasp-angle probabilities, and we take the weighted average of the argmax with its neighbors to produce the final grasp angle. We determine grasp depth by analyzing a local pointcloud at the grasp location and subtracting a static 1.5 cm grasp depth from the highest point.
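A minimal sketch of these planning steps is shown below; `loc_net` and `rot_net` stand in for the trained networks (assumed to return NumPy arrays), and the crop size, angle-bin convention, and edge handling are simplifying assumptions:

```python
import numpy as np

def plan_grasp(depth, loc_net, rot_net, crop_size=32, angle_bins=8):
    """Location heatmap -> argmax crop -> angle from the rotation head -> grasp depth."""
    heatmap = loc_net(depth)                                   # (H, W) grasp confidence
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    half = crop_size // 2
    patch = depth[max(r - half, 0):r + half, max(c - half, 0):c + half]
    probs = rot_net(patch)                                     # (angle_bins,) probabilities
    k = int(np.argmax(probs))
    idx = np.clip(np.array([k - 1, k, k + 1]), 0, angle_bins - 1)   # argmax and neighbors
    bin_angles = idx * (np.pi / angle_bins)                    # bins assumed to span [0, pi)
    angle = float(np.average(bin_angles, weights=probs[idx]))  # weighted average angle
    # grasp depth: 1.5 cm below the highest local point (smallest depth value)
    z = float(patch.min()) + 0.015
    return (r, c), angle, z, float(heatmap[r, c])
```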
96
+
97
+ ## 5 Experiments
98
+
99
+ We evaluate the reliability of Evo-NeRF paired with Rad-Net vs. Dex-Net [35], evaluate the speed improvements from early-stopped captures and from Evo-NeRF reusing representations, and ablate aspects of the system, including the NeRF modifications and training on NeRF depth vs. ground-truth depth. We compare to Dex-Net to highlight the improvement in reliability gained from training on NeRF-rendered depth rather than ground-truth depth, and note that in Dex-NeRF [2], the NeRF model was trained for 1900× longer, on an offline, manually captured set of images with precisely calibrated poses from Colmap [41, 42]. This difference in view quality and training length under rapid capture results in a notable drop in raw Dex-Net grasp robustness because of lower-quality reconstructions.
100
+
101
+ ### 5.1 Physical Setup
102
+
103
+ We evaluate on a physical YuMi robot with a ZED Mini camera. The pose of the ZED relative to the arm that holds it is calibrated with a chessboard once before all experiments. We surround the robot with a kitchen-like workspace containing printed images of a countertop and shelves, where test objects are positioned near the center of the workspace. The workspace has 3 LED floodlights positioned across from the robot aiming at the workspace. We use one NVIDIA GeForce RTX 3080 GPU for NeRF training and grasp network inference. We evaluate on 9 different objects, both in distribution and out of distribution with respect to the train set in Fig. 5a.
104
+
105
+ Table 1: Single-object results: each cell shows the average over 27 different trials. Full capture compares success after a complete camera trajectory, and early stop compares grasp success at the same early-stop point. Early stopping results in a 41% speed improvement with no drop in success rate for Rad-Net.
106
+
107
+ <table><tr><td>Method</td><td>Full Capture Success</td><td>Early Stop Success</td></tr><tr><td>Dex-Net</td><td>56%</td><td>11 %</td></tr><tr><td>Rad-Net</td><td>89 %</td><td>89 %</td></tr><tr><td>Time</td><td>16s</td><td>9.5s</td></tr></table>
108
+
109
+ ![01963f84-6b4f-7dfd-ab50-0b047e3ccad7_6_319_538_1158_225_0.jpg](images/01963f84-6b4f-7dfd-ab50-0b047e3ccad7_6_319_538_1158_225_0.jpg)
110
+
111
+ Figure 6: Decluttering results. We report (a) histograms of objects remaining after each trial for Rad-Net and Dex-Net [35] and (b) the ratio of the capture time to the full capture time for each method. Rad-Net is able to extract more objects than Dex-Net. Furthermore, reusing the NeRF to update scene geometry rather than recapturing the scene retains Rad-Net's performance while reducing capture time by 61%.
112
+
113
+ ### 5.2 Rapid single object retrieval
114
+
115
+ We apply confidence-based capture early stopping (Sec. 4.1) with a threshold of 70% to execute a grasp as quickly as possible, as shown in Fig. 2. We place each of the 9 test objects near the center of the workspace, and report grasp success and total time spent capturing images. We repeat each experiment 3 times, comparing Rad-Net with Dex-Net [35], and evaluate with and without early capture stopping. Since Dex-Net does not output a grasp confidence, we use the same stopping point for both networks, as determined by Rad-Net. An experiment is successful if the robot grasps the object and places it into the storage bin.
116
+
117
+ Table 1 summarizes the results. Using Rad-Net for early stopping reduces capture time by 41%, with no drop in reliability. On average, the robot grasps objects within 9.5 seconds with an 89% success rate over 54 trials. Rad-Net outperforms Dex-Net in grasp success by 1.6×, even with a full capture of the scene, as a result of its adaptation to NeRF density characteristics. Dex-Net performs poorly overall because of its sensitivity to NeRF depth noise, while Rad-Net's primary failure cases are on out-of-distribution objects, specifically missed grasps on the lightbulb and tape dispenser. This is likely because the training set contains no small-profile items.
118
+
119
+ ### 5.3 Sequential decluttering
120
+
121
+ We evaluate on the decluttering task, where multiple transparent objects are placed in close proximity (2 cm) in stable poses on the table, and the robot must sequentially grasp and place all objects in the bin one by one, as shown in Fig. 1. We consider three tiers of experiments with different difficulties and two scenes per tier, resulting in a total of 6 different scenes (Fig. 5). We repeat each scene 3 times and compare against Dex-Net [35]. At the beginning of each experiment, the robot executes a full capture of the scene as shown in Fig. 1a. Then, after each consecutive grasp, the robot executes a much smaller hemispherical capture centered at the grasp location to update the NeRF (Fig. 1c). We allow only as many grasp attempts as there are objects in the scene. If a grasp planner generates a wrong grasp leading to a joint over-torque error, we terminate the experiment as a failure.
122
+
123
+ Results are summarized in Fig. 6, showing the number of objects remaining after each trial terminates and the capture-time ratio. Overall, Evo-NeRF with Rad-Net clears 72% of test objects across all tiers, while Dex-Net clears 48%. Reusing the NeRF takes only 39% of the capture time of performing a full image capture for each grasp, which has similar clearing performance (76%). This suggests Evo-NeRF retains graspable geometry over successive updates, despite their short duration.
124
+
125
+ Table 2: Ablations of graspability regularizations. We query Dex-Net [35] continuously through camera capture trajectories and report the percent of the trajectory needed until the highest probability grasp is on an object. We compare vanilla Instant-NGP with Evo-NeRF, as well as ablating TV-loss and coarse ray sampling.
126
+
127
+ <table><tr><td/><td>Instant-NGP</td><td>Evo-NeRF -TV</td><td>Evo-NeRF -Coarse</td><td>Evo-NeRF</td></tr><tr><td>$\%$ Capture Until Grasp $\downarrow$</td><td>80.3 %</td><td>64.8 %</td><td>62.0%</td><td>52.6 %</td></tr></table>
128
+
129
+ ### 5.4 Graspability ablation
130
+
131
+ In this experiment, we ablate the changes made to NeRF that speed up the emergence of graspable geometry. We capture 9 single-object and 3 multi-object scenes, then continuously train each NeRF variant as during capture, using the same static image sets and holding all other hyperparameters constant. We measure the capture time needed until the first grasp output by Dex-Net [35] lands on a real object, as a proxy for graspability convergence. Table 2 shows the percent of the capture trajectory needed, and Fig. 3 shows a timelapse of visual quality over a capture. Results suggest that the proposed method produces graspable geometry faster, with a 32% reduction in the capture time needed to obtain a grasp from Dex-Net.
132
+
133
+ ### 5.5 NeRF Depth vs Ground Truth Depth
134
+
135
+ We quantitatively compare the effect of the distribution shift between NeRF-generated and ground-truth depth images on training grasp networks. We generate ground-truth depth images with pyrender [43]. We train an additional network, GT-Net, on ground-truth depth only. We also evaluate Dex-Net [35], which is trained on a much larger dataset of ground-truth depth images. We test on the held-out test set of NeRF-rendered depth images and report average grasp confidence using the soft point-contact model and wrench resistance. We normalize results with respect to the performance of GT-Net evaluated on ground-truth depth images of the same scenes.
136
+
137
+ GT-Net achieves 0%, Rad-Net achieves 42%, and Dex-Net achieves 0.1%, suggesting that there is a large distribution shift from training on ground-truth depth to testing on NeRF depth in simulation. Rad-Net's performance in simulated scenes is worse than in real scenes because the synthetic dataset contains fewer camera angles than the real captures (52 vs. 80), resulting in more floaters.
138
+
139
+ ## 6 Conclusion
140
+
141
+ In this work, we introduced Evo-NeRF, a method that progressively captures images and trains NeRFs for practical robotic grasping. While its rapid image capture produces lower-quality reconstructions than prior work, we propose reusing trained weights in sequential grasping, geometry regularization, and continual training to obtain better 3D reconstructions. We further propose a novel Sim2Real training pipeline, in which we train grasp networks on NeRF-rendered depth images in simulated environments, and we find the networks are able to predict high-quality grasps in the physical environment. In experiments, Evo-NeRF and Rad-Net grasp transparent objects around 1900× faster than Ichnowski et al. [2], with 89% success on singulated objects.
142
+
143
+ ### 6.1 Limitations and future work
144
+
145
+ Rad-Net operates on rendered depth images, discarding much of the rich 3D information in the NeRF. It may therefore miss more robust grasps that a network trained on 3D geometry (e.g., VGN [44]) could find. Future work in this direction could also include optimization techniques that use the differentiability of NeRF's density to refine a grasp estimate with local gradient descent. Though we have shown that NeRF can adapt to geometry deletion, NeRF resists adding new geometry; thus, reusing weights will slow down training when objects are added between grasps. Future work in adapting NeRF to changing scenes would greatly improve the practicality of real-time usage.
146
+
147
+ References
148
+
149
+ [1] S. S. Sajjan, M. Moore, M. Pan, G. Nagaraja, J. Lee, A. Zeng, and S. Song. Cleargrasp: 3d shape estimation of transparent objects for manipulation. CoRR, abs/1910.02550, 2019. URL http://arxiv.org/abs/1910.02550.
150
+
151
+ [2] J. Ichnowski, Y. Avigal, J. Kerr, and K. Goldberg. Dex-nerf: Using a neural radiance field to grasp transparent objects. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=z0jU2vZzhCk.
152
+
153
+ [3] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, pages 405-421. Springer, 2020.
154
+
155
+ [4] D. Driess, Z. Huang, Y. Li, R. Tedrake, and M. Toussaint. Learning multi-object dynamics with compositional neural radiance fields. arXiv preprint arXiv:2202.11855, 2022.
156
+
157
+ [5] Y. Li, S. Li, V. Sitzmann, P. Agrawal, and A. Torralba. 3d neural scene representations for visuomotor control. In Conference on Robot Learning, pages 112-123. PMLR, 2022.
158
+
159
+ [6] E. Sucar, S. Liu, J. Ortiz, and A. J. Davison. imap: Implicit mapping and positioning in real-time. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6229-6238, 2021.
160
+
161
+ [7] M. Adamkiewicz, T. Chen, A. Caccavale, R. Gardner, P. Culbertson, J. Bohg, and M. Schwager. Vision-only robot navigation in a neural radiance world. IEEE Robotics and Automation Letters, 7(2):4606-4613, 2022.
162
+
163
+ [8] L. Yen-Chen, P. Florence, J. T. Barron, T.-Y. Lin, A. Rodriguez, and P. Isola. Nerf-supervision: Learning dense object descriptors from neural radiance fields. arXiv preprint arXiv:2203.01913, 2022.
164
+
165
+ [9] C. Sun, M. Sun, and H.-T. Chen. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5459-5469, 2022.
166
+
167
+ [10] S. Fridovich-Keil, A. Yu, M. Tancik, Q. Chen, B. Recht, and A. Kanazawa. Plenoxels: Radiance fields without neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5501-5510, 2022.
168
+
169
+ [11] T. Müller, A. Evans, C. Schied, and A. Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph., 2022.
170
+
171
+ [12] K. Park, U. Sinha, J. T. Barron, S. Bouaziz, D. B. Goldman, S. M. Seitz, and R. Martin-Brualla. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5865-5874, 2021.
172
+
173
+ [13] Z. Li, S. Niklaus, N. Snavely, and O. Wang. Neural scene flow fields for space-time view synthesis of dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6498-6508, 2021.
174
+
175
+ [14] K. Schwarz, Y. Liao, M. Niemeyer, and A. Geiger. Graf: Generative radiance fields for 3D-aware image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), volume 33, 2020.
176
+
177
+ [15] E. R. Chan, M. Monteiro, P. Kellnhofer, J. Wu, and G. Wetzstein. pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5799-5809, 2021.
178
+
179
+ [16] J. Gu, L. Liu, P. Wang, and C. Theobalt. Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis. arXiv preprint arXiv:2110.08985, 2021.
180
+
181
+ [17] L. Yen-Chen, P. Florence, J. T. Barron, A. Rodriguez, P. Isola, and T.-Y. Lin. inerf: Inverting neural radiance fields for pose estimation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1323-1330. IEEE, 2021.
182
+
183
+ [18] Q. Meng, A. Chen, H. Luo, M. Wu, H. Su, L. Xu, X. He, and J. Yu. Gnerf: Gan-based neural radiance field without posed camera. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6351-6361, 2021.
184
+
185
+ [19] C.-H. Lin, W.-C. Ma, A. Torralba, and S. Lucey. Barf: Bundle-adjusting neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5741- 5751, 2021.
186
+
187
+ [20] A. Chen, Z. Xu, A. Geiger, J. Yu, and H. Su. Tensorf: Tensorial radiance fields. arXiv preprint arXiv:2203.09517, 2022.
188
+
189
+ [21] Q. Wang, Z. Wang, K. Genova, P. Srinivasan, H. Zhou, J. T. Barron, R. Martin-Brualla, N. Snavely, and T. Funkhouser. Ibrnet: Learning multi-view image-based rendering. In CVPR, 2021.
190
+
191
+ [22] A. Trevithick and B. Yang. Grf: Learning a general radiance field for 3d representation and rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15182-15192, 2021.
192
+
193
+ [23] A. Chen, Z. Xu, F. Zhao, X. Zhang, F. Xiang, J. Yu, and H. Su. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14124-14133, 2021.
194
+
195
+ [24] J. Chibane, A. Bansal, V. Lazova, and G. Pons-Moll. Stereo radiance fields (srf): Learning view synthesis for sparse views of novel scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7911-7920, 2021.
196
+
197
+ [25] A. Yu, V. Ye, M. Tancik, and A. Kanazawa. pixelnerf: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4578-4587, 2021.
198
+
199
+ [26] M. Tancik, B. Mildenhall, T. Wang, D. Schmidt, P. P. Srinivasan, J. T. Barron, and R. Ng. Learned initializations for optimizing coordinate-based neural representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2846-2855, 2021.
200
+
201
+ [27] C. Gao, Y. Shih, W.-S. Lai, C.-K. Liang, and J.-B. Huang. Portrait neural radiance fields from a single image. arXiv preprint arXiv:2012.05903, 2020.
202
+
203
+ [28] J. Abou-Chakra, F. Dayoub, and N. Sünderhauf. Implicit object mapping with noisy data. arXiv preprint arXiv:2204.10516, 2022.
204
+
205
+ [29] W.-C. Tseng, H.-J. Liao, L. Yen-Chen, and M. Sun. Cla-nerf: Category-level articulated neural radiance field. arXiv preprint arXiv:2202.00181, 2022.
206
+
207
+ [30] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. In Robotics: Science and Systems (RSS), 2017.
208
+
209
+ [31] H. Xu, Y. R. Wang, S. Eppel, A. Aspuru-Guzik, F. Shkurti, and A. Garg. Seeing glass: Joint point cloud and depth completion for transparent objects. CoRR, abs/2110.00087, 2021. URL https://arxiv.org/abs/2110.00087.
210
+
211
+ [32] T. Weng, A. Pallankize, Y. Tang, O. Kroemer, and D. Held. Multi-modal transfer learning for grasping transparent and specular objects. CoRR, abs/2006.00028, 2020. URL https://arxiv.org/abs/2006.00028.
212
+
213
+ [33] Z. Zhu, S. Peng, V. Larsson, W. Xu, H. Bao, Z. Cui, M. R. Oswald, and M. Pollefeys. Nice-slam: Neural implicit scalable encoding for slam. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022.
214
+
215
+ [34] A. Yu, S. Fridovich-Keil, M. Tancik, Q. Chen, B. Recht, and A. Kanazawa. Plenoxels: Radiance fields without neural networks. CoRR, abs/2112.05131, 2021. URL https://arxiv.org/abs/2112.05131.
216
+
217
+ [35] V. Satish, J. Mahler, and K. Goldberg. On-policy dataset synthesis for learning robot grasping policies using fully convolutional deep networks. IEEE Robotics and Automation Letters, 2019.
218
+
219
+ [36] X. Zhu, D. Wang, O. Biza, G. Su, R. Walters, and R. Platt. Sample efficient grasp learning using equivariant models. arXiv preprint arXiv:2202.09468, 2022.
220
+
221
+ [37] J. Mahler, F. T. Pokorny, B. Hou, M. Roderick, M. Laskey, M. Aubry, K. Kohlhoff, T. Kröger, J. Kuffner, and K. Goldberg. Dex-net 1.0: A cloud-based network of 3d objects for robust grasp planning using a multi-armed bandit model with correlated rewards. In 2016 IEEE international conference on robotics and automation (ICRA), pages 1957-1964. IEEE, 2016.
222
+
223
+ [38] Y. Zheng and W.-H. Qian. Coping with the grasping uncertainties in force-closure analysis. The international journal of robotics research, 24(4):311-327, 2005.
224
+
225
+ [39] J. Mahler, S. Patil, B. Kehoe, J. Van Den Berg, M. Ciocarlie, P. Abbeel, and K. Goldberg. Gp-gpis-opt: Grasp planning with shape uncertainty using gaussian process implicit surfaces and sequential convex programming. In 2015 IEEE international conference on robotics and automation (ICRA), pages 4919-4926. IEEE, 2015.
226
+
227
+ [40] C. M. Kim, M. Danielczuk, I. Huang, and K. Goldberg. Simulation of parallel-jaw grasping using incremental potential contact models. arXiv preprint arXiv:2111.01391, 2021.
228
+
229
+ [41] J. L. Schönberger and J.-M. Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
230
+
231
+ [42] J. L. Schönberger, E. Zheng, M. Pollefeys, and J.-M. Frahm. Pixelwise view selection for unstructured multi-view stereo. In European Conference on Computer Vision (ECCV), 2016.
232
+
233
+ [43] M. Matl. Pyrender. https://github.com/mmatl/pyrender, 2019.
234
+
235
+ [44] M. Breyer, J. J. Chung, L. Ott, R. Siegwart, and J. I. Nieto. Volumetric grasping network: Real-time 6 DOF grasp detection in clutter. CoRR, abs/2101.01132, 2021. URL https://arxiv.org/abs/2101.01132.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/Bxr45keYrf/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,165 @@
1
+ § EVO-NERF: EVOLVING NERF FOR SEQUENTIAL ROBOT GRASPING
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Sequential robot grasping of transparent objects is important in many industrial and household scenarios. We leverage recent speedups in NeRF training and further extend them to train progressively on images as they are captured. We propose terminating training early once sufficient task confidence is achieved, and reusing NeRF weights from grasp to grasp to adapt rapidly to object removal. We propose Evo-NeRF, which adds geometry regularizations that improve performance in rapid capture settings. Because NeRF geometry can be unreliable, general-purpose grasp planners such as Dex-Net struggle. To mitigate this distribution shift, we propose Rad-Net, a grasping network adapted to NeRF's characteristics and trained on NeRF models fit to synthetic photo-realistic scenes. In experiments, a physical YuMi robot using Evo-NeRF and Rad-Net achieves an 89% grasp success rate over 27 trials on single objects, with early capture termination providing a 41% speed improvement at no loss in reliability. In a sequential grasping task on 6 scenes, Evo-NeRF with weight reuse clears 72% of the objects, matching the performance of reconstructing the NeRF from scratch (76%) while taking 61% of the capture time.
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Sequentially grasping transparent objects is a highly useful but difficult task for robots due to the complexity of sensing the objects to grasp. It has applications in industry, pharmaceuticals, and households; but since camera-based sensors see through transparent objects from most angles, the assumptions underlying traditional disparity- and structure-from-motion-based methods break down. ClearGrasp [1] addresses this problem by training a CNN to infer local surface normals on transparent objects from RGBD images based on synthetic examples from Blender. It shows impressive results on 3-5 transparent objects separated by 2 cm; however, the system has challenges with thin walls of transparent containers and background distractors. To this end, Dex-NeRF [2] introduced a method that uses Neural Radiance Fields (NeRF) [3] to grasp transparent objects, but at the cost of hours of computation per grasp. In this paper, we propose Evo-NeRF, a method for progressively updating and rapidly training NeRF models for grasping, and Rad-Net, a neural network for robust grasping on NeRF models.
16
+
17
+ NeRFs are powerful 3D representations that can reconstruct traditionally challenging-to-model scenes, including scenes with transparent objects. NeRFs were originally designed for novel view synthesis, but their volumetric representation enables the extraction of scene geometry. The use of geometry extracted from NeRFs has been explored in robotic applications such as dynamics modeling [4, 5], localization [6] and motion planning [7], and computing grasps on transparent objects [2, 8].
18
+
19
+ In the last year, dramatic advancements in NeRF training speed have opened the door for real-world usage [9, 10, 11]. In this paper, we apply NeRF in a purely online setting to practical robotics applications that require sequentially grasping transparent objects in clutter, for example in dishwasher unloading, table clearing, and other household tasks. To make NeRF practical for robotic grasping, we build on Instant-NGP [11], a fast variant of NeRF. Rather than training on a fixed set of images, we incrementally optimize over a stream of images as they are captured during a robot motion. Because NeRF's convergence speed varies with scene difficulty, we also propose a method to terminate image capture upon achieving sufficient task confidence. We further adapt NeRF to sequential grasping by reusing NeRF weights from grasp to grasp and demonstrate its rapid adaptability to object removal.
20
+
21
+ [graphics]
22
+
23
+ Figure 1: Sequential object removal. (a) The YuMi moves a camera on its left hand across the full hemisphere trajectory (red arrow) to capture a scene of 5 glass objects (b) The robot immediately plans and executes a grasp from the NeRF (c) Short trajectories are used to update the NeRF between grasps (d) Evo-NeRF first reconstructs the whole scene (1) with the camera trajectory shown in (a), then progressively updates the scene with small camera captures shown in (c) as objects are removed.
24
+
25
+ Since we propose that the robot captures images and trains a NeRF as it moves, motion blur, kinematic limitations, and speed considerations reduce the quality of the recovered geometry and introduce prominent spurious geometry known as floaters. We propose adding geometry regularization to the training objective, which improves the recovered geometry, but out-of-the-box grasp planners still struggle to find quality grasps due to the remaining artifacts. Dex-NeRF [2], on the other hand, could use an out-of-the-box grasp sampler because it used diverse, high-quality, calibrated still images captured in an offline process.
26
+
27
+ To mitigate the effect of the lower-quality NeRF reconstructions, we propose a novel Sim2Real training pipeline for a NeRF-based grasping network, which transfers well to real-world NeRF reconstructions. This pipeline takes advantage of the training speed of Instant-NGP; without it, the pipeline would be computationally infeasible. Experiments on a real robot with an actuatable camera suggest that Evo-NeRF reconstructs graspable scene geometry rapidly and reliably, achieving an 89% success rate on single objects within 9.5 seconds of image capture when combined with Rad-Net.
28
+
29
+ The contributions of this paper are: (1) a novel usage of NeRF in a sequential setting, rapidly evolving the NeRF representation between grasps, (2) improvements in scene geometry reconstruction speed built on existing methods, (3) an active-sensing approach for efficiently stopping capture early, (4) a novel Sim2Real training pipeline to acclimate a grasp planner to NeRF geometry characteristics, (5) a dataset of 8667 Blender-rendered scenes of transparent objects with robust grasps, and (6) experimental data suggesting that Evo-NeRF enables rapid grasping on NeRF.
30
+
31
+ § 2 RELATED WORK
32
+
33
+ Neural Radiance Fields (NeRF): NeRF [3] is a neural-network scene representation that enables photorealistic synthesis of novel views of a scene given a set of images and camera matrices. The representation is a function of location and view angle, and returns a density and view-dependent color. Densities and colors sampled along a camera ray are aggregated using volumetric rendering to produce a pixel color. NeRF is popular in the computer vision and graphics communities, with applications in dynamic scene reconstruction [12, 13], image synthesis [14, 15, 16], pose estimation [17, 18, 19], and more. Optimizing NeRF to reconstruct a single scene can take hours or days, making it impractical for many robotics applications. Instant-NGP [11] and others [9, 20] speed up NeRF by using voxel feature grids instead of multi-layer perceptrons to simplify or remove [10] a computational bottleneck. We build on Instant-NGP [11], which uses a learnable hash encoding and a highly optimized CUDA implementation to speed up NeRF training from the order of hours to seconds. Others have also sped up NeRF by reusing computation between scenes through learned priors. Existing methods [21, 22, 23, 24, 25] use convolutional neural networks (CNNs) to extract image features as input to a shared network that predicts the NeRF. Tancik et al. [26] and Gao et al. [27] speed up NeRF training using meta-learning to initialize network weights to ones that converge faster for likely scenes. In this work, we propose using past reconstructions of a scene as an initialization for the current reconstruction, allowing rapid adaptation to changes in the scene.
34
+
35
+ [graphics]
36
+
37
+ Figure 2: Evo-NeRF for rapid grasping: (a) The robot begins moving along a hemisphere capture trajectory (b) Evo-NeRF progressively trains NeRF during arm motion, quickly building graspable geometry of the wineglass. Grasp confidence from Rad-Net builds as NeRF learns geometry, reaching the stopping threshold at (3). (c) Robot execution of the grasp given by Rad-Net at the early stop point.
38
+
39
+ NeRFs in Robotics Recent research has shown NeRFs to be a promising scene representation for downstream robotics tasks such as navigation and SLAM [6, 7, 28] and manipulation [5, 2, 8]. Yen-Chen et al. [17] and Tseng et al. [29] use a trained NeRF model to estimate an object's 6-DOF pose by minimizing the residuals between a rendered image and a given observed image. Driess et al. [4] train a graph neural network to learn a dynamics model in a multi-object scene represented through a NeRF model, while Li et al. [5] condition a NeRF model on a learned latent dynamics model to plan to visual goals in simulated environments. We propose building on advances in NeRF and its applications to robotics to speed up NeRF-based grasping for practical uses.
40
+
41
+ Grasping Transparent Objects: Most closely related to this paper are two recent works leveraging NeRFs to manipulate objects that cannot be detected by commodity RGBD sensors. Yen-Chen et al. [8] use a NeRF model offline to train dense object descriptors and manipulate thin and reflective objects. Ichnowski et al. [2] show that manually constructing an offline dataset of a given scene then training NeRF allows off-the-shelf grasp planners [30] to compute successful grasps on transparent objects. ClearGrasp [1] trains a Sim2Real depth prediction network on RGB images, then uses this network in real environments to estimate surface geometry for grasps. This idea has been extended to pointclouds and with more efficient real-world data collection [31, 32]. Using NeRF removes the requirement for RGB datasets which make Sim2Real challenging, and has superior performance on thin surfaces, occlusions, and complex backgrounds.
42
+
43
+ § 3 PROBLEM STATEMENT
44
+
45
+ Given a set of transparent objects resting on a planar workspace, the objective is for the robot to find, grasp, and remove each object quickly. Objects are placed close to each other (2.5 cm), and the robot has an actuatable camera and a parallel-jaw gripper (Fig. 1). The focus is on finding robust grasps rapidly, with grasp success measured as transporting an object without dropping it. We assume (1) objects rest in graspable stable poses on a flat surface, (2) objects are in the reachable workspace of the robot, which has a known forward kinematic model, (3) the camera-to-arm transform is known and stable, and (4) the robot can follow a known obstacle-free trajectory to capture images.
46
+
47
+ [graphics]
48
+
49
+ Figure 3: Comparison of Evo-NeRF's training over time vs. NGP on the exact same camera trajectory. The added geometry regularization improves the speed and smoothness of geometry reconstruction.
50
+
51
+ § 4 METHOD
52
+
53
+ To rapidly compute robust grasps, we propose Evo-NeRF and Rad-Net. Evo-NeRF, or Evolving NeRF, builds on Instant-NGP [11], a fast implementation of NeRF, and adds a series of modifications to make it practical. Rad-Net, or Radiance-adjusted grasp Network, is a network trained to compute grasps from geometry reconstructed from a NeRF.
54
+
55
+ § 4.1 EVO-NERF
56
+
57
+ To shorten the time to get a trained NeRF, we propose Evo-NeRF, a method that pipelines image capture with NeRF training, reuses weights in sequential grasping, adds regularization to counter effects from rapid capture, and includes an early stopping condition to start a grasp when the grasp network has high confidence.
58
+
59
+ Image capture: The Evo-NeRF method starts with the robot moving a camera around its workspace to capture images. Heuristically, hemispherical captures are ideal for NeRF since they maximally vary the view angles of the scene. The Evo-NeRF capture trajectory sweeps the camera through a discretized hemisphere centered at a location of interest while pointing at the center. First, the camera sweeps around the $z$-axis to maximize the variance of viewing angles early in the capture sequence. In experiments, we capture images every 3 cm while moving at 20 cm/s; a full capture trajectory takes 16 seconds and includes 80 images, with the trajectory shown in Fig. 1.
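+ To make the capture trajectory concrete, the following is a minimal numpy sketch of how a discretized hemispherical trajectory of look-at viewpoints could be generated; the radius, ring count, and waypoint spacing are illustrative assumptions rather than the paper's exact values.

```python
import numpy as np

def hemisphere_waypoints(center, radius=0.4, n_rings=4, spacing=0.03):
    """Generate camera positions on a hemisphere above `center`.

    Positions are ordered ring by ring around the z-axis so that the view
    angle varies quickly early in the capture; the camera is oriented to
    look at `center` when the trajectory is executed.
    """
    waypoints = []
    # elevation angles from near-horizontal up toward the pole
    for elev in np.linspace(np.deg2rad(15), np.deg2rad(75), n_rings):
        ring_radius = radius * np.cos(elev)
        n_pts = max(int(2 * np.pi * ring_radius / spacing), 1)
        for az in np.linspace(0, 2 * np.pi, n_pts, endpoint=False):
            offset = radius * np.array(
                [np.cos(elev) * np.cos(az), np.cos(elev) * np.sin(az), np.sin(elev)]
            )
            waypoints.append(center + offset)
    return np.array(waypoints)

poses = hemisphere_waypoints(np.array([0.4, 0.0, 0.1]))
```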
60
+
61
+ Continual NeRF training: NeRF training, even sped up by Instant-NGP, is a bottleneck. We propose continually training NeRF from the moment the first image is captured, and incrementally adding images to the NeRF training dataset as the camera moves to new viewpoints. This effectively pipelines the image-capture and NeRF-training processes, and allows for usable NeRF representations quickly after (and sometimes before) the capture process finishes.
62
+
63
+ During each capture motion we train NeRF in batches of 48 steps, adding new images between batches as they become available. This is akin to other online neural implicit methods like iMAP [6] and NICE-SLAM [33], which also update the image set between training batches. We compute the camera frame using forward kinematics and pair it with each image. In practice, this yields pose errors of around 1 cm, which NeRF accounts for by optimizing the camera extrinsics.
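+ A schematic sketch of this pipelined capture-and-train loop is shown below; `camera`, `robot`, and `nerf` are hypothetical interfaces standing in for the camera driver, the arm controller, and an Instant-NGP wrapper, not the actual APIs used.

```python
import queue
import threading

image_queue = queue.Queue()
capture_done = threading.Event()

def capture_images(camera, robot, waypoints):
    """Drive the arm along the capture trajectory, pairing each image with
    the camera pose from forward kinematics (hypothetical driver objects)."""
    for wp in waypoints:
        robot.move_towards(wp)
        image_queue.put((camera.grab(), robot.forward_kinematics()))
    capture_done.set()

def train_online(nerf, steps_per_batch=48):
    """Interleave short NeRF optimization batches with dataset growth."""
    while not (capture_done.is_set() and image_queue.empty()):
        while not image_queue.empty():           # fold in newly captured views
            nerf.add_view(*image_queue.get())
        nerf.train_steps(steps_per_batch)        # hypothetical Instant-NGP wrapper
```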
64
+
65
+ Reusing NeRF weights: In sequential grasping scenarios, scenes often change only by the removal of the last object grasped. To take advantage of the information already learned, we reuse the NeRF network weights from the previous grasp in the subsequent grasp. In our implementation, we remove the old images from the training dataset and start capturing and training on images for the next grasp.
66
+
67
+ [graphics]
68
+
69
+ Figure 4: Sim2Real Dataset Generation. Each scene includes a subset of the training objects in simulation (Fig. 5). Top: Grasp generation starts by sampling grasps on the object meshes and projecting them to a top-down view. Bottom: We render multiple views of each scene using Blender, then train Instant-NGP and render a top-down depth image. We accumulate NeRF depth renderings and projected grasps into a dataset.
70
+
71
+ Geometry regularization: A well-known artifact of NeRF's volumetric rendering loss is floaters, spurious regions of density floating in space. When using NeRF for view synthesis, floaters can go unnoticed, but in grasping they can lead to grasp failures. We apply two regularizations that increase the speed and smoothness of geometry reconstruction while sacrificing some RGB quality.
72
+
73
+ First, we adapt the total-variation regularization loss (TV-loss) from Plenoxels [34] to discourage floaters and encourage smooth scene geometry. During training, at each step Evo-NeRF samples $N$ random points $p_i$ using rejection sampling to constrain samples to locations with non-trivial density values. Evo-NeRF then queries the density at all 8-connected neighbors $n_j^i$ at a radius $r$. The final TV-loss is $L_{\mathrm{tv}} = \sum_{i=1}^{N}\sum_{j=1}^{8} \lambda_{\mathrm{tv}} \left( \sigma(p_i) - \sigma(n_j^i) \right)^2$, where $\sigma$ is the raw, pre-activation output of the density network and $\lambda_{\mathrm{tv}}$ is a loss scaling factor.
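+ A minimal PyTorch sketch of this TV regularizer, assuming a callable `query_density` that returns pre-activation densities, and taking the cube-corner offsets as one plausible choice of the 8 neighbors:

```python
import torch

def tv_loss(query_density, points, radius, lam_tv=1e-4):
    """Total-variation style regularizer on raw (pre-activation) density.

    `query_density(x)` maps an (M, 3) tensor of positions to (M,) densities;
    `points` are N samples kept by rejection sampling in regions of
    non-trivial density.
    """
    signs = torch.tensor(
        [[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
        dtype=points.dtype, device=points.device,
    )                                                               # (8, 3)
    neighbors = points[:, None, :] + radius * signs[None, :, :]     # (N, 8, 3)
    sigma_p = query_density(points)                                 # (N,)
    sigma_n = query_density(neighbors.reshape(-1, 3)).reshape(points.shape[0], 8)
    return lam_tv * ((sigma_p[:, None] - sigma_n) ** 2).sum()
```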
74
+
75
+ Second, sampling along each ray more coarsely during training reduces floaters and quickly acquires meaningful geometry. In Instant-NGP the distance between samples grows proportionally to the distance along the ray, and we scale this distance to be 10x larger. By training with coarse samples, the NeRF is incentivized to learn a low-frequency representation of the scene to minimize reconstruction error.
76
+
77
+ Efficient perception stopping: In scenes where NeRF recovers usable geometry before the full camera trajectory has finished, Evo-NeRF can terminate the capture phase early to speed up task completion. In Sec. 5.2 we present experiments demonstrating this by querying Rad-Net's grasp confidence in a closed loop while the robot moves the camera and trains NeRF. When the confidence exceeds a threshold, the capture stops early and the robot executes the grasp.
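+ A sketch of the confidence-gated stopping logic; `nerf.render_depth` and the `rad_net` calls are hypothetical wrappers around the NeRF depth renderer and Rad-Net, and in the real system training, rendering, and arm motion run concurrently.

```python
def capture_with_early_stop(robot, nerf, rad_net, waypoints, top_down_pose,
                            threshold=0.7):
    """Terminate the capture trajectory once grasp confidence is sufficient."""
    for wp in waypoints:
        robot.move_towards(wp)
        depth = nerf.render_depth(top_down_pose)
        if rad_net.max_confidence(depth) >= threshold:
            break                                # early stop: threshold reached
    return rad_net.plan_grasp(nerf.render_depth(top_down_pose))
```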
78
+
79
+ § 4.2 GRASP PLANNING NETWORK
80
+
81
+ When NeRF is trained to completion with dense camera viewpoints, grasp planners trained on ground-truth depth, like Dex-Net [35], produce usable grasps. However, in an online setting where viewpoints are of lower quality, depth images rendered from NeRF appear significantly different from depth images from standard RGBD cameras. To mitigate this test-time distribution shift and enable more reliable grasping from online NeRFs, we train a network directly on NeRF-rendered depth maps to predict grasps in a Sim2Real fashion.
82
+
83
+ Network Architecture: We train a location network to predict the center of the grasp location given a NeRF-rendered image, and a rotation network to predict the discretized grasp angle given a cropped patch around the grasp location. We adapt the grasping architecture proposed by Zhu et al. [36], which suggests that an equivariant convolutional neural network learns to perform top-down grasps with fewer samples than standard networks. We train the location and rotation networks on a static grasp dataset, in contrast to the online setting in Zhu et al. [36].
84
+
85
+ Dataset Generation: We generate the training dataset in simulation. We use 7 object meshes that are representative of common household transparent objects graspable by the YuMi robot, shown in Fig. 5a. We model all objects with the same density as glass (2500 kg/m$^3$). Then, we assemble a set of scenes with labeled grasp qualities: we randomly place objects in stable poses
86
+
87
+ [graphics]
88
+
89
+ Figure 5: Training and testing objects. The Blender rendering in (a) shows the 7 objects we use in data generation for computing grasps and rendering in various stable poses. Objects in (b) are real objects that are in-distribution with the training objects. We also test on the out-of-distribution objects shown in (c). To test grasping in clutter, we set up various test scenes with objects in and out of distribution, with examples shown in (d).
90
+
91
+ on a planar surface and analytically sample antipodal grasp closure axes based on mesh surface normals as in Dex-Net [37]. We use a soft point-contact model [38] and evaluate the probability of grasp success using wrench resistance [39], a common analytic measure of grasp success. This method is inexpensive to compute (0.02 s per grasp) and has high precision for measuring grasp success [40]. We densely sample 1000 collision-free grasps for each stable pose and use Blender to render the scenes.
92
+
93
+ Training: To train Rad-Net, we project sampled grasps onto the depth images and store at each pixel the maximum grasp confidence over all rotations, resulting in confidence heatmaps. We dilate and blur the heatmaps with a $3 \times 3$ kernel to smooth the predictions, and randomly augment both the depth images and the confidence heatmaps with translation, shear, and scale transformations. To train the rotation network, we sample crops around grasps with quality above 0.7 and use a cross-entropy loss on the output rotation probabilities.
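+ A small numpy/scipy sketch of how such per-pixel confidence targets could be assembled, assuming grasps have already been projected to integer pixel coordinates; the function name and exact kernel choices are illustrative.

```python
import numpy as np
from scipy import ndimage

def grasp_heatmap(grasp_pixels, grasp_quality, image_shape=(144, 256)):
    """Build a per-pixel grasp-confidence target for the location network.

    `grasp_pixels` is an (N, 2) array of (row, col) grasp projections and
    `grasp_quality` their analytic success probabilities; each pixel keeps
    the maximum quality over all grasp rotations landing on it.
    """
    heatmap = np.zeros(image_shape, dtype=np.float32)
    for (r, c), q in zip(grasp_pixels, grasp_quality):
        heatmap[r, c] = max(heatmap[r, c], q)
    heatmap = ndimage.grey_dilation(heatmap, size=(3, 3))   # dilate with 3x3 kernel
    heatmap = ndimage.uniform_filter(heatmap, size=3)       # blur to smooth targets
    return heatmap
```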
94
+
95
+ Grasp Planning: To execute a grasp from Rad-Net, we render a $144 \times 256$ depth image from NeRF at the camera pose used during dataset generation, using the ray transmittance truncation of Dex-NeRF [2]. We query the location network on this depth image to obtain a heatmap of grasp confidence over the image, then crop a patch of the depth image centered at the argmax of this heatmap. The rotation network takes this crop and outputs 8 grasp angle probabilities, and we take the weighted average of the argmax with its neighbors to produce the final grasp angle. We determine grasp depth by analyzing a local point cloud at the grasp location and subtracting a static 1.5 cm grasp depth from the highest point.
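+ The two-stage inference can be summarized in a short sketch; the network callables, crop size, and depth conventions below are illustrative assumptions, not the exact implementation.

```python
import numpy as np

def plan_grasp(depth, location_net, rotation_net, crop_size=32, n_angles=8):
    """Two-stage grasp inference: heatmap argmax, local crop, rotation bins."""
    heatmap = location_net(depth)                       # (H, W) grasp confidence
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    half = crop_size // 2
    patch = depth[max(r - half, 0): r + half, max(c - half, 0): c + half]
    probs = rotation_net(patch)                         # probabilities over n_angles bins
    k = int(np.argmax(probs))
    offsets = np.array([-1, 0, 1])                      # argmax bin and its neighbors
    nbr = (k + offsets) % n_angles
    w = probs[nbr] / probs[nbr].sum()
    angle = (k + float(np.sum(w * offsets))) * np.pi / n_angles
    grasp_depth = patch.min() + 0.015                   # 1.5 cm below the highest point
    return (r, c), angle, grasp_depth
```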
96
+
97
+ § 5 EXPERIMENTS
98
+
99
+ We evaluate the reliability of Evo-NeRF paired with Rad-Net vs. Dex-Net [35], evaluate the speed improvements from early-stopping captures and from Evo-NeRF reusing representations, and ablate aspects of the system, including the NeRF modifications and training on NeRF depth vs. ground-truth depth. We compare to Dex-Net to highlight the improvement in reliability gained from training on NeRF-rendered depth rather than ground-truth depth, and note that in Dex-NeRF [2], the NeRF model was trained for 1900x longer, on an offline, manually captured set of images with precisely calibrated poses from Colmap [41, 42]. This difference in view quality and training length under rapid capture results in a notable drop in raw Dex-Net grasp robustness because of lower-quality reconstructions.
100
+
101
+ § 5.1 PHYSICAL SETUP
102
+
103
+ We evaluate on a physical YuMi robot with a ZED Mini camera. The pose of the ZED relative to the arm that holds it is calibrated with a chessboard once before all experiments. We surround the robot with a kitchen-like workspace containing printed images of a countertop and shelves, where test objects are positioned near the center of the workspace. The workspace has 3 LED floodlights positioned across from the robot aiming at the workspace. We use one NVIDIA GeForce RTX 3080 GPU for NeRF training and grasp network inference. We evaluate on 9 different objects, both in distribution and out of distribution with respect to the train set in Fig. 5a.
104
+
105
+ Table 1: Single-object results: each cell shows the average over 27 trials. Full capture compares success after a complete camera trajectory, and early stop compares grasp success at the same early-stop point. Early stopping results in a 41% speed improvement with no drop in success rate for Rad-Net.
106
+
107
+ Method     Full Capture Success    Early Stop Success
+ Dex-Net    56%                     11%
+ Rad-Net    89%                     89%
+ Time       16 s                    9.5 s
121
+
122
+ [graphics]
123
+
124
+ Figure 6: Decluttering results. We report (a) histograms of objects remaining after each trial for Rad-Net and Dex-Net [35] and (b) the ratio of each method's capture time to the full capture time. Rad-Net extracts more objects than Dex-Net. Furthermore, reusing NeRF to update scene geometry rather than recapturing the scene retains the performance of Rad-Net while reducing capture time by 61%.
125
+
126
+ § 5.2 RAPID SINGLE OBJECT RETRIEVAL
127
+
128
+ We apply confidence-based early capture stopping (Sec. 4.1) with a threshold of 70% to execute a grasp as quickly as possible, as shown in Fig. 2. We place each of the 9 test objects near the center of the workspace, and report grasp success and the total time spent capturing images. We repeat each experiment 3 times, comparing Rad-Net with Dex-Net [35] and evaluating with and without early capture stopping. Since Dex-Net does not output grasp confidence, we use the same stopping point for both networks, as determined by Rad-Net. An experiment is successful if the robot grasps and places the object into the storage bin.
129
+
130
+ Table 1 summarizes the results. Using Rad-Net for early stopping results in a 41% reduction in capture time, with no drop in reliability. On average, the robot grasps objects within 9.5 seconds with an 89% success rate over 54 trials. Rad-Net outperforms Dex-Net in grasp success by 1.6x even with a full capture of the scene, as a result of its adaptation to NeRF density. Dex-Net performs poorly overall because of its sensitivity to NeRF depth noise, while Rad-Net's primary failure cases are on out-of-distribution objects, specifically missed grasps on the lightbulb and tape dispenser. This is likely because the training set has no small-profile items.
131
+
132
+ § 5.3 SEQUENTIAL DECLUTTERING
133
+
134
+ We evaluate on the decluttering task, where multiple transparent objects are placed in close proximity (2 cm) in stable poses on the table, and the robot must sequentially grasp and place all objects in the bin one by one, as shown in Fig. 1. We consider three tiers of experiments with different difficulties and two scenes per tier, resulting in a total of 6 different scenes (Fig. 5). We repeat each scene 3 times and compare against Dex-Net [35]. At the beginning of each experiment, the robot executes a full capture of the scene as shown in Fig. 1a. Then, after each consecutive grasp, the robot executes a much smaller hemispherical capture centered at the grasp location to update the NeRF (Fig. 1c). We allow only as many grasp attempts as there are objects in the scene. If a grasp planner generates a wrong grasp leading to a joint over-torque error, we terminate the experiment as a failure.
135
+
136
+ Results are summarized in Fig. 6, showing the number of objects remaining after each trial terminates and the capture time ratio. Overall, Evo-NeRF with Rad-Net clears 72% of test objects across all tiers, while Dex-Net clears 48%. Reusing the NeRF takes 39% of the capture time of running a full image capture for each grasp, which achieves similar clearing performance (76%). This suggests Evo-NeRF retains graspable geometry over successive updates, despite their short duration.
137
+
138
+ Table 2: Ablations of graspability regularizations. We query Dex-Net [35] continuously through camera capture trajectories and report the percent of the trajectory needed until the highest probability grasp is on an object. We compare vanilla Instant-NGP with Evo-NeRF, as well as ablating TV-loss and coarse ray sampling.
139
+
140
+                                Instant-NGP    Evo-NeRF -TV    Evo-NeRF -Coarse    Evo-NeRF
+ % Capture Until Grasp ↓        80.3%          64.8%           62.0%               52.6%
148
+
149
+ § 5.4 GRASPABILITY ABLATION
150
+
151
+ In this experiment, we ablate the changes made to NeRF that speed up geometry graspability. We capture 9 single-object and 3 multi-object scenes, then continuously train NeRF as during capture, using the same static images for each variant and holding all other hyperparameters constant. We measure the capture time needed until the first grasp output from Dex-Net [35] lands on a real object as a proxy for graspability convergence. Table 2 shows the percent of the capture trajectory needed, and Fig. 3 shows a timelapse of visual quality over a capture. Results suggest that the proposed method produces graspable geometry faster, with a 32% reduction in the capture time needed to grasp with Dex-Net.
152
+
153
+ § 5.5 NERF DEPTH VS GROUND TRUTH DEPTH
154
+
155
+ We quantitatively compare the effect of the distribution shift between NeRF-generated and ground-truth depth images on training grasp networks. We generate ground-truth depth images with pyrender [43]. We train an additional network, GT-Net, on ground-truth depth only. We also evaluate Dex-Net [35], which is trained on a much larger dataset with ground-truth depth images. We test on the held-out test set of NeRF-rendered depth images and report average grasp confidence using the soft point-contact model and wrench resistance. We normalize results with respect to the performance of GT-Net evaluated on ground-truth depth images of the same scenes.
156
+
157
+ GT-Net achieves 0%, Rad-Net achieves 42%, and Dex-Net achieves 0.1%, suggesting that there is a large distribution shift from training on ground-truth depth to testing on NeRF depth in simulation. Rad-Net's performance in simulated scenes is worse than in real scenes because the synthetic dataset contains fewer camera angles than the real scenes (52 vs. 80), resulting in more floaters.
158
+
159
+ § 6 CONCLUSION
160
+
161
+ In this work, we introduced Evo-NeRF, a method that progressively captures and trains NeRFs for practical robotic grasping. While its rapid image capture produces lower-quality reconstructions than prior work, we propose reusing trained weights in sequential grasping, geometry regularization, and continual training to obtain better 3D reconstructions. We further propose a novel Sim2Real training pipeline, in which we train grasp networks on NeRF-rendered depth images in simulated environments, and we find that these networks can predict high-quality grasps in the physical environment. In experiments, Evo-NeRF and Rad-Net can grasp transparent objects around 1900x faster than Ichnowski et al. [2], with 89% success on singulated objects.
162
+
163
+ § 6.1 LIMITATIONS AND FUTURE WORK
164
+
165
+ Rad-Net operates on rendered depth images, throwing away much of the rich 3D NeRF information, and may miss more robust grasps that a network trained on 3D geometry (e.g., VGN [44]) could find. Future work in this direction could also include optimization techniques that use the differentiability of NeRF's density to refine a grasp estimate with local gradient descent. Though we have shown that NeRF adapts well to geometry deletion, NeRF resists adding new geometry; thus, reusing weights will slow down training when objects are added between grasps. Future work on adapting NeRF to changing scenes would greatly improve the practicality of real-time usage.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/CC4JMO4dzg/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,215 @@
1
+ # Learning Preconditions of Hybrid Force-Velocity Controllers for Contact-Rich Manipulation
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Robots need to manipulate objects in constrained environments like shelves and cabinets when assisting humans in everyday settings like homes and offices. These constraints make manipulation difficult by reducing grasp accessibility, so robots need to use non-prehensile strategies that leverage object-environment contacts to perform manipulation tasks. To tackle the challenge of planning and controlling contact-rich behaviors in such settings, this work uses Hybrid Force-Velocity Controllers (HFVCs) as the skill representation and plans skill sequences with learned preconditions. While HFVCs naturally enable robust and compliant contact-rich behaviors, solvers that synthesize them have traditionally relied on precise object models and closed-loop feedback on object pose, which are difficult to obtain in constrained environments due to occlusions. We first relax HFVCs' need for precise models and feedback with our HFVC synthesis framework, then learn a point-cloud-based precondition function to classify whether HFVC executions will still be successful despite modeling inaccuracies. Finally, we use the learned precondition in a search-based task planner to complete contact-rich manipulation tasks in a shelf domain. Our method achieves a task success rate of 73.2%, outperforming the 51.5% achieved by a baseline without the learned precondition. While the precondition function is trained in simulation, it also transfers to a real-world setup without further fine-tuning.
12
+
13
+ Keywords: Contact-Rich Manipulation, Hybrid Force-Velocity Controllers, Precondition Learning
14
+
15
+ ## 1 Introduction
16
+
17
+ Robots operating in human environments, like homes and offices, need to manipulate objects in constrained environments like shelves and cabinets. These environments introduce challenges in both action and perception. Environmental constraints reduce grasp accessibility, so robots must use non-prehensile motions that leverage object-environment contacts to perform manipulation tasks. For example, a book placed in the corner of a shelf has no collision-free antipodal grasps, but a robot can retrieve the book by first pivoting or sliding the book to reveal a grasp. Environmental constraints also introduce occlusions both before and during robot-object interaction, making precise object modeling and closed-loop vision-based feedback impractical.
18
+
19
+ In this work, we tackle these challenges by first choosing hybrid force-velocity controllers (HFVCs) that use force control to maintain robot-object contacts and velocity control to achieve the desired motion. Prior works in HFVCs require accurate object models, object trajectories, known contact modes, and closed-loop feedback of object pose and contacts. Our work relaxes these requirements by allowing HFVC synthesis and execution in more realistic settings with incomplete models, partial observations, and without closed-loop object feedback. While HFVCs are naturally robust to small modeling errors and collisions, large model mismatches inherent in constrained manipulation may lead to unsuccessful motions. To address this, we learn a precondition function that predicts when an HFVC execution will or will not be successful. The precondition function is a neural network that takes as input segmented point clouds of the scene and the HFVC skill parameters. It is trained entirely in simulation, and using point clouds without color information enables easier sim-to-real transfer. It filters for successful HFVC actions, and we use it in an online search-based task planner to reactively plan sequences of HFVC and Pick-and-Place skills. In our experiments, our approach allows a robot to slide, topple, push, and pivot objects as needed to manipulate cuboid and cylindrical objects in an occluded shelf environment. See Figure 1 for an overview of the proposed approach. For supplementary materials and videos, visit https://sites.google.com/view/constrained-manipulation/.
20
+
21
+ The main contributions of our paper are: 1) A compliant manipulation skill built on an HFVC synthesis framework that relaxes previous requirements on object and environment modeling; by generating different parameters, this skill can achieve diverse motions like pushing, pivoting, and sliding. 2) A point-cloud-based precondition function for this HFVC skill that predicts whether an HFVC execution with the given parameter and current observations would be successful, as some executions fail due to inaccurate modeling. 3) A search-based planner that uses the HFVC skill and a Pick-and-Place skill to complete contact-rich manipulation tasks in an occluded shelf domain.
22
+
23
+ ## 2 Related Works
24
+
25
+ Nonprehensile Manipulation. Many works have studied planning dexterous contact-rich manipulation. If we know the object's geometric and dynamics models and relevant contacts, then traditional methods can synthesize controllers that lead to complex behaviors like rolling and slipping [1]. Recent works make planning multi-step nonprehensile manipulation more efficient by directly optimizing with contact modes [2, 3, 4, 5, 6, 7, 8]. Some [7, 8] leverage the formulation of HFVCs to effectively model contact-rich behavior. Other works focus on pushing, in part because the planar constraints of table-top setups reduce the number of possible object movements and robot actions. The authors of [9] take a trajectory optimization perspective and optimize for time to push an object to a goal. The authors of [10] explore pushing a target object in clutter while allowing some objects in the clutter to move, even if they are not the target object. These model-based methods are often limited by strict requirements for mathematical modeling of objects and environments. In [11], the authors relax assumptions on object models by jointly estimating and controlling contact configurations for 2D polygons on a flat surface with a single point contact. However, like previous works, they lack high-level policies that can switch between different nonprehensile strategies to complete multi-step tasks [12].
26
+
27
+ Learning approaches can relax modeling assumptions for nonprehensile manipulation. Some works learn dynamics models [13, 14], while others directly learn policies [15, 16, 17]. These works also make additional assumptions on objects and environments, such as uniform objects and full state information, and they do not address planning with different types of nonprehensile strategies.
28
+
29
+ Pregrasp Manipulation. Another area of related work is pregrasp manipulation, where a robot must perform nonprehensile motions not to directly manipulate an object but to enable future grasps. A common setting is grasping in clutter, where a target object must first be singulated before it can be grasped. Singulation is usually done with a pushing policy to maximize downstream grasp success, and it can be hardcoded [18], planned [19], or learned [20, 21, 22, 23]. In singulation, objects that prevent access to a target object can be pushed away. In constrained environments, it is the unmovable environment, like tables or shelf walls, that prevents grasping. In this context, many works study how to push an object on a table surface over the table edge to expose a grasp. Some works plan with known object models [24, 25, 26], learned dynamics [27], or explicit surface constraints [28], and some directly learn an RL policy [29, 30]. Recent work also proposes online adaptation of object dynamics, like mass and center of mass, to plan for table-edge grasps [31]. While these works show how nonprehensile manipulation can enable downstream grasps, they typically cannot plan with multiple types of nonprehensile behaviors. Furthermore, they are only concerned with obtaining a grasp, and not object manipulation in general, which sometimes may not require any grasps.
30
+
31
+ ![01963f11-9ebb-7bb6-bc83-260340ff0fec_2_371_212_1058_310_0.jpg](images/01963f11-9ebb-7bb6-bc83-260340ff0fec_2_371_212_1058_310_0.jpg)
32
+
33
+ Figure 1: Overview of our approach that manipulates objects in constrained environments by planning with Hybrid Force-Velocity Controllers (HFVCs). Given partial point clouds, we first estimate object-environment segmentations and object and environment geometries. These are used to generate HFVC parameters, which are object subgoal poses and robot-object contact points. Due to model and feedback inaccuracies, not all generated parameters will lead to successful HFVC executions. As such, we learn the HFVC precondition, which takes as input segmented point clouds and skill parameters and predicts skill success. The planner uses the subgoal poses from the successful parameters to find the current best action, which is a (skill, parameter) tuple. The robot then executes this action, and replanning is done as needed.
34
+
35
+ ## 3 Method
36
+
37
+ Problem Definition and Assumptions. We tackle the problem of moving rigid objects from a start to a goal pose in constrained environments by a robot arm. The start and goal poses may be in different stable poses, and the object may not have any collision-free antipodal grasps at these poses. There is only one movable object in the environment. For actions, the robot arm can perform joint torque control. For perception, we assume access to an end-effector force-torque sensor and partial point clouds from an RGB-D camera, from which we can estimate segmentation masks and object poses. For skill parameter generation, we assume object geometries are similar to known geometric primitives, and that we can estimate these primitive shapes and environmental constraints from the segmented point clouds. Lastly, we assume object dynamic properties are within a reasonable range that enables nonprehensile manipulation by our robot arm.
38
+
39
+ Approach Overview. At the high level, the inputs to our system are partial point clouds that represent the current observation, and the outputs are actions represented as parameterized skills. Our approach has three main components. The first is synthesizing HFVCs that allow for compliant execution with inaccurate object models and feedback. The second is learning the HFVC skill precondition to filter out motions that will likely fail due to modeling mismatches. The third is using the learned precondition to plan sequences of skills for object manipulation tasks in constrained environments. The planner uses subgoals from the skill parameters as the skill-level transition model. The planner replans if the reached state deviates a lot from the subgoal. See Figure 1.
40
+
41
+ Parameterized Skill Formulation. We follow the options formulation of skills [32, 33]. We denote a parameterized skill as $o$ with parameters $\theta \in \Theta$. In our work, a parameterized skill $o$ has five elements: a parameter generator that generates both feasible and infeasible $\theta$, a precondition function that classifies skill success given current observations and skill parameters, a controller that performs the skill, a termination condition that tells when the skill should stop, and a skill-level dynamics model that predicts the next state after skill execution terminates. We assume the parameter generator, controller, and termination conditions are given. We assume skills have subgoal parameters, which contain information about the next world state if skill execution is successful. For example, an HFVC skill parameter will contain the desired object pose, and the planner assumes the object will reach the desired pose if 1) that parameter satisfies preconditions and 2) the HFVC commands are computed using that parameter.
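One way to make the five skill elements concrete is a small container like the following sketch; field types are simplified callables over generic observations and do not reflect the actual codebase structure.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class ParameterizedSkill:
    """Container for the five skill elements described above (a sketch)."""
    generate_parameters: Callable[[Any], Iterable[Any]]   # feasible and infeasible θ
    precondition: Callable[[Any, Any], float]             # P(success | obs, θ)
    controller: Callable[[Any, Any], None]                # executes the skill
    is_terminated: Callable[[Any], bool]                  # termination condition
    predict_next_state: Callable[[Any, Any], Any]         # skill-level dynamics (subgoal)
```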
42
+
43
+ ### 3.1 Hybrid Force Velocity Controller Skill
44
+
45
+ The HFVC skill moves an object using a given skill parameter, which contains the initial object pose, the desired object pose and the robot-object contact(s). To achieve the desired object motions, we use an optimization-based solver to output a sequence of hybrid force and velocity commands for the robot to follow. Below, we explain how parameters are generated for both precondition learning and planning, how HFVC commands are synthesized and executed from these parameters, and how we learn the precondition function to classify successful parameters.
46
+
47
+ ![01963f11-9ebb-7bb6-bc83-260340ff0fec_3_312_206_1174_378_0.jpg](images/01963f11-9ebb-7bb6-bc83-260340ff0fec_3_312_206_1174_378_0.jpg)
48
+
49
+ Figure 2: Our HFVC parameter generator gives diverse contact-rich behaviors. Each column is an HFVC skill execution. Top: initial states. Bottom: final states. Left to right: sliding, toppling, pivoting, and pushing.
50
+
51
+ HFVC Skill Parameter Generation. The HFVC skill parameter contains the initial and subgoal object poses and the robot-object contact point(s). We generate two types of subgoal poses: ones that are in the same stable pose as the current pose, and ones that are in "neighboring" stable poses (i.e., toppled by $90^\circ$). Initial robot-object contact points come from evenly spaced points on the surface of object primitives. We filter out contact points at which the robot would collide with the environment either for the initial or subgoal object poses. This parameter generation scheme can generate a diverse range of 3D behaviors such as pushing, sliding, pivoting, and toppling. See Figure 2.
52
+
53
+ We assume known geometric primitives (cuboids and cylinders) for parameter generation, but this does not significantly affect the focus of this work. One reason is that many real-world objects resemble cuboids and cylinders, especially in interacting with constrained environments like shelves and cabinets. Another is that this assumption is only made for parameter generation (the precondition directly takes in point clouds), and it can still generate a wide range of parameters with only some satisfying preconditions. As such, the object primitive assumption does not overly limit the types of behaviors the HFVC skill can achieve, and the parameter generator is useful even if the real object cannot be perfectly represented by the primitives.
54
+
55
+ HFVC Synthesis from Subgoals. Our controller synthesis algorithm generates hybrid force-velocity commands for the robot to execute. Initially, the algorithm needs the current object pose, desired object pose, initial estimated object-environment contacts, and initial robot-object contact point. During HFVC execution, HFVC commands are computed at a frequency of 20 Hz, and this procedure only needs the current estimated object pose (see next section).
56
+
57
+ The force and velocity commands are computed by minimizing object pose errors. This minimization is done subject to velocity constraints as well as contact force constraints imposed by the fixed contact modes. There are three steps in this algorithm. First, we use quadratic programming to optimize the desired hand velocity subject to contact mode constraints, where the objective is matching the hand velocity with the desired object velocity from the current to the goal pose. Second, we compute the force control direction, which is chosen to be as close as possible to the robot-object contact normal while being as orthogonal as possible to the desired hand velocity direction. Third, we solve for the force control magnitude by trying to maintain a small amount of normal contact forces on every non-separating environment contact under the quasi-dynamic assumption. If the robot-object contact normal is parallel to the desired hand velocity direction, we only do velocity control (this means the robot is pushing the object). Please see more details in the Appendix.
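As an illustration of the second step, the force-control direction for a single robot-object contact can be obtained by removing from the contact normal its component along the commanded hand velocity; a numpy sketch under that simplifying single-contact assumption:

```python
import numpy as np

def force_control_direction(contact_normal, hand_velocity, eps=1e-8):
    """Direction close to the contact normal but orthogonal to the commanded
    hand velocity (sketch of step 2 for one contact)."""
    n = contact_normal / np.linalg.norm(contact_normal)
    v_norm = np.linalg.norm(hand_velocity)
    if v_norm < eps:
        return n                              # no velocity command: push along the normal
    v = hand_velocity / v_norm
    f = n - np.dot(n, v) * v                  # remove the component along the velocity
    if np.linalg.norm(f) < eps:
        return None                           # normal parallel to velocity: velocity control only
    return f / np.linalg.norm(f)
```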
58
+
59
+ HFVC Execution with Partial Information. Computing new HFVC commands during execution requires object pose feedback, which cannot be directly obtained due to occlusions in constrained
60
+
61
+ environments. As such, we estimate object poses from robot proprioception, and we constrain HFVC velocity commands to prevent inaccuracies in this estimation from drastically altering execution behavior. See Figure 3 for an illustration.
62
+
63
+ To estimate the current object pose, we first linearly interpolate two trajectories - one for the object from the initial pose to the subgoal pose, and another for the end-effector from the initial contact pose to the final contact pose. During execution, the estimated object pose is the one from the interpolated object trajectory that corresponds to the interpolated end-effector position closest to the current end-effector position.
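A minimal sketch of this proprioception-based lookup, assuming both interpolated trajectories have been precomputed as arrays of matching length:

```python
import numpy as np

def estimate_object_pose(ee_pos, ee_waypoints, obj_waypoints):
    """Return the interpolated object pose whose index matches the
    interpolated end-effector position nearest to the measured one.

    ee_pos: (3,); ee_waypoints: (K, 3); obj_waypoints: length-K sequence of
    object poses along the linearly interpolated object trajectory.
    """
    dists = np.linalg.norm(ee_waypoints - ee_pos[None, :], axis=1)
    return obj_waypoints[int(np.argmin(dists))]
```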
64
+
65
+ ![01963f11-9ebb-7bb6-bc83-260340ff0fec_4_1018_310_459_298_0.jpg](images/01963f11-9ebb-7bb6-bc83-260340ff0fec_4_1018_310_459_298_0.jpg)
66
+
67
+ Figure 3: HFVC execution with partial information, illustrated with an example pivoting motion (the start pose is the object on the left). Gray objects represent linearly interpolated object poses used by proprioception-based object pose estimation. Object translation is exaggerated for visual clarity. The blue arrow represents the direction of the force command, the purple arrow the velocity command. Velocity commands are projected onto the (green) plane containing the interpolated end-effector trajectory.
68
+
69
+ HFVC is intrinsically robust to pose estimation errors in the direction of the force controller, which moves the end-effector to maintain the desired force and robot-object contact. This is not the case for errors in the direction of the velocity controller, which will keep moving as if the object is still on the interpolated trajectory, leading to execution failure. To alleviate this problem, we project velocity controls onto the plane that contains the interpolated end-effector trajectory, preventing the end-effector from traveling too far from the intended motion. Note this is not the same as projecting onto the interpolated end-effector trajectory, which would overly constrain HFVC actions and may conflict with force controller commands.
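The projection itself is a one-line operation; a numpy sketch, assuming the plane is represented by its unit normal:

```python
import numpy as np

def project_velocity_to_plane(v_cmd, plane_normal):
    """Remove the component of the commanded velocity along the plane normal,
    keeping the command inside the plane of the interpolated trajectory."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return v_cmd - np.dot(v_cmd, n) * n
```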
70
+
71
+ ### 3.2 Learning HFVC Preconditions
72
+
73
+ Due to errors in visual perception (e.g. noisy point clouds), real-time feedback (noisy end-effector force-torque sensing, inaccurate object pose estimations), controls (e.g. HFVC solver does not take into account how the low-level controller actually achieves the commanded velocities and forces), and robot-object contacts (they are often non-sticking in practice), HFVC executions are not always successful. This motivates learning the HFVC skill precondition. The inputs to the precondition are segmented point clouds and the skill parameter. The output is whether or not executing the HFVC skill at the given state with the given parameter will be successful.
74
+
75
+ Precondition Success Definition. An HFVC skill execution is considered successful if it satisfies three conditions: 1) the object moved more than 1.5 cm or 20° after skill execution, 2) the final pose is within 7 cm and 60° of the subgoal pose, and 3) the object does not move after the end-effector leaves contact. Since HFVC executions with model mismatches rarely reach exactly the intended subgoals, having a loose subgoal vicinity requirement allows the planner to execute more HFVC skills and make planning feasible.
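These three conditions translate directly into a labeling function; a sketch that takes precomputed pose differences (the argument layout is an assumption):

```python
def hfvc_execution_success(move_dist, move_angle_deg,
                           subgoal_dist, subgoal_angle_deg,
                           moved_after_release):
    """Label one HFVC execution (distances in meters, angles in degrees).

    `move_*` compare the final object pose to the initial pose, `subgoal_*`
    compare it to the subgoal pose, and `moved_after_release` flags motion
    after the end-effector leaves contact.
    """
    moved_enough = (move_dist > 0.015) or (move_angle_deg > 20.0)
    near_subgoal = (subgoal_dist < 0.07) and (subgoal_angle_deg < 60.0)
    return moved_enough and near_subgoal and not moved_after_release
```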
76
+
77
+ HFVC Data. The precondition is trained with HFVC execution data generated in simulation with cuboid and cylindrical-shaped objects from the YCB dataset [34]. To improve data diversity, we randomize object geometries by sampling non-uniform scales along object principal axes and randomly setting object mass and friction values. The ranges for both scaling and dynamics values are chosen so that the resulting object is feasible for manipulation in our shelf domain. See Figure 4 for a visualization of the objects used. We also randomly sample the environment shelf dimensions, as well as the initial object pose and stable pose. From simulation, we obtain ground-truth segmented point clouds and object poses after skill execution, the latter of which are used to compute ground-truth skill preconditions. We use Nvidia IsaacGym, a GPU-accelerated robotics simulator.¹
78
+
79
+ Model Architecture. The HFVC precondition is a neural network with a PointNet++ [35] backbone. The input point cloud is centered and cropped around the midpoint between the initial object position and the subgoal object position. This improves data efficiency, as the network only has to reason about environment points relevant to object-environment interactions. The features of each point include its 3D coordinate as well as the segmentation label. For object points, we append the skill parameter as additional features; these parameter dimensions are set to all 0's for the environment points. The PointNet++ backbone produces per-point embeddings. We take the mean of the point-wise embeddings corresponding to the object points to produce a global object embedding. This embedding is passed to a multi-layer perceptron (MLP) to produce the final precondition prediction. The entire network is trained with a binary cross-entropy loss.
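A simplified PyTorch sketch of this architecture, assuming a `backbone` module (e.g., a PointNet++ implementation) that maps per-point features to per-point embeddings; the feature layout and hidden sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HFVCPrecondition(nn.Module):
    """Sketch: per-point features -> point-cloud backbone -> mean-pool over
    object points -> MLP -> success logit. `backbone` is assumed to map
    (B, N, F) per-point features to (B, N, embed_dim) embeddings."""

    def __init__(self, backbone, embed_dim=128):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, xyz, seg, params):
        # xyz: (B, N, 3) points; seg: (B, N) with 1 = object point; params: (B, P)
        seg = seg.float().unsqueeze(-1)                        # (B, N, 1)
        param_feat = params.unsqueeze(1).expand(-1, xyz.shape[1], -1) * seg
        feats = torch.cat([xyz, seg, param_feat], dim=-1)      # zero params on env points
        per_point = self.backbone(feats)                       # (B, N, embed_dim)
        obj_embed = (per_point * seg).sum(1) / seg.sum(1).clamp(min=1.0)
        return self.head(obj_embed)                            # logit; train with BCE-with-logits
```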
80
+
81
+ ---
82
+
83
+ ¹ https://developer.nvidia.com/isaac-gym
84
+
85
+ ---
86
+
87
+ ![01963f11-9ebb-7bb6-bc83-260340ff0fec_5_339_203_1119_277_0.jpg](images/01963f11-9ebb-7bb6-bc83-260340ff0fec_5_339_203_1119_277_0.jpg)
88
+
89
+ Figure 4: Object-in-Shelf Task Domain. Left: cuboid and cylindrical YCB objects used in simulation for training the HFVC precondition. Middle: objects used in real-world experiments. Right: real-world setup.
90
+
91
+ ### 3.3 Search-Based Task Planning
92
+
93
+ Once the precondition is learned, we can use it along with the skill parameter generator to plan for tasks. We perform task planning in a search-based manner on a directed graph, where each node of the graph corresponds to a planning state, and each directed edge corresponds to a (skill, parameter) whose execution from the source state would result in the next state. In our domain, a task is specified by an initial and a goal object pose, and the planning state is the object pose. For parameter generation, if the goal pose is close enough to the current pose, then it is included in the list of subgoal poses, so the planner can find plans that take the object directly to the goal pose. The output of the planner is a sequence of parameterized skills to be executed from the initial state.
94
+
95
+ We interleave planning graph construction with graph search, and search is performed in a best-first manner, similar to [36], which uses ${\mathrm{A}}^{ * }$ to perform task planning. However, it is difficult to efficiently perform A*-style optimal planning in our domain. This is due to several factors: 1) inaccurate transition models, because we directly use subgoals as the next state; 2) expensive edge evaluation, because node generation requires storing new point clouds for future precondition inference; and 3) a high branching factor, with many possible skill parameters at a given state. As such, we instead use Real-Time A* [37], where A* is performed until a search budget is exhausted or a path to the goal is found. Then, the robot executes the first (skill, parameter) tuple of the path that reaches the best leaf node found so far. After skill execution, if the observed next state is not close enough to the expected next state, or we have reached the end of the current plan, we replan.
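+
+ The planning loop can be illustrated with a budgeted best-first search that returns only the first action of the best partial path (our schematic sketch, not the authors' implementation; `expand` is assumed to yield only parameters that pass the learned precondition):
+
+ ```python
+ import heapq, itertools
+
+ def real_time_plan(start_state, goal, expand, heuristic, budget=500):
+     """expand(state) yields (skill, param, next_state, cost); heuristic(state, goal)
+     estimates the remaining cost. Returns the first (skill, param) to execute."""
+     tie = itertools.count()
+     frontier = [(heuristic(start_state, goal), next(tie), 0.0, start_state, None)]
+     best_h, best_action = float("inf"), None
+     while frontier and budget > 0:
+         f, _, g, state, first_action = heapq.heappop(frontier)
+         h = heuristic(state, goal)
+         if first_action is not None and h < best_h:
+             best_h, best_action = h, first_action
+         if h < 1e-3:                      # goal region reached: commit to this path's first step
+             return first_action
+         budget -= 1
+         for skill, param, nxt, cost in expand(state):
+             act = first_action if first_action is not None else (skill, param)
+             heapq.heappush(frontier, (g + cost + heuristic(nxt, goal),
+                                       next(tie), g + cost, nxt, act))
+     return best_action                    # executed, then the robot replans as needed
+ ```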
96
+
97
+ ## 4 Experiments
98
+
99
+ Our experiments focus on evaluating our approach in a shelf domain, where a robot needs to move an object from a start pose to a goal pose. We perform three types of evaluations. The first gauges how much proprioception-based object pose estimation and velocity command projections improve HFVC execution success. The second evaluates HFVC skill precondition training performance. The third evaluates overall task performance by running the planner with the learned HFVC skill precondition. A key comparison we make is between planning with and without the learned precondition function. Lastly, we demonstrate that our planner and learned precondition can be applied to a real-world robot setup without further model fine-tuning.
100
+
101
+ Task Domain. Our task domain consists of a 7 DoF Franka Panda robot arm, a rectangular shelf, and a set of objects for the robot to manipulate. Each task instance is specified by a different
102
+
103
+ <table><tr><td/><td>Skill Success</td><td>SG-ADD (cm)</td><td>S-SG-ADD (cm)</td></tr><tr><td>Ours</td><td>52.3%</td><td>${6.9} \pm {8.1}$</td><td>${3.4} \pm {1.9}$</td></tr><tr><td>No-Feedback</td><td>38.6%</td><td>${10.2} \pm {10.3}$</td><td>${3.9} \pm {1.9}$</td></tr><tr><td>No-Constraint</td><td>46.0%</td><td>${7.6} \pm {8.1}$</td><td>${3.6} \pm {2.1}$</td></tr><tr><td>No-Both</td><td>38.3%</td><td>${10.3} \pm {10.3}$</td><td>${3.8} \pm {1.9}$</td></tr></table>
104
+
105
+ Table 1: Skill execution evaluations for our approach vs. ablations that do not use proprioception-based object pose estimation and planar velocity constraints. We report skill success rate and error distance between the subgoal pose and the actual reached pose over all executions (SG-ADD) and over just the successful ones (S-SG-ADD). Numbers after $\pm$ are standard deviations.
106
+
107
+ <table><tr><td/><td>Ours</td><td>Full-PC</td><td>GT-Primitive</td><td>No-Params</td></tr><tr><td>Accuracy</td><td>79%</td><td>76%</td><td>85%</td><td>65%</td></tr><tr><td>Precision</td><td>78%</td><td>73%</td><td>85%</td><td>60%</td></tr><tr><td>Recall</td><td>77%</td><td>79%</td><td>83%</td><td>79%</td></tr></table>
108
+
109
+ Table 2: Precondition training results with different input representations.
110
+
111
+ object geometry, shelf dimensions, shelf pose, initial object pose, and goal object pose. The robot successfully completes the task if it executes a sequence of (skill, parameter) tuples that brings the object from the initial pose to the goal pose, with a goal threshold of ${1.5}\mathrm{\;{cm}}$ for translation and ${10}^{ \circ }$ for rotation. In addition to the HFVC skill, the planner also has access to a Pick-and-Place skill that can move objects via grasps if grasps are available.
112
+
113
+ ### 4.1 HFVC Execution with Partial Information.
114
+
115
+ We first demonstrate the value of estimating object pose from proprioception and applying planar constraints on velocity commands. One ablation runs HFVC with "open-loop" pose estimation, where the current object pose is indexed from the interpolated trajectory with a time-based index (No-Feedback); this assumes the object follows the interpolated trajectory at a fixed speed. Another runs HFVC with proprioceptive feedback but without the planar velocity constraints (No-Constraint). The third variant runs HFVC without both modifications (No-Both). We report the execution success rate and the object's average discrepancy distance (ADD) between the achieved and subgoal poses, computed over all executions (SG-ADD) and over only the successful executions (S-SG-ADD). A total of ${7.5}\mathrm{k}$ skill executions per method were used in these evaluations. See results in Table 1. Our approach achieves a 52.3% success rate, higher than all ablations, with removing proprioception-based pose feedback (No-Feedback, 38.6%) causing the largest drop. These results demonstrate that both proprioceptive object pose feedback and planar velocity constraints improve HFVC skill executions.
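+
+ For reference, an ADD-style pose error between the reached and subgoal poses can be computed as below (a small sketch under our own conventions: homogeneous 4x4 poses and model points sampled in the object frame):
+
+ ```python
+ import numpy as np
+
+ def average_distance(points, T_reached, T_subgoal):
+     """Mean distance between model points transformed by the two poses."""
+     homog = np.hstack([points, np.ones((len(points), 1))])       # (N, 4)
+     diff = (homog @ T_reached.T - homog @ T_subgoal.T)[:, :3]    # (N, 3)
+     return np.linalg.norm(diff, axis=1).mean()
+ ```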
116
+
117
+ ### 4.2 Precondition Learning
118
+
119
+ We generate the HFVC execution dataset with 6 cuboid-shaped and 2 cylindrical-shaped objects from the YCB dataset (see Figure 4). The train-test split is done across object scaling factors, with the test set containing the smallest and largest scales. To see how the network would perform with full instead of partial observations, we include one ablation that uses full point clouds (Full-PC) and another that uses vertices of ground-truth object primitive meshes (GT-Primitive). We also train a variant that does not use skill parameter features (No-Params). See Table 2. Our method using partial point clouds is on par with Full-PC, and GT-Primitive performs the best. However, we cannot directly use GT-Primitive in planning because we only have access to partial point clouds; instead, we include a comparison using estimated object geometries in the task planning experiments below. No-Params performs the worst as expected, but it still improves over random guessing due to biases in our collected data.
120
+
121
+ ### 4.3 Task Planning Experiments
122
+
123
+ We evaluate task planning performance across several ablations in simulation. For each method, we run trials across the 8 YCB objects and 8 task scenarios, with 5 trials per object-scenario pair, resulting in a total of 240 trials per method. Each trial samples different initial and goal poses, and the object geometries used in task evaluation are not in the training set. A task scenario specifies whether or not the initial object pose and goal pose are close to the shelf wall (4 variants) and whether they have the same stable pose (2 variants), for a total of 8 scenarios. We report the overall task success rate, as well as the average planning time and plan length over successful trials.
124
+
125
+ <table><tr><td/><td>Ours</td><td>Est-Primitive</td><td>No-PC</td><td>Only-Pick-Place</td><td>No-Replan</td></tr><tr><td>Plan Success</td><td>73.2%</td><td>61.1%</td><td>51.5%</td><td>27.4%</td><td>28.0%</td></tr><tr><td>Plan Time (s)</td><td>${43.1} \pm {23.0}$</td><td>${44.5} \pm {26.5}$</td><td>${36.5} \pm {20.3}$</td><td>${6.7} \pm {11.6}$</td><td>${22.4} \pm {13.1}$</td></tr><tr><td>Plan Length</td><td>${3.5} \pm {1.3}$</td><td>${3.2} \pm {1.5}$</td><td>${3.6} \pm {1.5}$</td><td>${1.6} \pm {0.8}$</td><td>${1.9} \pm {1.0}$</td></tr></table>
126
+
127
+ Table 3: Task performance of our approach using partial point clouds vs. using estimated object primitives (Est-Primitive), not using learned preconditions (No-PC), only using Pick-and-Place (Only-Pick-Place), and not doing replanning (No-Replan). Plan time includes replanning time. Numbers after $\pm$ are standard deviations. Plan time and length statistics are computed only over successful trials.
128
+
129
+ The first ablation is on using vertices from estimated object primitives (Est-Primitive) as the input to the precondition, instead of partial point clouds. The second is planning without the learned precondition, so the planner treats all generated parameters as feasible (No-PC). The third evaluates the usefulness of the HFVC skill in our domain by planning only with the Pick-and-Place skill (Only-Pick-Place). The last ablation does not replan and executes the entire found path from the initial state without feedback (No-Replan). For this method, we double the planning budget to 60s, so the planner is more likely to find a path all the way to the goal state.
130
+
131
+ See Table 3. The proposed approach achieves a success rate of 73.2%, higher than Est-Primitive (61.1%) and No-PC (51.5%), and much higher than Only-Pick-Place (27.4%) and No-Replan (28.0%). The drop in performance of Est-Primitive shows that while using ground-truth primitives gives better precondition predictions, this improvement does not carry over when primitives must be estimated from partial point clouds. The other ablations show the importance of learning HFVC preconditions, of using HFVC skills in constrained environments, and of replanning to compensate for inaccurate subgoal transition models.
132
+
133
+ Real-world Demonstration. Lastly, we demonstrate that our planner with the learned precondition can operate in the real world. While the learned precondition works on real-world point clouds without further training, we had to tune low-level controller gains to reproduce similar contact-rich motions on the real robot. As in simulation, our planner is able to find plans that include a variety of contact-rich behaviors, like pushing, sliding, toppling, and pivoting, to manipulate objects on the shelf. Please see the supplementary materials for more details and real-world videos.
134
+
135
+ Failure Modes and Limitations. The most common failure mode is precondition errors that lead the planner to find infeasible plans or to not find a plan when one exists. This may be addressed by further improving precondition performance with more data, or by enforcing domain-specific invariances. In addition, the object primitive assumption prevents the parameter generator from supporting more complex objects; this may be resolved by learning a parameter generator that operates directly on partial point clouds. For execution, our planner does not perform online precondition adaptation for unexpected object dynamics; doing so may improve task performance for objects with out-of-distribution dynamics parameters. Lastly, our method only handles a single movable object in the scene. Manipulating multiple objects may require learned perception systems and skill-level dynamics models that work in clutter.
136
+
137
+ ## 5 Conclusion
138
+
139
+ Contact-rich manipulation behaviors can be naturally expressed by HFVCs. However, HFVCs have traditionally relied on precise object models and closed-loop object feedback, hindering their applications in more realistic settings where such information cannot be readily obtained. Our work shows that it is possible to 1) modify HFVCs to not rely on privileged information and 2) learn where HFVCs are successful despite inaccurate models so that 3) a planner can plan sequences of HFVC and Pick-and-Place skills to complete challenging contact-rich tasks in constrained environments.
140
+
141
+ References
142
+
143
+ [1] K. M. Lynch and M. T. Mason. Dynamic nonprehensile manipulation: Controllability, planning, and experiments. The International Journal of Robotics Research, 18(1):64-92, 1999.
144
+
145
+ [2] I. Mordatch, E. Todorov, and Z. Popović. Discovery of complex behaviors through contact-invariant optimization. ACM Transactions on Graphics (TOG), 2012.
146
+
147
+ [3] J. Z. Woodruff and K. M. Lynch. Planning and control for dynamic, nonprehensile, and hybrid manipulation tasks. In International Conference on Robotics and Automation (ICRA), 2017.
148
+
149
+ [4] N. Doshi, F. R. Hogan, and A. Rodriguez. Hybrid differential dynamic programming for planar manipulation primitives. In International Conference on Robotics and Automation, 2020.
150
+
151
+ [5] F. R. Hogan and A. Rodriguez. Reactive planar non-prehensile manipulation with hybrid model predictive control. The International Journal of Robotics Research, 39(7):755-773, 2020.
152
+
153
+ [6] B. Aceituno-Cabezas and A. Rodriguez. A global quasi-dynamic model for contact-trajectory optimization. In Robotics: Science and Systems (RSS), 2020.
154
+
155
+ [7] Y. Hou and M. T. Mason. An efficient closed-form method for optimal hybrid force-velocity control. In International Conference on Robotics and Automation (ICRA), 2021.
156
+
157
+ [8] X. Cheng, E. Huang, Y. Hou, and M. T. Mason. Contact mode guided motion planning for quasidynamic dexterous manipulation in 3d. arXiv:2105.14431, 2021.
158
+
159
+ [9] P. Acharya, K.-D. Nguyen, H. M. La, D. Liu, and I.-M. Chen. Nonprehensile manipulation: a trajectory-planning perspective. Transactions on Mechatronics, 2020.
160
+
161
+ [10] D. M. Saxena, M. S. Saleem, and M. Likhachev. Manipulation planning among movable obstacles using physics-based adaptive motion primitives. arXiv:2102.04324, 2021.
162
+
163
+ [11] N. Doshi, O. Taylor, and A. Rodriguez. Manipulation of unknown objects via contact configuration regulation. arXiv:2203.01203, 2022.
164
+
165
+ [12] F. Ruggiero, V. Lippiello, and B. Siciliano. Nonprehensile dynamic manipulation: A survey. IEEE Robotics and Automation Letters, 3(3):1711-1718, 2018.
166
+
167
+ [13] K. Kutsuzawa, S. Sakaino, and T. Tsuji. Sequence-to-sequence model for trajectory planning of nonprehensile manipulation including contact model. RA-L, 2018.
168
+
169
+ [14] L. Pinto, A. Mandalika, B. Hou, and S. Srinivasa. Sample-efficient learning of nonprehensile manipulation policies via physics-based informed state distributions. arXiv:1810.10654, 2018.
170
+
171
+ [15] K. Lowrey, S. Kolev, J. Dao, A. Rajeswaran, and E. Todorov. Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system. In International Conference on Simulation, Modeling, and Programming for Autonomous Robots, 2018.
172
+
173
+ [16] W. Yuan, J. A. Stork, D. Kragic, M. Y. Wang, and K. Hang. Rearrangement with nonprehensile manipulation using deep reinforcement learning. In International Conference on Robotics and Automation (ICRA), 2018.
174
+
175
+ [17] M. Sharma, J. Liang, J. Zhao, A. LaGrassa, and O. Kroemer. Learning to compose hierarchical object-centric controllers for robotic manipulation. Conference on Robot Learning, 2020.
176
+
177
+ [18] M. Danielczuk, J. Mahler, C. Correa, and K. Goldberg. Linear push policies to increase grasp access for robot bin picking. In International Conference on Automation Science and Engineering (CASE), 2018.
178
+
179
+ [19] M. R. Dogar and S. S. Srinivasa. Push-grasping with dexterous hands: Mechanics and a method. In International Conference on Intelligent Robots and Systems, 2010.
180
+
181
+ [20] T. Hermans, J. M. Rehg, and A. Bobick. Guided pushing for object singulation. In International Conference on Intelligent Robots and Systems, 2012.
182
+
183
+ [21] A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. In International Conference on Intelligent Robots and Systems (IROS), 2018.
184
+
185
+ [22] P. Ni, W. Zhang, H. Zhang, and Q. Cao. Learning efficient push and grasp policy in a totebox from simulation. Advanced Robotics, 34(13):873-887, 2020.
186
+
187
+ [23] C. Correa, J. Mahler, M. Danielczuk, and K. Goldberg. Robust toppling for vacuum suction grasping. In International Conference on Automation Science and Engineering (CASE), 2019.
188
+
189
+ [24] L. Y. Chang, S. S. Srinivasa, and N. S. Pollard. Planning pre-grasp manipulation for transport tasks. In International Conference on Robotics and Automation, 2010.
190
+
191
+ [25] J. E. King, M. Klingensmith, C. M. Dellin, M. R. Dogar, P. Velagapudi, N. S. Pollard, and S. S. Srinivasa. Pregrasp manipulation as trajectory optimization. In Robotics: Science and Systems. Berlin, 2013.
192
+
193
+ [26] D. Kappler, L. Y. Chang, N. S. Pollard, T. Asfour, and R. Dillmann. Templates for pre-grasp sliding interactions. Robotics and Autonomous Systems, 2012.
194
+
195
+ [27] D. Omrčen, C. Böge, T. Asfour, A. Ude, and R. Dillmann. Autonomous acquisition of pushing actions to support object grasping with a humanoid robot. In International Conference on Humanoid Robots, 2009.
196
+
197
+ [28] C. Eppner and O. Brock. Planning grasp strategies that exploit environmental constraints. In International Conference on Robotics and Automation (ICRA), 2015.
198
+
199
+ [29] Z. Sun, K. Yuan, W. Hu, C. Yang, and Z. Li. Learning pregrasp manipulation of objects from ungraspable poses. In International Conference on Robotics and Automation (ICRA), 2020.
200
+
201
+ [30] W. Zhou and D. Held. Learning to grasp the ungraspable with emergent extrinsic dexterity. In ICRA 2022 Workshop: Reinforcement Learning for Contact-Rich Manipulation, 2022.
202
+
203
+ [31] C. Song and A. Boularias. Learning to slide unknown objects with differentiable physics simulations. arXiv:2005.05456, 2020.
204
+
205
+ [32] R. S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2):181-211, 1999.
206
+
207
+ [33] G. Konidaris, L. P. Kaelbling, and T. Lozano-Perez. From skills to symbols: Learning symbolic representations for abstract high-level planning. Journal of Artificial Intelligence Research, 2018.
208
+
209
+ [34] B. Calli, A. Walsman, A. Singh, S. Srinivasa, P. Abbeel, and A. M. Dollar. Benchmarking in manipulation research: The ycb object and model set and benchmarking protocols. arXiv:1502.03143, 2015.
210
+
211
+ [35] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017.
212
+
213
+ [36] J. Liang, M. Sharma, A. LaGrassa, S. Vats, S. Saxena, and O. Kroemer. Search-based task planning with learned skill effect models for lifelong robotic manipulation. arXiv:2109.08771, 2021.
214
+
215
+ [37] R. E. Korf. Real-time heuristic search. Artificial intelligence, 42(2-3):189-211, 1990.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/CC4JMO4dzg/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,177 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § LEARNING PRECONDITIONS OF HYBRID FORCE-VELOCITY CONTROLLERS FOR CONTACT-RICH MANIPULATION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Robots need to manipulate objects in constrained environments like shelves and cabinets when assisting humans in everyday settings like homes and offices. These constraints make manipulation difficult by reducing grasp accessibility, so robots need to use non-prehensile strategies that leverage object-environment contacts to perform manipulation tasks. To tackle the challenge of planning and controlling contact-rich behaviors in such settings, this work uses Hybrid Force-Velocity Controllers (HFVCs) as the skill representation and plans skill sequences with learned preconditions. While HFVCs naturally enable robust and compliant contact-rich behaviors, solvers that synthesize them have traditionally relied on precise object models and closed-loop feedback on object pose, which are difficult to obtain in constrained environments due to occlusions. We first relax HFVCs' need for precise models and feedback with our HFVC synthesis framework, then learn a point-cloud-based precondition function to classify where HFVC executions will still be successful despite modeling inaccuracies. Finally, we use the learned precondition in a search-based task planner to complete contact-rich manipulation tasks in a shelf domain. Our method achieves a task success rate of ${73.2}\%$ , outperforming the ${51.5}\%$ achieved by a baseline without the learned precondition. While the precondition function is trained in simulation, it can also transfer to a real-world setup without further fine-tuning.
12
+
13
+ Keywords: Contact-Rich Manipulation, Hybrid Force-Velocity Controllers, Precondition Learning
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ Robots operating in human environments, like homes and offices, need to manipulate objects in constrained environments like shelves and cabinets. These environments introduce challenges in both action and perception. Environmental constraints reduce grasp accessibility, so robots must use non-prehensile motions that leverage object-environment contacts to perform manipulation tasks. For example, a book placed in the corner of a shelf has no collision-free antipodal grasps, but a robot can retrieve the book by first pivoting or sliding the book to reveal a grasp. Environmental constraints also introduce occlusions both before and during robot-object interaction, making precise object modeling and closed-loop vision-based feedback impractical.
18
+
19
+ In this work, we tackle these challenges by first choosing hybrid force-velocity controllers (HFVCs) that use force control to maintain robot-object contacts and velocity control to achieve the desired motion. Prior works in HFVCs require accurate object models, object trajectories, known contact modes, and closed-loop feedback of object pose and contacts. Our work relaxes these requirements by allowing HFVC synthesis and execution in more realistic settings with incomplete models, partial observations, and without closed-loop object feedback. While HFVCs are naturally robust to small modeling errors and collisions, large model mismatches inherent in constrained manipulation may lead to unsuccessful motions. To address this, we learn a precondition function that predicts when an HFVC execution will or will not be successful. The precondition function is a neural network that takes as input segmented point clouds of the scene and the HFVC skill parameters. It is trained entirely in simulation, and using point clouds without color information enables easier sim-to-real transfer. It filters for successful HFVC actions, and we use it in an online search-based task planner to reactively plan sequences of HFVC and Pick-and-Place skills. In our experiments, our approach allows a robot to slide, topple, push, and pivot objects as needed to manipulate cuboid and cylindrical objects in an occluded shelf environment. See Figure 1 for an overview of the proposed approach. For supplementary materials and videos, visit https://sites.google.com/view/constrained-manipulation/.
20
+
21
+ The main contributions of our paper are: 1) A compliant manipulation skill built on an HFVC synthesis framework that relaxes previous requirements on object and environment modeling. This skill can achieve diverse motions like pushing, pivoting, and sliding by generating different parameters. 2) A point-cloud-based precondition function for this HFVC skill that predicts whether an HFVC execution with the given parameter and current observations would be successful, as some may fail due to inaccurate modeling. 3) A search-based planner that combines the HFVC skill with a Pick-and-Place skill to complete contact-rich manipulation tasks in an occluded shelf domain.
22
+
23
+ § 2 RELATED WORKS
24
+
25
+ Nonprehensile Manipulation. Many works have studied planning dexterous contact-rich manipulation. If we know the object's geometric and dynamics models and relevant contacts, then traditional methods can synthesize controllers that lead to complex behaviors like rolling and slipping [1]. Recent works make planning multi-step nonprehensile manipulation more efficient by directly optimizing with contact modes $\left\lbrack {2,3,4,5,6,7,8}\right\rbrack$ . Some $\left\lbrack {7,8}\right\rbrack$ leverage the formulation of HFVCs to effectively model contact-rich behavior. Other works focus on pushing, in part because the planar constraints of table-top setups reduce the number of possible object movements and robot actions. The authors of [9] take a trajectory optimization perspective and optimize for time to push an object to a goal. The authors of [10] explore pushing a target object in clutter while allowing some objects in the clutter to move, even if they are not the target object. These model-based methods are often limited by strict requirements for mathematical modeling of objects and environments. In [11], the authors relax assumptions on object models by jointly estimating and controlling contact configurations for $2\mathrm{D}$ polygons on a flat surface with a single point contact. However, like previous works, they lack high-level policies that can switch between different nonprehensile strategies to complete multi-step tasks [12].
26
+
27
+ Learning approaches can relax modeling assumptions for nonprehensile manipulation. Some works learn dynamics models [13, 14], while others directly learn policies [15, 16, 17]. These works also make additional assumptions on objects and environments, such as uniform objects and full state information, and they do not address planning with different types of nonprehensile strategies.
28
+
29
+ Pregrasp Manipulation. Another area of related works is pregrasp manipulation, where a robot must perform nonprehensile motions not to directly manipulate an object but to enable future grasps. A common setting is grasping in clutter, where a target object must be first singulated before it can be grasped. Singulation is usually done with a pushing policy to maximize downstream grasp, and it can be hardcoded [18], planned [19], or learned [20, 21, 22, 23]. In singulation, objects that prevent access to a target object can be pushed away. In constrained environments, it is the unmovable environment, like tables or shelf walls, that prevent grasping. Under this context, many works study how to push an object on a table surface over the table edge to expose a grasp. Some works plan with known object models [24, 25, 26], learned dynamics [27], or explicit surface constraints [28], and some directly learn an RL policy $\left\lbrack {{29},{30}}\right\rbrack$ . Recent works also proposed online adaptations of object dynamics, like mass and center of mass, to plan for table-edge grasps [31]. While these works show how nonprehensile manipulation can enable downstream grasps, they typically cannot plan with multiple types of nonprehensile behaviors. Furthermore, they are only concerned with obtaining a grasp, and not object manipulation in general, which sometimes may not require any grasps.
30
+
31
+ [figure]
32
+
33
+ Figure 1: Overview of our approach that manipulates objects in constrained environments by planning with Hybrid Force-Velocity Controllers (HFVCs). Given partial point clouds, we first estimate object-environment segmentations and object and environment geometries. These are used to generate HFVC parameters, which are object subgoal poses and robot-object contact points. Due to model and feedback inaccuracies, not all generated parameters will lead to successful HFVC executions. As such, we learn the HFVC precondition, which takes as input segmented point clouds and skill parameters and predicts skill success. The planner uses the subgoal poses from the successful parameters to find the current best action, which is a (skill, parameter) tuple. The robot then executes this action, and replanning is done as needed.
34
+
35
+ § 3 METHOD
36
+
37
+ Problem Definition and Assumptions. We tackle the problem of moving rigid objects from a start to a goal pose in constrained environments by a robot arm. The start and goal poses may be in different stable poses, and the object may not have any collision-free antipodal grasps at these poses. There is only one movable object in the environment. For actions, the robot arm can perform joint torque control. For perception, we assume access to an end-effector force-torque sensor and partial point clouds from an RGB-D camera, from which we can estimate segmentation masks and object poses. For skill parameter generation, we assume object geometries are similar to known geometric primitives, and that we can estimate these primitive shapes and environmental constraints from the segmented point clouds. Lastly, we assume object dynamic properties are within a reasonable range that enables nonprehensile manipulation by our robot arm.
38
+
39
+ Approach Overview. At the high level, the inputs to our system are partial point clouds that represent the current observation, and the outputs are actions represented as parameterized skills. Our approach has three main components. The first is synthesizing HFVCs that allow for compliant execution with inaccurate object models and feedback. The second is learning the HFVC skill precondition to filter out motions that will likely fail due to modeling mismatches. The third is using the learned precondition to plan sequences of skills for object manipulation tasks in constrained environments. The planner uses subgoals from the skill parameters as the skill-level transition model. The planner replans if the reached state deviates a lot from the subgoal. See Figure 1.
40
+
41
+ Parameterized Skill Formulation. We follow the options formulation of skills [32, 33]. We denote a parameterized skill as $o$ with parameters $\theta \in \Theta$ . In our work, a parameterized skill $o$ has five elements: a parameter generator that generates both feasible and infeasible $\theta$ , a precondition function that classifies skill success given current observations and skill parameters, a controller that performs the skill, a termination condition that tells when the skill should stop, and a skill-level dynamics model that predicts the next state after skill execution terminates. We assume the parameter generator, controller, and termination conditions are given. We assume skills have subgoal parameters, which contains information about the next world state if skill execution is successful. For example, an HFVC skill parameter will contain the desired object pose, and the planner assumes the object will reach the desired pose if 1) that parameter satisfies preconditions and 2) the HFVC commands are computed using that parameter.
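+
+ For concreteness, the five elements could be grouped as in the following Python sketch (our illustration of the interface, not the authors' code):
+
+     from dataclasses import dataclass
+     from typing import Any, Callable, Iterable
+     @dataclass
+     class ParameterizedSkill:
+         """Sketch of the five skill elements (illustrative interface only)."""
+         parameter_generator: Callable[[Any], Iterable[Any]]   # candidate thetas, feasible and infeasible
+         precondition: Callable[[Any, Any], float]             # P(success | observation, theta)
+         controller: Callable[[Any, Any], None]                # executes the skill for a given theta
+         termination: Callable[[Any], bool]                    # when execution should stop
+         dynamics: Callable[[Any, Any], Any]                   # predicted next state after success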
42
+
43
+ § 3.1 HYBRID FORCE VELOCITY CONTROLLER SKILL
44
+
45
+ The HFVC skill moves an object using a given skill parameter, which contains the initial object pose, the desired object pose and the robot-object contact(s). To achieve the desired object motions, we use an optimization-based solver to output a sequence of hybrid force and velocity commands for the robot to follow. Below, we explain how parameters are generated for both precondition learning and planning, how HFVC commands are synthesized and executed from these parameters, and how we learn the precondition function to classify successful parameters.
46
+
47
+ [figure]
48
+
49
+ Figure 2: Our HFVC parameter generator gives diverse contact-rich behaviors. Each column is an HFVC skill execution. Top: initial states. Bottom: final states. Left to right: sliding, toppling, pivoting, and pushing.
50
+
51
+ HFVC Skill Parameter Generation. The HFVC skill parameter contains the initial and subgoal object poses and the robot-object contact point(s). We generate two types of subgoal poses: ones that are in the same stable pose as the current pose, and ones that are in "neighboring" stable poses (i.e., toppled by ${90}^{ \circ }$ ). Initial robot-object contact points come from evenly spaced points on the surface of object primitives. We filter out contact points at which the robot would collide with the environment either for the initial or subgoal object poses. This parameter generation scheme can generate a diverse range of $3\mathrm{D}$ behaviors such as pushing, sliding, pivoting, and toppling. See Figure 2.
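+
+ Candidate contact points can come, for example, from a regular grid on the primitive's faces, after which candidates that put the robot in collision at the initial or subgoal pose are discarded (a sketch of the sampling step under our own assumptions, not the paper's implementation):
+
+     import itertools
+     import numpy as np
+     def cuboid_contact_candidates(half_extents, n=3):
+         """Evenly spaced candidate contact points on the faces of a cuboid
+         primitive, expressed in the object frame (cylinders are analogous)."""
+         pts = []
+         for axis, sign in itertools.product(range(3), (-1.0, 1.0)):
+             u, v = [i for i in range(3) if i != axis]
+             for a, b in itertools.product(np.linspace(-half_extents[u], half_extents[u], n),
+                                           np.linspace(-half_extents[v], half_extents[v], n)):
+                 p = np.zeros(3)
+                 p[axis] = sign * half_extents[axis]
+                 p[u], p[v] = a, b
+                 pts.append(p)
+         return np.unique(np.round(pts, 6), axis=0)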
52
+
53
+ We assume known geometric primitives (cuboids and cylinders) for parameter generation, but this does not significantly affect the focus of this work. One reason is that many real-world objects resemble cuboids and cylinders, especially in interacting with constrained environments like shelves and cabinets. Another is that this assumption is only made for parameter generation (the precondition directly takes in point clouds), and it can still generate a wide range of parameters with only some satisfying preconditions. As such, the object primitive assumption does not overly limit the types of behaviors the HFVC skill can achieve, and the parameter generator is useful even if the real object cannot be perfectly represented by the primitives.
54
+
55
+ HFVC Synthesis from Subgoals. Our controller synthesis algorithm generates hybrid force-velocity commands for the robot to execute. Initially, the algorithm needs the current object pose, desired object pose, initial estimated object-environment contacts, and initial robot-object contact point. During HFVC execution, HFVC commands are computed at a frequency of ${20}\mathrm{{Hz}}$ , and this procedure only needs the current estimated object pose (see next section).
56
+
57
+ The force and velocity commands are computed by minimizing object pose errors. This minimization is done subject to velocity constraints as well as contact force constraints imposed by the fixed contact modes. There are three steps in this algorithm. First, we use quadratic programming to optimize the desired hand velocity subject to contact mode constraints, where the objective is matching the hand velocity with the desired object velocity from the current to the goal pose. Second, we compute the force control direction, which is chosen to be as close as possible to the robot-object contact normal while being as orthogonal as possible to the desired hand velocity direction. Third, we solve for the force control magnitude by trying to maintain a small amount of normal contact forces on every non-separating environment contact under the quasi-dynamic assumption. If the robot-object contact normal is parallel to the desired hand velocity direction, we only do velocity control (this means the robot is pushing the object). Please see more details in the Appendix.
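+
+ As one concrete reading of the second step (our simplified sketch, not the paper's solver), the force direction can be obtained by removing from the contact normal its component along the desired hand velocity and renormalizing, with a fallback to pure velocity control when the two are (near-)parallel:
+
+     import numpy as np
+     def force_control_direction(contact_normal, desired_hand_velocity, eps=1e-6):
+         """Unit force direction close to the contact normal but orthogonal to the
+         desired hand velocity; returns None to signal velocity-only control (pushing)."""
+         n = contact_normal / np.linalg.norm(contact_normal)
+         v = desired_hand_velocity / (np.linalg.norm(desired_hand_velocity) + eps)
+         f = n - np.dot(n, v) * v              # drop the component of n along v
+         if np.linalg.norm(f) < eps:           # normal (near-)parallel to velocity
+             return None
+         return f / np.linalg.norm(f)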
58
+
59
+ HFVC Execution with Partial Information. Computing new HFVC commands during execution requires object pose feedback, which cannot be directly obtained due to occlusions in constrained
60
+
61
+ environments. As such, we estimate object poses from robot proprioception, and we constrain HFVC velocity commands to prevent inaccuracies in this estimation from drastically altering the execution behavior. See Figure 3 for an illustration.
62
+
63
+ To estimate the current object pose, we first linearly interpolate two trajectories - one for the object from the initial pose to the subgoal pose, and another for the end-effector from the initial contact pose to the final contact pose. During execution, the estimated object pose is the one from the interpolated object trajectory that corresponds to the interpolated end-effector position closest to the current end-effector position.
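+
+ A minimal sketch of this lookup (our illustration; trajectories are represented as arrays of interpolated waypoints):
+
+     import numpy as np
+     def estimate_object_pose(ee_pos_now, ee_waypoints, object_poses):
+         """Return the interpolated object pose whose matching end-effector
+         waypoint is closest to the current end-effector position.
+         ee_waypoints: (K, 3) interpolated end-effector positions.
+         object_poses: length-K sequence of interpolated object poses."""
+         idx = np.argmin(np.linalg.norm(ee_waypoints - ee_pos_now, axis=1))
+         return object_poses[idx]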
64
+
65
+ [figure]
66
+
67
+ Figure 3: HFVC execution with partial information with example pivoting motion (start pose is object on the left). Gray objects represent linearly interpolated object poses used by proprioception-based object pose estimation. Object translation is exaggerated for visual clarity. Blue arrow represents direction of force command, purple is velocity. Velocity commands are projected onto the (green) plane containing the interpolated end-effector trajectory.
68
+
69
+ HFVC is intrinsically robust to pose estimation errors in the direction of the force controller, which moves the end-effector to maintain the desired force and robot-object contact. This is not the case for errors in the direction of the velocity controller, which keeps commanding motion as if the object were still on the interpolated trajectory, leading to execution failure. To alleviate this problem, we project velocity commands onto the plane that contains the interpolated end-effector trajectory, preventing the end-effector from traveling too far from the intended motion. Note that this is not the same as projecting onto the interpolated end-effector trajectory itself, which would overly constrain HFVC actions and may conflict with the force controller commands.
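+
+ The projection step itself is compact (our sketch); the plane is given by its unit normal, e.g., the normal of the plane containing the interpolated end-effector trajectory:
+
+     import numpy as np
+     def project_velocity_onto_plane(v_cmd, plane_normal):
+         """Remove the commanded velocity's component along the plane normal, so the
+         end-effector stays close to the plane of the interpolated trajectory."""
+         n = plane_normal / np.linalg.norm(plane_normal)
+         return v_cmd - np.dot(v_cmd, n) * n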
70
+
71
+ § 3.2 LEARNING HFVC PRECONDITIONS
72
+
73
+ Due to errors in visual perception (e.g. noisy point clouds), real-time feedback (noisy end-effector force-torque sensing, inaccurate object pose estimations), controls (e.g. HFVC solver does not take into account how the low-level controller actually achieves the commanded velocities and forces), and robot-object contacts (they are often non-sticking in practice), HFVC executions are not always successful. This motivates learning the HFVC skill precondition. The inputs to the precondition are segmented point clouds and the skill parameter. The output is whether or not executing the HFVC skill at the given state with the given parameter will be successful.
74
+
75
+ Precondition Success Definition. An HFVC skill execution is considered successful if it satisfies three conditions: 1) the object moved more than ${1.5}\mathrm{\;{cm}}$ or ${20}^{ \circ }$ during skill execution, 2) the final pose is within $7\mathrm{\;{cm}}$ and ${60}^{ \circ }$ of the subgoal pose, and 3) the object does not move after the end-effector leaves contact. Since HFVC executions with model mismatches rarely reach exactly the intended subgoals, this loose subgoal vicinity requirement allows the planner to execute more HFVC skills and makes planning feasible.
76
+
77
+ HFVC Data. The precondition is trained with HFVC execution data generated in simulation with cuboid and cylindrical-shaped objects from the YCB dataset [34]. To improve data diversity, we randomize object geometries by sampling non-uniform scales along object principal axes and randomly setting object mass and friction values. The ranges for both scaling and dynamics values are chosen so that the resulting objects are feasible for manipulation in our shelf domain. See Figure 4 for a visualization of the objects used. We also randomly sample the shelf dimensions, as well as the initial object pose and stable pose. From simulation, we obtain ground-truth segmented point clouds and object poses after skill execution, the latter of which are used to compute ground-truth skill precondition labels. We use Nvidia IsaacGym, a GPU-accelerated robotics simulator${}^{1}$.
78
+
79
+ Model Architecture. The HFVC precondition is a neural network with a PointNet++ [35] backbone. The input point cloud is centered and cropped around the midpoint between the initial object position and the subgoal object position. This improves data efficiency, as the network only has to reason about environment points relevant to object-environment interactions. The features of each point include its 3D coordinates and its segmentation label. For object points, we append the skill parameter as additional features; these parameter dimensions are set to all zeros for environment points. The PointNet++ backbone produces per-point embeddings. We take the mean of the embeddings corresponding to the object points to produce a global object embedding, which is passed to a multi-layer perceptron (MLP) to produce the final precondition prediction. The entire network is trained with a binary cross-entropy loss.
80
+
81
+ ${}^{1}$ https://developer.nvidia.com/isaac-gym
82
+
83
+ [figure]
84
+
85
+ Figure 4: Object-in-Shelf Task Domain. Left: cuboid and cylindrical YCB objects used in simulation for training the HFVC precondition. Middle: objects used in real-world experiments. Right: real-world setup.
86
+
87
+ § 3.3 SEARCH-BASED TASK PLANNING
88
+
89
+ Once the precondition is learned, we can use it along with the skill parameter generator to plan for tasks. We perform task planning in a search-based manner on a directed graph, where each node of the graph corresponds to a planning state, and each directed edge corresponds to a (skill, parameter) whose execution from the source state would result in the next state. In our domain, a task is specified by an initial and a goal object pose, and the planning state is the object pose. For parameter generation, if the goal pose is close enough to the current pose, then it is included in the list of subgoal poses, so the planner can find plans that take the object directly to the goal pose. The output of the planner is a sequence of parameterized skills to be executed from the initial state.
90
+
91
+ We interleave planning graph construction with graph search, and search is performed in a best-first manner, similar to [36], which uses ${\mathrm{A}}^{ * }$ to perform task planning. However, it is difficult to efficiently perform A*-style optimal planning in our domain. This is due to several factors: 1) inaccurate transition models, because we directly use subgoals as the next state; 2) expensive edge evaluation, because node generation requires storing new point clouds for future precondition inference; and 3) a high branching factor, with many possible skill parameters at a given state. As such, we instead use Real-Time A* [37], where A* is performed until a search budget is exhausted or a path to the goal is found. Then, the robot executes the first (skill, parameter) tuple of the path that reaches the best leaf node found so far. After skill execution, if the observed next state is not close enough to the expected next state, or we have reached the end of the current plan, we replan.
92
+
93
+ § 4 EXPERIMENTS
94
+
95
+ Our experiments focus on evaluating our approach in a shelf domain, where a robot needs to move an object from a start pose to a goal pose. We perform three types of evaluations. The first gauges how much proprioception-based object pose estimation and velocity command projections improve HFVC execution success. The second evaluates HFVC skill precondition training performance. The third evaluates overall task performance by running the planner with the learned HFVC skill precondition. A key comparison we make is between planning with and without the learned precondition function. Lastly, we demonstrate that our planner and learned precondition can be applied to a real-world robot setup without further model fine-tuning.
96
+
97
+ Task Domain. Our task domain consists of a 7 DoF Franka Panda robot arm, a rectangular shelf, and a set of objects for the robot to manipulate. Each task instance is specified by a different
98
+
99
+                  Skill Success   SG-ADD (cm)     S-SG-ADD (cm)
+   Ours           52.3%           6.9 ± 8.1       3.4 ± 1.9
+   No-Feedback    38.6%           10.2 ± 10.3     3.9 ± 1.9
+   No-Constraint  46.0%           7.6 ± 8.1       3.6 ± 2.1
+   No-Both        38.3%           10.3 ± 10.3     3.8 ± 1.9
116
+
117
+ Table 1: Skill execution evaluations for our approach vs. ablations that do not use proprioception-based object pose estimation and planar velocity constraints. We report skill success rate and error distance between the subgoal pose and the actual reached pose over all executions (SG-ADD) and over just the successful ones (S-SG-ADD). Numbers after $\pm$ are standard deviations.
118
+
119
+               Ours   Full-PC   GT-Primitive   No-Params
+   Accuracy    79%    76%       85%            65%
+   Precision   78%    73%       85%            60%
+   Recall      77%    79%       83%            79%
133
+
134
+ Table 2: Precondition training results with different input representations.
135
+
136
+ object geometry, shelf dimensions, shelf pose, initial object pose, and goal object pose. The robot successfully completes the task if it executes a sequence of (skill, parameter) tuples that brings the object from the initial pose to the goal pose, with a goal threshold of ${1.5}\mathrm{\;{cm}}$ for translation and ${10}^{ \circ }$ for rotation. In addition to the HFVC skill, the planner also has access to a Pick-and-Place skill that can move objects via grasps if grasps are available.
137
+
138
+ § 4.1 HFVC EXECUTION WITH PARTIAL INFORMATION.
139
+
140
+ We first demonstrate the value of estimating object pose from proprioception and applying planar constraints on velocity commands. One ablation runs HFVC with "open-loop" pose estimation, where the current object pose is indexed from the interpolated trajectory with a time-based index (No-Feedback); this assumes the object follows the interpolated trajectory at a fixed speed. Another runs HFVC with proprioceptive feedback but without the planar velocity constraints (No-Constraint). The third variant runs HFVC without both modifications (No-Both). We report the execution success rate and the object's average discrepancy distance (ADD) between the achieved and subgoal poses, computed over all executions (SG-ADD) and over only the successful executions (S-SG-ADD). A total of ${7.5}\mathrm{k}$ skill executions per method were used in these evaluations. See results in Table 1. Our approach achieves a 52.3% success rate, higher than all ablations, with removing proprioception-based pose feedback (No-Feedback, 38.6%) causing the largest drop. These results demonstrate that both proprioceptive object pose feedback and planar velocity constraints improve HFVC skill executions.
141
+
142
+ § 4.2 PRECONDITION LEARNING
143
+
144
+ We generate the HFVC execution dataset with 6 cuboid-shaped and 2 cylindrical-shaped objects from the YCB dataset (see Figure 4). The train-test split is done across object scaling factors, with the test set containing the smallest and largest scales. To see how the network would perform with full instead of partial observations, we include one ablation that uses full point clouds (Full-PC) and another that uses vertices of ground-truth object primitive meshes (GT-Primitive). We also train a variant that does not use skill parameter features (No-Params). See Table 2. Our method using partial point clouds is on par with Full-PC, and GT-Primitive performs the best. However, we cannot directly use GT-Primitive in planning because we only have access to partial point clouds; instead, we include a comparison using estimated object geometries in the task planning experiments below. No-Params performs the worst as expected, but it still improves over random guessing due to biases in our collected data.
145
+
146
+ § 4.3 TASK PLANNING EXPERIMENTS
147
+
148
+ We evaluate task planning performance across several ablations in simulation. For each method, we run trials across the 8 YCB objects and 8 task scenarios, with 5 trials per object-scenario pair, resulting in a total of 240 trials per method. Each trial samples different initial and goal poses, and the object geometries used in task evaluation are not in the training set. A task scenario specifies whether or not the initial object pose and goal pose are close to the shelf wall (4 variants) and whether they have the same stable pose (2 variants), for a total of 8 scenarios. We report the overall task success rate, as well as the average planning time and plan length over successful trials.
149
+
150
+                    Ours          Est-Primitive   No-PC         Only-Pick-Place   No-Replan
+   Plan Success     73.2%         61.1%           51.5%         27.4%             28.0%
+   Plan Time (s)    43.1 ± 23.0   44.5 ± 26.5     36.5 ± 20.3   6.7 ± 11.6        22.4 ± 13.1
+   Plan Length      3.5 ± 1.3     3.2 ± 1.5       3.6 ± 1.5     1.6 ± 0.8         1.9 ± 1.0
164
+
165
+ Table 3: Task performance of our approach using partial point clouds vs. using estimated object primitives (Est-Primitive), not using learned preconditions (No-PC), only using Pick-and-Place (Only-Pick-Place), and not doing replanning (No-Replan). Plan time includes replanning time. Numbers after $\pm$ are standard deviations. Plan time and length statistics are computed only over successful trials.
166
+
167
+ The first ablation is on using vertices from estimated object primitives (Est-Primitive) as the input to the precondition, instead of partial point clouds. The second is planning without the learned precondition, so the planner treats all generated parameters as feasible (No-PC). The third evaluates the usefulness of the HFVC skill in our domain by planning only with the Pick-and-Place skill (Only-Pick-Place). The last ablation does not replan and executes the entire found path from the initial state without feedback (No-Replan). For this method, we double the planning budget to 60s, so the planner is more likely to find a path all the way to the goal state.
168
+
169
+ See Table 3. The proposed approach achieves a success rate of 73.2%, higher than Est-Primitive (61.1%) and No-PC (51.5%), and much higher than Only-Pick-Place (27.4%) and No-Replan (28.0%). The drop in performance of Est-Primitive shows that while using ground-truth primitives gives better precondition predictions, this improvement does not carry over when primitives must be estimated from partial point clouds. The other ablations show the importance of learning HFVC preconditions, of using HFVC skills in constrained environments, and of replanning to compensate for inaccurate subgoal transition models.
170
+
171
+ Real-world Demonstration. Lastly, we demonstrate that our planner with the learned precondition can operate in the real world. While the learned precondition works on real-world point clouds without further training, we had to tune low-level controller gains to reproduce similar contact-rich motions on the real robot. As in simulation, our planner is able to find plans that include a variety of contact-rich behaviors, like pushing, sliding, toppling, and pivoting, to manipulate objects on the shelf. Please see the supplementary materials for more details and real-world videos.
172
+
173
+ Failure Modes and Limitations. The most common failure mode is precondition errors that lead the planner to find infeasible plans or to not find a plan when one exists. This may be addressed by further improving precondition performance with more data, or by enforcing domain-specific invariances. In addition, the object primitive assumption prevents the parameter generator from supporting more complex objects; this may be resolved by learning a parameter generator that operates directly on partial point clouds. For execution, our planner does not perform online precondition adaptation for unexpected object dynamics; doing so may improve task performance for objects with out-of-distribution dynamics parameters. Lastly, our method only handles a single movable object in the scene. Manipulating multiple objects may require learned perception systems and skill-level dynamics models that work in clutter.
174
+
175
+ § 5 CONCLUSION
176
+
177
+ Contact-rich manipulation behaviors can be naturally expressed by HFVCs. However, HFVCs have traditionally relied on precise object models and closed-loop object feedback, hindering their applications in more realistic settings where such information cannot be readily obtained. Our work shows that it is possible to 1) modify HFVCs to not rely on privileged information and 2) learn where HFVCs are successful despite inaccurate models so that 3) a planner can plan sequences of HFVC and Pick-and-Place skills to complete challenging contact-rich tasks in constrained environments.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/DE8rdNuGj_7/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,217 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # LEADER: Learning Attention over Driving Behaviors
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Uncertainty on human behaviors poses a significant challenge to autonomous driving in crowded urban environments. The partially observable Markov decision processes (POMDPs) offer a principled framework for planning under uncertainty, often leveraging Monte Carlo sampling to achieve online performance for complex tasks. However, sampling also raises safety concerns by potentially missing critical events. To address this, we propose a new algorithm, LEarning Attention over Driving bEhavioRs (LEADER), that learns to attend to critical human behaviors during planning. LEADER learns a neural network generator to provide attention over human behaviors in real-time situations. It integrates the attention into a belief-space planner, using importance sampling to bias reasoning towards critical events. To train the algorithm, we let the attention generator and the planner form a min-max game. By solving the min-max game, LEADER learns to perform risk-aware planning without human labeling. ${}^{1}$
12
+
13
+ ## 1 Introduction
14
+
15
+ Robots operating in public spaces often contend with challenging crowded environments. A representative example is autonomous driving in busy urban traffic, where a robot vehicle must interact with many human traffic participants. A significant challenge is posed by the vast amount of uncertainty in human behaviors, e.g., their intentions and driving styles. The partially observable Markov decision processes (POMDPs) [1] offer a principled framework for planning under uncertainty. However, optimal POMDP planning is computationally expensive. To achieve real-time performance for complex problems, practical POMDP planners [2, 3] often leverage Monte-Carlo (MC) sampling to make approximate decisions, i.e., they sample a subset of representative future scenarios and condition decision-making on the sampled futures. They have shown success in various robotics applications, including autonomous driving [4], navigation [5, 6], and manipulation [7, 8].
16
+
17
+ Sampling-based POMDP planning, however, may compromise safety. It is difficult to sample future events with low probabilities, yet some of them lead to disastrous outcomes. In autonomous driving, rare but critical events often arise from adversarial human behaviors, such as recklessly overtaking the ego-vehicle or making illegal U-turns. Failing to consider them would lead to hazards. Earlier work [9] indicates that re-weighting future events using importance sampling leads to improved performance. The approach requires an importance distribution for sampling events. In autonomous driving, an importance distribution re-weights the different intentions of human participants, e.g., the routes they intend to take. A higher weight means an increased probability of sampling an intended route, and thus examining its consequences more carefully during planning. However, it is difficult to handcraft the importance distribution, due to the complex real-time situations in crowded environments [9].
18
+
19
+ We propose a new algorithm, LEADER, which learns importance distributions both from and for sampling-based POMDP planning. The algorithm uses a neural network attention generator to produce importance distributions over human behavioral intentions for real-time situations. An online POMDP planner then consumes the importance distribution to make risk-aware decisions, applying importance sampling to bias reasoning towards critical human behaviors. To train the algorithm, we treat the attention generator and the planner as opponents in a min-max game. The attention generator seeks to minimize the planner's value, i.e., the expected cumulative return; this corresponds to highlighting the most adversarial human behaviors, learned from experience. The planner, on the other hand, seeks to maximize the value conditioned on the learned attention, which corresponds to finding the best conditional policy using look-ahead search. By solving the min-max game, the algorithm learns to perform risk-aware planning without human labeling. In our experiments, we evaluate the performance of LEADER for autonomous driving in dense urban traffic. Results show that LEADER significantly improves driving performance in terms of safety, efficiency, and smoothness, compared to both risk-aware planning and risk-aware reinforcement learning.
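+
+ To make the role of the learned attention concrete, the following is a schematic sketch (ours, not the LEADER implementation) of importance sampling over a single exo-agent's intentions: rollouts are sampled from the attention distribution q rather than the natural distribution p, and each return is re-weighted by p/q so the value estimate stays unbiased while critical intentions are examined more often:
+
+ ```python
+ import numpy as np
+
+ def importance_weighted_value(p, q, rollout_return, n_samples=100, rng=None):
+     """p, q: np.ndarray of natural and attention probabilities over K intentions (each sums to 1).
+     rollout_return(i) simulates a rollout under intention i and returns its value."""
+     rng = rng or np.random.default_rng()
+     idx = rng.choice(len(q), size=n_samples, p=q)      # sample intentions from the attention
+     weights = p[idx] / q[idx]                           # importance weights
+     returns = np.array([rollout_return(i) for i in idx])
+     return float(np.mean(weights * returns))
+ ```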
20
+
21
+ ---
22
+
23
+ ${}^{1}$ Code will be released upon acceptance of the paper.
24
+
25
+ ---
26
+
27
+ ![01963f97-2671-7b78-9b3c-f09ee5d34ee5_1_316_201_1166_516_0.jpg](images/01963f97-2671-7b78-9b3c-f09ee5d34ee5_1_316_201_1166_516_0.jpg)
28
+
29
+ Figure 1: Overview of LEADER. (a) LEADER contains a learning component (red) and a planning component (blue). The learning component includes: an attention generator ${G}_{\psi }$ that tries to generate attention $q$ over human behaviors, based on the current belief $b$ and observation $z$ from the environment; and a critic ${C}_{\varphi }$ that approximates the planner’s value estimate, $\widehat{V}$ , based on $b, z$ and the generated attention, $q$ . The planning component performs risk-aware planning using the learned attention $q$ . It decides an action $a$ to be executed in the environment and collects experience data. (b) Attention is defined as an importance distribution over human behavioral intentions. The upper box shows the probability of different intentions of the highlighted exo-agent in green, yellow and red, as well as how the attention generator maps the natural-occurrence probabilities to importance probabilities, by highlighting the most adversarial intention (red). (c) We train LEADER using three simulated real-life urban environments: Meskel Square in Addis Ababa, Ethiopia, Magic Roundabout in Swindon, UK, and Highway in Singapore.
30
+
31
+ ## 2 Background and Related Work
32
+
33
+ ### 2.1 POMDP Planning and Monte Carlo Sampling
34
+
35
+ The Partially Observable Markov Decision Process (POMDP) [1] models the interaction between the robot and an environment as a discrete-time stochastic process. A POMDP is written as a tuple $\left\langle {S, A, Z, T, R, O,\gamma ,{b}_{0}}\right\rangle$ with $S$ representing the space of all possible states of the world, $A$ denoting the space of all possible actions the robot can take, and $Z$ as the space of observations it can receive. Function $T\left( {s, a,{s}^{\prime }}\right) = p\left( {{s}^{\prime } \mid s, a}\right)$ represents the probability of the state transition from $s \in S$ to ${s}^{\prime } \in S$ by taking action $a \in A$ . The function $R\left( {s, a}\right)$ defines a real-valued reward specifying the desirability of taking action $a \in A$ at state $s \in S$ . The observation function $O\left( {{s}^{\prime }, a, z}\right) = p\left( {z \mid {s}^{\prime }, a}\right)$ specifies the probability of observing $z \in Z$ by taking action $a \in A$ to reach ${s}^{\prime } \in S$ . The discount factor $\gamma \in \lbrack 0,1)$ specifies the rate at which rewards depreciate over time. Because of the robot's perception limitations, the world's full state is unknown to the robot, but can be inferred in the form of beliefs, or probability distributions over $S$ . The robot starts with an initial belief ${b}_{0}$ at $t = 0$ , and updates it throughout an interaction trajectory using Bayes' rule [10], according to the actions taken
36
+
37
+ and observations received. POMDP planning searches for a closed-loop policy, ${\pi }^{ * } : B \rightarrow A$ , prescribing an action for any belief in the belief space $B$ , which maximizes the policy value, ${V}_{\pi }\left( {b}_{0}\right) =$ $\mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}_{t},\pi \left( {b}_{t}\right) }\right) \mid {b}_{0},\pi }\right\rbrack$ , i.e., the expected cumulative reward achieved in the current and future time steps, $t \geq 0$ , if the robot chooses actions according to policy $\pi$ and the updated belief ${b}_{t}$ , from the initial belief ${b}_{0}$ onwards. Discounting using $\gamma \in \lbrack 0,1)$ keeps the value bounded.
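+
+ For concreteness, the following minimal sketch (our own illustration, not the paper's implementation) shows a discrete Bayesian belief update matching the definitions above; `T(s, a, s2)` and `O(s2, a, z)` are hypothetical tabular model functions:
+
+ ```python
+ def belief_update(b, a, z, states, T, O):
+     """Bayes' rule update: b'(s') is proportional to O(s', a, z) * sum_s T(s, a, s') * b(s)."""
+     new_b = {s2: O(s2, a, z) * sum(T(s, a, s2) * p for s, p in b.items()) for s2 in states}
+     norm = sum(new_b.values())  # normalize so the updated belief sums to one
+     return {s2: p / norm for s2, p in new_b.items()}
+ ```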
38
+
39
+ Online POMDP planning often performs look-ahead search, constructing a belief tree starting from the current belief and branching with future actions and observations. Enumerating all possible futures, however, is often computationally intractable. Practical algorithms like DESPOT [2] leverage Monte-Carlo sampling and heuristic search to break the computational difficulty. DESPOT samples initial states and future trajectories using the POMDP simulative model. Denote a trajectory as $\zeta = \left( {{s}_{0},{a}_{1},{s}_{1},{z}_{1},{a}_{2},{s}_{2},{z}_{2},\ldots }\right)$ . The initial state ${s}_{0}$ is sampled from the current belief $b$ . Given any subsequent state ${s}_{t}$ and robot action ${a}_{t + 1}$ , the next state ${s}_{t + 1}$ and the observation ${z}_{t + 1}$ are sampled with a probability of $p\left( {{s}_{t + 1},{z}_{t + 1} \mid {s}_{t},{a}_{t + 1}}\right) = O\left( {{s}_{t + 1},{a}_{t + 1},{z}_{t + 1}}\right) T\left( {{s}_{t},{a}_{t + 1},{s}_{t + 1}}\right)$ . A DESPOT tree collates a set of sampled trajectories to approximately represent the future. Each node in the tree contains a set of sampled future states, forming an approximate future belief. The DESPOT tree branches with all possible actions and then sampled observations under each visited belief, effectively considering all candidate policies under the sampled scenarios. Figure 2(a) shows an example DESPOT tree.
40
+
41
+ DESPOT evaluates the value of a policy using Monte-Carlo estimation:
42
+
43
+ $$
44
+ {V}_{\pi }\left( b\right) = {\int }_{\zeta \sim p\left( {\cdot \mid b,\pi }\right) }{V}_{\zeta }{d\zeta } \approx \mathop{\sum }\limits_{{\zeta \in \Delta }}p\left( {\zeta \mid b,\pi }\right) {V}_{\zeta } \tag{1}
45
+ $$
46
+
47
+ where $\Delta$ is a set of trajectories sampled by applying $\pi$ . Also,
48
+
49
+ $$
50
+ p\left( {\zeta \mid b,\pi }\right) = b\left( {s}_{0}\right) \mathop{\prod }\limits_{{t = 0}}^{{H - 1}}p\left( {{s}_{t + 1},{z}_{t + 1} \mid {s}_{t},{a}_{t + 1}}\right) \tag{2}
51
+ $$
52
+
53
+ is the probability of a trajectory $\zeta$ being sampled, ${V}_{\zeta } = \mathop{\sum }\limits_{{t = 0}}^{{H - 1}}{\gamma }^{t}R\left( {{s}_{t},{a}_{t + 1}}\right)$ is the cumulative reward along $\zeta$ , and $H$ is the maximum look-ahead depth, or the planning horizon. By incrementally building a belief tree using sampled trajectories, DESPOT searches for the policy that provides the best value estimate, and outputs the optimal action for $b$ when the given planning time is exhausted.
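+
+ As a concrete illustration of the sampling scheme behind Eq. (1), the sketch below (our own, with hypothetical `step` and `policy` interfaces rather than DESPOT's actual code) estimates a candidate policy's value by averaging discounted returns of trajectories sampled from the belief:
+
+ ```python
+ import random
+
+ def sample_trajectory_return(b, policy, step, H, gamma):
+     """Discounted return V_zeta of one trajectory sampled from belief b under a candidate policy.
+
+     `b` maps states to probabilities, `step(s, a)` samples (s', z, r) from the POMDP model,
+     and `policy(history)` maps the action-observation history to the next action.
+     """
+     s = random.choices(list(b), weights=list(b.values()))[0]   # s0 ~ b(s0)
+     history, value, discount = [], 0.0, 1.0
+     for _ in range(H):
+         a = policy(history)
+         s, z, r = step(s, a)
+         history.append((a, z))
+         value += discount * r
+         discount *= gamma
+     return value
+
+ def mc_policy_value(b, policy, step, H, gamma, n=500):
+     """Monte-Carlo estimate of V_pi(b): average return over n sampled trajectories."""
+     return sum(sample_trajectory_return(b, policy, step, H, gamma) for _ in range(n)) / n
+ ```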
54
+
55
+ ### 2.2 Risk-Aware Planning and Learning Approaches
56
+
57
+ There are different techniques for risk-aware planning under uncertainty. Nyberg et al. [11] and Gilhuly et al. [12] proposed two measures for estimating safety risks along driving trajectories, with a focus on open-loop trajectory optimization. Huang et al. [13] modeled risk-aware planning as a chance-constrained POMDP to compute closed-loop policies that provide a low chance of violating safety constraints. Kim et al. [14] leveraged bi-directional belief-space solvers, bridging forward belief tree search with heuristics produced by an offline backward solver. Both methods improved the performance of POMDP planning in safety-critical domains, but at the cost of an expensive offline planning stage; thus, they can hardly be applied to large-scale problems. Most relevant to our method, Luo et al. [9] offered IS-DESPOT, which improves the performance of online POMDP planning by leveraging importance sampling (IS). It biases MC sampling towards critical scenarios according to an importance distribution provided by human experts, then computes a risk-aware policy under the re-weighted scenarios. Manually constructing the importance distribution, however, is difficult for complex problems such as driving in an urban crowd. In this paper, we propose a principled approach to learn importance distributions from experience and adapt them to real-time situations.
58
+
59
+ LEADER is also loosely connected to risk-aware reinforcement learning (RL). Kamran et al. [15] proposed a risk-aware Q-learning algorithm by punishing risky situations instead of only collision failures. Mirchevska et al. [16] proposed to combine DQN with a rule-based safety checker, masking out infeasible actions in the output layer. Eysenbach et al. [17] offered Ensemble-SAC (ESAC) for risk-aware RL, additionally learning an ensemble of reset policies to help the robot avoid irreversible states during training. However, because of low sample efficiency, the above model-free approaches were limited to small-scale tasks like lane changing in regulated highway traffic or controlling articulated robots for simple simulated tasks. Some prior work improved the risk-awareness of model-based RL that plans with learned models. Thomas et al. [18] proposed SMBPO, which learns a sufficiently large terminal cost for failure states. Berkenkamp et al. [19] focused on ensuring the stability of control. The model-based methods provided better sample efficiency. However, it is still difficult to learn a model for large-scale, safety-critical problems like driving in an urban crowd.
60
+
61
+ ## 3 Overview
62
+
63
+ LEADER learns to attend to the most critical human behaviors for risk-aware planning in urban driving. We define attention over the behavioral intentions of human participants. Assume there are $N$ exo-agents or traffic participants near the robot. Each of them may undertake one of a finite set of intentions, or future routes, such as keeping straight, turning left, merging into the right lane, etc. The actual intention of an exo-agent is not directly observable. At each time step, LEADER maintains both a belief $b$ and an importance distribution $q$ over the intention sets of the $N$ exo-agents. The belief $b$ specifies the natural occurrence probability of exo-agents’ intentions. It is inferred from the interaction history. The importance distribution $q$ specifies the attention over exo-agents’ intentions, determining the actual probability of sampling them during planning. We propose to learn the importance distribution, or the attention mechanism, from the experience of an online POMDP planner. We will use "importance distribution" and "attention" interchangeably in the remainder of this paper.
64
+
65
+ Particularly, the LEADER algorithm has two main components: a learner that generates attention for real-time situations, and a planner that consumes the attention to perform conditional planning. The generator uses a neural network, $q = {G}_{\psi }\left( {b, z}\right)$ , to generate an importance distribution $q$ , for the real-time situation specified by the current belief $b$ and observation $z$ . The planner uses online belief tree search to make risk-aware decisions for the given $b$ and $z$ , leveraging the importance distribution $q$ to bias Monte-Carlo sampling towards critical human behaviors. To train the algorithm, we let the generator and the planner form a min-max game:
66
+
67
+ $$
68
+ \mathop{\min }\limits_{{q \in Q}}\mathop{\max }\limits_{{\pi \in \Pi }}{\widehat{V}}_{\pi }\left( {b, z \mid q}\right) \tag{3}
69
+ $$
70
+
71
+ where $\Pi$ is the space of all policies, and $Q$ is the space of all importance distributions. In this game, the generator must learn to generate $q$ ’s that lead to the lowest planning value, meaning to increase the probability of sampling the most adversarial intentions of exo-agents; the planner must find the best policy with the highest value, conditioned on the generated $q$ . We further learn a critic function, $v = {C}_{\varphi }\left( {b, z, q}\right)$ , another neural network that approximates the value function of the planner, ${C}_{\varphi }\left( {b, z, q}\right) \approx \mathop{\max }\limits_{{\pi \in \Pi }}{\widehat{V}}_{\pi }\left( {b, z \mid q}\right)$ , to assist gradient-descent training of the generator. Figures 1a and 1b illustrate the training architecture of LEADER. In the bottom row, the planner plans robot actions using the generated attentions and feeds the driving experience to a replay buffer. In the top row, we train the critic and the generator using sampled data from the replay buffer. The critic is fitted to the planner's value estimates using supervised learning; the generator is trained to minimize the planner's value, using the critic as a differentiable surrogate objective.
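+
+ As one purely illustrative instantiation of the two learned components, the sketch below implements the attention generator ${G}_{\psi }$ and the critic ${C}_{\varphi }$ as small fully connected networks in PyTorch; the feature encoding of $(b, z)$, the hidden widths, and the per-agent softmax head are our own assumptions, not the architecture described in Appendix B:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class AttentionGenerator(nn.Module):
+     """G_psi: maps a feature encoding of (b, z) to an importance distribution q."""
+     def __init__(self, input_dim, num_agents, num_intentions, hidden=128):
+         super().__init__()
+         self.num_agents, self.num_intentions = num_agents, num_intentions
+         self.net = nn.Sequential(
+             nn.Linear(input_dim, hidden), nn.ReLU(),
+             nn.Linear(hidden, num_agents * num_intentions),
+         )
+
+     def forward(self, x):
+         logits = self.net(x).view(-1, self.num_agents, self.num_intentions)
+         return torch.softmax(logits, dim=-1)  # one distribution over intentions per exo-agent
+
+ class Critic(nn.Module):
+     """C_phi: predicts the planner's value estimate from (b, z) features and attention q."""
+     def __init__(self, input_dim, num_agents, num_intentions, hidden=128):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Linear(input_dim + num_agents * num_intentions, hidden), nn.ReLU(),
+             nn.Linear(hidden, 1),
+         )
+
+     def forward(self, x, q):
+         return self.net(torch.cat([x, q.flatten(1)], dim=-1)).squeeze(-1)
+ ```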
72
+
73
+ ## 4 Risk-Aware Planning using Learned Attention
74
+
75
+ The LEADER planner performs belief tree search to plan robot actions conditioned on the attention over exo-agents' behaviors, solving the inner maximization problem in Eq. (3). A belief tree built by LEADER looks similar to a DESPOT tree (Section 2.1). It collates many sampled trajectories, each corresponding to a top-down path in the tree. The tree branches over all actions and all sampled observations under each belief node, effectively considering all possible policies under the sampled scenarios. The difference, however, is that LEADER biases the tree towards higher-risk trajectories using importance sampling. Figure 2 provides a side-by-side comparison of belief trees in DESPOT and LEADER. Appendix A introduces the basics of importance sampling and a theoretical justification of our minimization objective for learning importance distributions.
76
+
77
+ ![01963f97-2671-7b78-9b3c-f09ee5d34ee5_4_317_214_1169_374_0.jpg](images/01963f97-2671-7b78-9b3c-f09ee5d34ee5_4_317_214_1169_374_0.jpg)
78
+
79
+ Figure 2: A comparison of DESPOT and the LEADER planner. (a) The DESPOT tree samples human intentions from the belief, $b\left( {s}_{0}\right)$ , without considering the criticality of the intentions. Some rare critical events might be missed during sampling. (b) The LEADER tree samples human intentions from the learned importance distribution, $q\left( {s}_{0}\right)$ , which is biased towards adversarial intentions. The tree thus considers more critical events (red) and fewer safe events (green).
80
+
81
+ Concretely, we sample initial states ${s}_{0}$ from the learned importance distribution $q\left( {s}_{0}\right)$ , instead of from the actual belief $b\left( {s}_{0}\right)$ . As a result, the sampling distribution of simulation trajectories is also altered from Eq. (2), becoming:
82
+
83
+ $$
84
+ q\left( {\zeta \mid b, z,\pi }\right) = q\left( {s}_{0}\right) \mathop{\prod }\limits_{{t = 0}}^{{H - 1}}p\left( {{s}_{t + 1},{z}_{t + 1} \mid {s}_{t},{a}_{t + 1}}\right) , \tag{4}
85
+ $$
86
+
87
+ where $\zeta$ is a hypothetical future trajectory. The value of a candidate policy $\pi$ is now evaluated as:
88
+
89
+ $$
90
+ {V}_{\pi }\left( b\right) = {\int }_{\zeta \sim p\left( {\cdot \mid b, z,\pi }\right) }{V}_{\zeta }{d\zeta } = {\int }_{\zeta \sim q\left( {\cdot \mid b, z,\pi }\right) }\frac{p\left( {\zeta \mid b, z,\pi }\right) }{q\left( {\zeta \mid b, z,\pi }\right) }{V}_{\zeta }{d\zeta } \tag{5}
91
+ $$
92
+
93
+ Here, ${V}_{\zeta }$ is the discounted cumulative reward along a trajectory $\zeta$ . Eq. (5) first shows the definition of the value of policy $\pi$ , then applies importance sampling, replacing the sampling distribution $p\left( {\cdot \mid b, z,\pi }\right)$ in Eq. (2) with $q\left( {\cdot \mid b, z,\pi }\right)$ in Eq. (4). It also uses the importance weights $\frac{p\left( {\zeta \mid b, z,\pi }\right) }{q\left( {\zeta \mid b, z,\pi }\right) }$ to keep the value estimate unbiased and ensure the correctness of planning. The value is further approximated using Monte Carlo estimates:
94
+
95
+ $$
96
+ {\widehat{V}}_{\pi }\left( b\right) = \frac{1}{\left| {\Delta }^{\prime }\right| }\mathop{\sum }\limits_{{\zeta \in {\Delta }^{\prime }}}\frac{p\left( {\zeta \mid b, z,\pi }\right) }{q\left( {\zeta \mid b, z,\pi }\right) }{V}_{\zeta } = \frac{1}{\left| {\Delta }^{\prime }\right| }\mathop{\sum }\limits_{{\zeta \in {\Delta }^{\prime }}}\frac{p\left( {s}_{0}\right) }{q\left( {s}_{0}\right) }{V}_{\zeta }, \tag{6}
97
+ $$
98
+
99
+ Eq. (6) starts with approximating the value using a set of sampled trajectories, ${\Delta }^{\prime }$ . It then simplifies the importance weights to $\frac{p\left( {s}_{0}\right) }{q\left( {s}_{0}\right) }$ , as $p\left( {\zeta \mid b, z,\pi }\right)$ and $q\left( {\zeta \mid b, z,\pi }\right)$ only differ in the probability of sampling the initial state ${s}_{0}$ .
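+
+ A minimal sketch of the simplified estimate at the end of Eq. (6), assuming each trajectory's initial state was drawn from $q$ and that, for every trajectory, we recorded the initial-state probability under the belief, the probability under the importance distribution, and the return (all names are illustrative):
+
+ ```python
+ def is_value_estimate(trajectories):
+     """Importance-weighted Monte-Carlo value estimate as in Eq. (6).
+
+     Each element of `trajectories` is (p0, q0, v): the probability of the sampled initial
+     state under the belief, its probability under the importance distribution, and V_zeta.
+     """
+     weighted = [(p0 / q0) * v for p0, q0, v in trajectories]
+     return sum(weighted) / len(weighted)
+
+ # Example: a rare but critical initial state (first entry) was over-sampled by q,
+ # so its weight down-scales the large negative return to keep the estimate unbiased.
+ print(is_value_estimate([(0.1, 0.4, -50.0), (0.6, 0.3, 10.0), (0.3, 0.3, 8.0)]))
+ ```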
100
+
101
+ The LEADER planner is built on top of IS-DESPOT [9], integrating it with learned importance distributions. Following IS-DESPOT, LEADER performs anytime heuristic search, incrementally building a sparse belief tree as more trajectories are sampled. During the search, it maintains for each belief node a set of approximate value estimates, and uses them as tree search heuristics. See Luo et al. [9] for more details of the anytime algorithm.
102
+
103
+ ## 5 Learning Attention over Human Behaviors
104
+
105
+ We train the critic and generator neural networks using driving experience from the planner, stored in the replay buffer. The generator is trained to minimize the planner's value estimates, solving the outer minimization problem in Eq. (3). It uses the critic as a differentiable surrogate objective, which is supervised by the planner's value estimates. Appendix B describes the network architectures of the critic and the generator.
106
+
107
+ Critic Network. The critic network’s parameters $\varphi$ are updated by minimizing the squared error between the critic’s value estimate and the planner’s value estimate $\widehat{V}$ using gradient descent, given a sampled tuple of belief $b$ , observation $z$ , and attention $q$ :
108
+
109
+ $$
110
+ J\left( \varphi \right) = {\mathbb{E}}_{\left( {b, z, q,\widehat{V}}\right) \sim D}\left\lbrack {\left| {C}_{\varphi }\left( b, z, q\right) - \widehat{V}\left( b, z \mid q\right) \right| }^{2}\right\rbrack , \tag{7}
111
+ $$
112
+
113
+ where $D$ is the set of online data stored in the replay buffer.
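+
+ On a sampled mini-batch, Eq. (7) reduces to a squared-error regression; a short sketch (assuming, as a simplification, that the critic takes pre-encoded $(b, z)$ features together with the attention tensor):
+
+ ```python
+ import torch
+
+ def critic_loss(critic, features, q, v_hat):
+     """Mean squared error between the critic's predictions and the planner's value estimates, Eq. (7)."""
+     pred = critic(features, q)            # C_phi(b, z, q) for the whole batch
+     return torch.mean((pred - v_hat) ** 2)
+ ```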
114
+
115
+ Attention Generator Network. The generator network’s parameters $\psi$ are updated by minimizing the planner’s value as estimated by the critic, given a sampled tuple of belief $b$ and observation $z$ :
116
+
117
+ $$
118
+ J\left( \psi \right) = {\mathbb{E}}_{\left( {b, z}\right) \sim D}\left\lbrack {{\mathbb{E}}_{q \sim {G}_{\psi }\left( {b, z}\right) }\left\lbrack {{C}_{\varphi }\left( {b, z, q}\right) }\right\rbrack }\right\rbrack \tag{8}
119
+ $$
120
+
121
+ where $D$ represents online data stored in the replay buffer. This objective is made differentiable using the reparameterization trick [20], enabling gradient descent training via the chain rule:
122
+
123
+ $$
124
+ J\left( \psi \right) = {\mathbb{E}}_{\left( {b, z}\right) \sim D}\left\lbrack {{\mathbb{E}}_{\epsilon \sim \mathcal{N}\left( {0,1}\right) }\left\lbrack {{C}_{\varphi }\left( {b, z,{G}_{\psi }\left( {b, z,\epsilon }\right) }\right) }\right\rbrack }\right\rbrack \tag{9}
125
+ $$
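+
+ A sketch of the reparameterized objective in Eq. (9); purely for illustration, we assume a variant of the generator that outputs the mean and log standard deviation of intention logits, so that $q$ becomes a differentiable function of Gaussian noise:
+
+ ```python
+ import torch
+
+ def generator_loss(generator, critic, features):
+     """Surrogate of Eq. (9): the critic's value prediction, minimized w.r.t. the generator."""
+     mu, log_std = generator(features)                     # parameters of the logit distribution
+     eps = torch.randn_like(mu)                            # eps ~ N(0, 1)
+     q = torch.softmax(mu + eps * log_std.exp(), dim=-1)   # reparameterized sample of attention
+     return critic(features, q).mean()                     # gradient descent lowers the planner value
+ ```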
126
+
127
+ The training procedure of LEADER is as follows. At each time step, the current belief $b$ and observation $z$ are fed into ${G}_{\psi }$ to produce the importance distribution $q$ . Then, the planner takes $b, z$ and $q$ as inputs to perform risk-aware planning, and outputs the optimal action $a$ to be executed in the environment together with its value estimate $\widehat{V}$ . The data point $\left( {b, z, q,\widehat{V}}\right)$ is sent to a fixed-capacity replay buffer. Next, a batch of data is sampled from the replay buffer and used to update ${C}_{\varphi }$ and ${G}_{\psi }$ according to Eqs. (7) and (8). The updated ${G}_{\psi }$ is then used for the next planning step. Training starts from randomly initialized generator and critic networks and an empty replay buffer. In the warm-up phase, the critic is first trained using data collected by LEADER with uniform attention. This provides meaningful objectives for the attention generator to start with. Then, both the critic and the generator are trained with online data collected using the latest attention generator. At execution time, we only deploy the generator and the planner to perform risk-aware planning.
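+
+ The loop below summarizes this procedure in one possible, simplified realization; the interfaces (`encode`, `generator.rsample`, `planner.plan`, and the environment API) are hypothetical stand-ins for the components described above, not the paper's released code:
+
+ ```python
+ import random
+ from collections import deque
+
+ import torch
+
+ def train_leader(env, planner, generator, critic, gen_opt, critic_opt, encode,
+                  steps=10_000, batch_size=64, buffer_size=50_000):
+     """One possible realization of the LEADER training loop."""
+     buffer = deque(maxlen=buffer_size)                   # fixed-capacity replay buffer
+     b, z = env.reset()
+     for _ in range(steps):
+         x = encode(b, z)                                 # feature encoding of (b, z)
+         q = generator.rsample(x)                         # attention for the current situation
+         a, v_hat = planner.plan(b, z, q)                 # risk-aware planning using attention q
+         buffer.append((x.detach(), q.detach(), float(v_hat)))
+         b, z = env.step(a)                               # execute the action in the environment
+         if len(buffer) < batch_size:
+             continue
+         batch = random.sample(buffer, batch_size)
+         xs = torch.stack([x for x, _, _ in batch])
+         qs = torch.stack([q for _, q, _ in batch])
+         vs = torch.tensor([v for _, _, v in batch])
+         c_loss = torch.mean((critic(xs, qs) - vs) ** 2)          # critic regression, Eq. (7)
+         critic_opt.zero_grad(); c_loss.backward(); critic_opt.step()
+         g_loss = critic(xs, generator.rsample(xs)).mean()        # generator objective, Eq. (8)/(9)
+         gen_opt.zero_grad(); g_loss.backward(); gen_opt.step()
+ ```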
128
+
129
+ ## 6 Experiments and Discussions
130
+
131
+ In this section, we evaluate LEADER on autonomous driving in unregulated urban crowds, show the improvements on the real-time driving performance, and analyze the learned attention. The experiment task is to control the acceleration of a robot ego-vehicle, so that it follows a reference path and drives as fast as possible, while avoiding collision with the traffic crowd under the uncertainty of human intentions. A human intention indicates which path one intends to take, according to the lane network of the urban map. Attention is thus defined as a joint importance distribution over the intentions of 20 exo-agents near the robot, assuming conditional independence between agents. See details on the POMDP model in Appendix C.
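+
+ Because the joint attention factorizes over exo-agents under the conditional-independence assumption, sampling a joint intention assignment reduces to sampling each agent's intention independently; a minimal sketch with illustrative data structures:
+
+ ```python
+ import random
+
+ def sample_joint_intentions(q_per_agent):
+     """Sample one intention per exo-agent from a factored importance distribution.
+
+     `q_per_agent` maps an agent id to a dict {intention_path: probability}.
+     """
+     return {agent: random.choices(list(q), weights=list(q.values()))[0]
+             for agent, q in q_per_agent.items()}
+
+ # Example with two nearby exo-agents and their candidate paths.
+ q = {0: {"straight": 0.2, "merge_left": 0.7, "merge_right": 0.1},
+      1: {"straight": 0.5, "turn_right": 0.5}}
+ print(sample_joint_intentions(q))
+ ```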
132
+
133
+ Our baselines include both risk-aware planning and risk-aware RL methods. We first compare with POMDP planners using handcrafted attention. DESPOT-Uniform uses DESPOT with uniform attention over human intentions [21]. DESPOT-TTC computes the criticality of an exo-agent's intentions using the estimated Time-To-Collision (TTC) with the ego-vehicle. The attention score is set proportional to $\frac{1}{{t}_{c}}$ , where ${t}_{c}$ is the TTC if the exo-agent takes the intended path at a constant speed. For RL baselines, we first include a model-free RL method, Ensemble SAC (ESAC) [17], which learns an ensemble of Q-values for risk-aware training. Then, we include a model-based RL method, SMBPO [18], which plans with a learned dynamics model and a risk-aware reward model. Because of the difficulty of real-world data collection and testing for driving in a crowd, we instead used the SUMMIT simulator [21], which simulates high-fidelity massive urban traffic on real-world maps, using a realistic traffic motion model [22]. We evaluate LEADER and all baselines on three different environments: Meskel Square in Addis Ababa, a highway in Singapore, and the Magic Roundabout in Swindon, using 5 different sets of reference paths and crowd initializations for each map. Crowd interactions during driving are perturbed with random noise and are thus unique for each episode. See the driving clips of LEADER and comparisons with baselines in this video: dropbox.com/s/edfurridcegnxti/LEADER.mp4?dl=0.
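+
+ For reference, the DESPOT-TTC heuristic described above (attention proportional to $\frac{1}{{t}_{c}}$) can be sketched as follows; the TTC values themselves are assumed to come from a constant-speed prediction along each intended path:
+
+ ```python
+ def ttc_attention(ttcs, eps=1e-3):
+     """Attention scores proportional to 1 / t_c, normalized into a distribution.
+
+     `ttcs` maps each candidate intention to its estimated time-to-collision in seconds.
+     """
+     raw = {intent: 1.0 / max(t, eps) for intent, t in ttcs.items()}
+     total = sum(raw.values())
+     return {intent: w / total for intent, w in raw.items()}
+
+ # Example: the intention with the smallest TTC receives the largest attention weight.
+ print(ttc_attention({"straight": 8.0, "merge_left": 2.5, "merge_right": 20.0}))
+ ```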
134
+
135
+ ![01963f97-2671-7b78-9b3c-f09ee5d34ee5_6_316_220_1166_351_0.jpg](images/01963f97-2671-7b78-9b3c-f09ee5d34ee5_6_316_220_1166_351_0.jpg)
136
+
137
+ Figure 3: Learning curves of LEADER and existing risk-aware planning and RL algorithms: (a) average cumulative reward, (b) collision rate per 1000 steps, (c) average speed.
138
+
139
+ Table 1: Generalized driving performance of LEADER and existing risk-aware planning and learning algorithms on novel scenes averaged over 200 test runs.
140
+
141
+ | Algorithm | Cumulative Reward | Collision Rate | Travelled Distance | Smoothness Factor |
+ | --- | --- | --- | --- | --- |
+ | DESPOT-Uniform | $-2.344 \pm 0.38$ | $0.011 \pm 0.005$ | $105.02 \pm 12.14$ | $3.57 \pm 0.04$ |
+ | DESPOT-TTC | $-2.55 \pm 0.21$ | $0.015 \pm 0.002$ | $102.01 \pm 9.84$ | $3.45 \pm 0.1$ |
+ | ESAC | $-5.2 \pm 0.73$ | $0.05 \pm 0.004$ | $20.18 \pm 4.54$ | $1.88 \pm 0.2$ |
+ | SMBPO | $-4.73 \pm 0.87$ | $0.037 \pm 0.003$ | $42.2 \pm 9.9$ | $2.08 \pm 0.1$ |
+ | LEADER (ours) | $-2.19 \pm 0.42$ | $0.007 \pm 0.001$ | $111.48 \pm 15.94$ | $4.76 \pm 0.08$ |
142
+
143
+ ### 6.1 Performance Comparison
144
+
145
+ Learning Curves. In Figure 3, we show the learning curves of LEADER and the baseline algorithms in terms of the average cumulative reward, the collision rate, and the average driving speed. LEADER starts to outperform the strongest RL baseline from the 8500th iteration, which corresponds to 12 hours of training using 12 real-time planner actors. It starts to outperform the strongest planning baseline from the 12500th iteration, corresponding to 19 hours of training. At the end of training, LEADER achieves the highest cumulative reward, the lowest collision rate, and the highest driving speed among all algorithms.
146
+
147
+ Generalization. To evaluate the generalization of LEADER, we ran it with 5 unseen reference paths and crowd initializations for each map, with randomness in crowd interactions. Table 1 shows detailed driving performance of the considered algorithms averaged over 200 test runs, including the cumulative reward, collision rate, travelled distance, and smoothness factor. The collision rate measures the average number of collisions per 1000 time steps. The smoothness factor is the inverse of the number of decelerations, $\frac{1}{{N}_{dec}}$ . As shown, LEADER outperforms other methods in all metrics, consistent with the learning curves. It drives much more safely and efficiently than the RL methods, which had a hard time handling the highly dynamic crowd. It also improves driving safety, efficiency and smoothness over the planning baselines, which applied sub-optimal attention.
148
+
149
+ ### 6.2 The Learned Attention
150
+
151
+ To provide a qualitative analysis of what LEADER has learned, we further provide 2D visualizations of the learned attention in Figure 4, with three scenes from the Singapore Highway, Meskel Square, and Magic Roundabout, respectively. In each scene, we highlight a representative exo-agent, show its set of intention paths, and visualize the learned attention over the paths using color coding. For other exo-agents, we only show the most critical intentions with the highest attention scores.
152
+
153
+ In Figure 4 (a), the highlighted exo-agent in blue drives in parallel with the ego-vehicle. It has several possible intentions, including continuing straight and merging into neighbouring lanes. The intention of merging right is more likely, as the agent is closer to the right lane.
154
+
155
+ ![01963f97-2671-7b78-9b3c-f09ee5d34ee5_7_319_216_1166_369_0.jpg](images/01963f97-2671-7b78-9b3c-f09ee5d34ee5_7_319_216_1166_369_0.jpg)
156
+
157
+ Figure 4: Visualization of the learned attention in: (a) the Singapore highway, (b) Meskel Square, (c) the Magic Roundabout. We highlight one exo-agent in blue in each scene. The learned attention over its intentions is color-coded green, yellow, red, purple, sorted from low attention to high attention. For other exo-agents, we only show the most-attended intention with dotted lines. See more examples in this video: www.dropbox.com/s/edfurridcegnxti/LEADER.mp4?dl=0.
158
+
159
+ However, the merging-left intention is more critical, as its path will interfere with the ego-vehicle's. LEADER learns to put more attention on the more critical possibility. As a result, the planner decides to slow down the vehicle and prepare for potential hazards. In Figure 4 (b), the blue agent can either proceed left or turn right. Both intentions are equally likely. While the turning-right intention gets the agent out of the ego-vehicle's way, proceeding left causes the agent to continue interacting with the ego-vehicle. The generator thus assigns more attention to the latter. The planner subsequently decides to stop the ego-vehicle, since the blue agent in front may need to wait for its own path to be cleared. In Figure 4 (c), all intention paths of the blue agent intersect with the ego-vehicle's. The generator learns to attend to the path closest to the ego-vehicle, which is more hazardous. Consequently, the planner stops the ego-vehicle to prevent collision.
160
+
161
+ ## 7 Summary, Limitations and Future Work
162
+
163
+ In this paper, we introduced LEADER, which integrates learning and planning to drive in crowded environments. LEADER forms a min-max game between a learning component that generates attention over potential human behaviors, and a planning component that computes risk-aware policies conditioned on the learned attention. By solving the min-max game, LEADER learns to attend to the most adversarial human behaviors and perform risk-aware planning. Our results show that LEADER helps to improve the real-time performance of driving in crowded urban traffic, effectively lowering the collision rate while maintaining driving efficiency and smoothness, compared to both risk-aware planning and learning approaches.
164
+
165
+ Limitations of this work are related to the assumptions we make, model errors, and the data required for training. This work makes a few assumptions. First, we assume there is an available map of the urban environment, providing lane-level information. However, we believe this requirement can be met for most major cities. Second, in our POMDP model, we focused on modeling the uncertainty of human intentions, and ignored perception uncertainty such as significant observation noise and occluded participants. These aspects will be addressed in future work through building more comprehensive models. Third, LEADER will also be affected by model errors, as the approach relies on the planner's value estimates to provide learning signals. But since our approach emphasizes the most adversarial future, i.e., relies on conservative predictions, it is more robust to model errors than typical planning approaches. Lastly, although we have improved tremendously over the sample efficiency of RL algorithms, the algorithm still requires hours of online training through real-time driving. A promising solution is to "warm up" the critic and generator networks using offline real-world driving datasets, such as the Argoverse Dataset [23] and the Waymo Open Dataset [24], then perform further online training.
166
+
167
+ References
168
+
169
+ [1] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial intelligence, 101(1-2):99-134, 1998.
170
+
171
+ [2] A. Somani, N. Ye, D. Hsu, and W. S. Lee. Despot: Online pomdp planning with regularization. Advances in neural information processing systems, 26, 2013.
172
+
173
+ [3] D. Silver and J. Veness. Monte-carlo planning in large pomdps. In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems, volume 23. Curran Associates, Inc., 2010. URL https://proceedings.neurips.cc/paper/2010/file/edfbe1afcf9246bb0d40eb4d8027d90f-Paper.pdf.
174
+
175
+ [4] R. Hussain and S. Zeadally. Autonomous cars: Research results, issues, and future challenges. IEEE Communications Surveys & Tutorials, 21(2):1275-1313, 2018.
176
+
177
+ [5] P. Cai, Y. Luo, D. Hsu, and W. S. Lee. Hyp-despot: A hybrid parallel algorithm for online planning under uncertainty. The International Journal of Robotics Research, 40(2-3):558-573, 2021.
178
+
179
+ [6] A. Goldhoorn, A. Garrell, R. Alquézar, and A. Sanfeliu. Continuous real time pomcp to find-and-follow people by a humanoid service robot. In 2014 IEEE-RAS International Conference on Humanoid Robots, pages 741-747. IEEE, 2014.
180
+
181
+ [7] J. K. Li, D. Hsu, and W. S. Lee. Act to see and see to act: Pomdp planning for objects search in clutter. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5701-5707. IEEE, 2016.
182
+
183
+ [8] Y. Xiao, S. Katt, A. ten Pas, S. Chen, and C. Amato. Online planning for target object search in clutter under partial observability. In 2019 International Conference on Robotics and Automation (ICRA), pages 8241-8247. IEEE, 2019.
184
+
185
+ [9] Y. Luo, H. Bai, D. Hsu, and W. S. Lee. Importance sampling for online planning under uncertainty. The International Journal of Robotics Research, 38(2-3):162-181, 2019.
186
+
187
+ [10] A. R. Cassandra, L. P. Kaelbling, and M. L. Littman. Acting optimally in partially observable stochastic domains. In Aaai, volume 94, pages 1023-1028, 1994.
188
+
189
+ [11] T. Nyberg, C. Pek, L. Dal Col, C. Norén, and J. Tumova. Risk-aware motion planning for autonomous vehicles with safety specifications. In IEEE Intelligent Vehicles Symposium, 2021.
190
+
191
+ [12] B. Gilhuly, A. Sadeghi, P. Yedmellat, K. Rezaee, and S. L. Smith. Looking for trouble: Informative planning for safe trajectories with occlusions. IEEE International Conference on Robotics and Automation (ICRA), 2022.
192
+
193
+ [13] X. Huang, A. Jasour, M. Deyo, A. Hofmann, and B. C. Williams. Hybrid risk-aware conditional planning with applications in autonomous vehicles. In 2018 IEEE Conference on Decision and Control (CDC), pages 3608-3614. IEEE, 2018.
194
+
195
+ [14] S.-K. Kim, R. Thakker, and A.-A. Agha-Mohammadi. Bi-directional value learning for risk-aware planning under uncertainty. IEEE Robotics and Automation Letters, 4(3):2493-2500, 2019.
196
+
197
+ [15] D. Kamran, C. F. Lopez, M. Lauer, and C. Stiller. Risk-aware high-level decisions for automated driving at occluded intersections with reinforcement learning. In 2020 IEEE Intelligent Vehicles Symposium (IV), pages 1205-1212. IEEE, 2020.
198
+
199
+ [16] B. Mirchevska, C. Pek, M. Werling, M. Althoff, and J. Boedecker. High-level decision making for safe and reasonable autonomous lane changing using reinforcement learning. In 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pages 2156-2162. IEEE, 2018.
200
+
201
+ [17] B. Eysenbach, S. Gu, J. Ibarz, and S. Levine. Leave no trace: Learning to reset for safe and autonomous reinforcement learning. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=S1vu0-bCW.
202
+
203
+ [18] G. Thomas, Y. Luo, and T. Ma. Safe reinforcement learning by imagining the near future. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 13859-13869. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/73b277c11266681122132d024f53a75b-Paper.pdf.
204
+
205
+ [19] F. Berkenkamp, M. Turchetta, A. Schoellig, and A. Krause. Safe model-based reinforcement learning with stability guarantees. Advances in neural information processing systems, 30, 2017.
206
+
207
+ [20] D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
208
+
209
+ [21] P. Cai, Y. Lee, Y. Luo, and D. Hsu. Summit: A simulator for urban driving in massive mixed traffic. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 4023-4029. IEEE, 2020.
210
+
211
+ [22] Y. Luo, P. Cai, Y. Lee, and D. Hsu. Gamma: A general agent motion model for autonomous driving. IEEE Robotics and Automation Letters, 2022.
212
+
213
+ [23] B. Wilson, W. Qi, T. Agarwal, J. Lambert, J. Singh, S. Khandelwal, B. Pan, R. Kumar, A. Hartnett, J. Kaesemodel Pontes, D. Ramanan, P. Carr, and J. Hays. Argoverse 2: Next generation datasets for self-driving perception and forecasting. In J. Vanschoren and S. Yeung, editors, Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1, 2021. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/4734ba6f3de83d861c3176a6273cac6d-Paper-round2.pdf.
214
+
215
+ [24] P. Sun, H. Kretzschmar, X. Dotiwalla, A. Chouard, V. Patnaik, P. Tsui, J. Guo, Y. Zhou, Y. Chai, B. Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2446-2454, 2020.
216
+
217
papers/CoRL/CoRL 2022/CoRL 2022 Conference/DE8rdNuGj_7/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,180 @@
1
+ § LEADER: LEARNING ATTENTION OVER DRIVING BEHAVIORS
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: Uncertainty on human behaviors poses a significant challenge to autonomous driving in crowded urban environments. The partially observable Markov decision processes (POMDPs) offer a principled framework for planning under uncertainty, often leveraging Monte Carlo sampling to achieve online performance for complex tasks. However, sampling also raises safety concerns by potentially missing critical events. To address this, we propose a new algorithm, LEarning Attention over Driving bEhavioRs (LEADER), that learns to attend to critical human behaviors during planning. LEADER learns a neural network generator to provide attention over human behaviors in real-time situations. It integrates the attention into a belief-space planner, using importance sampling to bias reasoning towards critical events. To train the algorithm, we let the attention generator and the planner form a min-max game. By solving the min-max game, LEADER learns to perform risk-aware planning without human labeling. ${}^{1}$
12
+
13
+ § 1 INTRODUCTION
14
+
15
+ Robots operating in public spaces often contend with a challenging crowded environment. A representative is autonomous driving in busy urban traffic, where a robot vehicle must interact with many human traffic participants. A significant challenge is posed by the vast amount of uncertainty in human behaviors, e.g., on their intentions, driving styles, etc.. The partially observable Markov decision processes (POMDPs) [1] offer a principled framework for planning under uncertainty. However, optimal POMDP planning is computationally expensive. To achieve real-time performance for complex problems, practical POMDP planners [2, 3] often leverage Monte-Carlo (MC) sampling to make approximate decisions, i.e., sampling a subset of representative future scenarios, then condition decision-making on the sampled future. They have shown success in various robotics applications, including autonomous driving [4], navigation [5, 6], and manipulation [7, 8].
16
+
17
+ Sampling-based POMDP planning, however, may compromise safety. It is difficult to sample future events with low probabilities, and some may lead to disastrous outcomes. In autonomous driving, rare, but critical events often arise from adversarial human behaviors, such as recklessly overtaking the ego-vehicle or making illegal U-turns. Failing to consider them would lead to hazards. Earlier work [9] indicates that re-weighting future events using importance sampling leads to improved performance. The approach requires an importance distribution for sampling events. In autonomous driving, an importance distribution re-weights the different intentions of human participants, e.g., routes they intend to take. A higher weight means an increased probability of sampling an intended route, thus examining its consequence more carefully during planning. However, it is difficult to handcraft the importance distribution, due to the complex real-time situations in crowded environments [9].
18
+
19
+ We propose a new algorithm, LEADER, which learns importance distributions both from and for sampling-based POMDP planning. The algorithm uses a neural network attention generator to obtain importance distributions over human behavioral intentions for real-time situations. Next, an online POMDP planner consumes the importance distribution to make risk-aware decisions, applying importance sampling to bias reasoning towards critical human behaviors. To train the algorithm, we treat the attention generator and the planner as opponents in a min-max game. The attention generator seeks to minimize the planner's value, or the expected cumulative return. This maps to highlight the most adversarial human behaviors by learning from experience. On the other hand, the planner seeks to maximize the value conditioned on the learned attention, which maps to find the best conditional policy using look-ahead search. By solving the min-max game, the algorithm learns to perform risk-aware planning, without human labeling. In our experiments, we evaluate the performance of LEADER for autonomous driving in dense urban traffic. Results show that LEADER significantly improves driving performance in terms of safety, efficiency and smoothness, compared to both risk-aware planning and risk-aware reinforcement learning.
20
+
21
+ ${}^{1}$ Code will be released upon acceptance of the paper.
22
+
23
+ < g r a p h i c s >
24
+
25
+ Figure 1: Overview of LEADER. (a) LEADER contains a learning component (red) and a planning component (blue). The learning components includes: an attention generator ${G}_{\psi }$ that tries to generate attention $q$ over human behaviors, based on the current belief $b$ and observation $z$ from the environment; and a critic ${C}_{\varphi }$ that approximates the planner’s value estimate, $\widehat{V}$ , based on $b,z$ and the generated attention, $q$ . The planning component performs risk-aware planning using the learned attention $q$ . It decides an action $a$ to be executed in the environment and collects experience data. (b) Attention is defined as an importance distribution over human behavioral intentions. The upper box shows the probability of different intentions of the highlighted exo-agent in green, yellow and red, as well as how the attention generator maps the natural-occurrence probabilities to importance probabilities, by highlighting the most adversarial intention (red). (c) We train LEADER using three simulated real-life urban environments: Meskel Square in Addis Ababa, Ethiopia, Magic Roundabout in Swindon, UK, and Highway in Singapore.
26
+
27
+ § 2 BACKGROUND AND RELATED WORK
28
+
29
+ § 2.1 POMDP PLANNING AND MONTE CARLO SAMPLING
30
+
31
+ The Partially Observable Markov Decision Process (POMDP) [1] models the interaction between the robot and an environment as a discrete-time stochastic process. A POMDP is written as a tuple $\left\langle {S,A,Z,T,R,O,\gamma ,{b}_{0}}\right\rangle$ with $S$ representing the space of all possible states of the world, $A$ denoting the space of all possible actions the robot can take, and $Z$ as the space of observations it can receive. Function $T\left( {s,a,{s}^{\prime }}\right) = p\left( {{s}^{\prime } \mid s,a}\right)$ represents the probability of the state transition from $s \in S$ to ${s}^{\prime } \in S$ by taking action $a \in A$ . The function $R\left( {s,a}\right)$ defines a real-valued reward specifying the desirability of taking action $a \in A$ at state $s \in S$ . The observation function $O\left( {{s}^{\prime },a,z}\right) = p\left( {z \mid {s}^{\prime },a}\right)$ specifies the probability of observing $z \in Z$ by taking action $a \in A$ to reach ${s}^{\prime } \in S$ . The discount factor $\gamma \in \lbrack 0,1)$ specifies the rate at which rewards depreciate over time. Because of the robot's perception limitations, the world's full state is unknown to the robot, but can be inferred in the form of beliefs, or probability distributions over $S$ . The robot starts with an initial belief ${b}_{0}$ at $t = 0$ , and updates it throughout an interaction trajectory using Bayes' rule [10], according to the actions taken
32
+
33
+ and observations received. POMDP planning searches for a closed-loop policy, ${\pi }^{ * } : B \rightarrow A$ , prescribing an action for any belief in the belief space $B$ , which maximizes the policy value, ${V}_{\pi }\left( {b}_{0}\right) =$ $\mathbb{E}\left\lbrack {\mathop{\sum }\limits_{{t = 0}}^{\infty }{\gamma }^{t}R\left( {{s}_{t},\pi \left( {b}_{t}\right) }\right) \mid {b}_{0},\pi }\right\rbrack$ , i.e., the expected cumulative reward achieved in the current and future time steps, $t \geq 0$ , if the robot chooses actions according to policy $\pi$ and the updated belief ${b}_{t}$ , from the initial belief ${b}_{0}$ onwards. Discounting using $\gamma \in \lbrack 0,1)$ keeps the value bounded.
34
+
35
+ Online POMDP planning often performs look-ahead search, constructing a belief tree starting from the current belief and branching with future actions and observations. Enumerating all possible futures, however, is often computationally intractable. Practical algorithms like DESPOT [2] leverage Monte-Carlo sampling and heuristic search to break the computational difficulty. DESPOT samples initial states and future trajectories using the POMDP simulative model. Denote a trajectory as $\zeta = \left( {{s}_{0},{a}_{1},{s}_{1},{z}_{1},{a}_{2},{s}_{2},{z}_{2},\ldots }\right)$ . The initial state ${s}_{0}$ is sampled from the current belief $b$ . Given any subsequent state ${s}_{t}$ and robot action ${a}_{t}$ , the next state ${s}_{t + 1}$ and the observation ${z}_{t + 1}$ are sampled with a probability of $p\left( {{s}_{t + 1},{z}_{t + 1} \mid {s}_{t},{a}_{t + 1}}\right) = O\left( {{s}_{t + 1},{a}_{t},{z}_{t + 1}}\right) T\left( {{s}_{t},{a}_{t},{s}_{t + 1}}\right)$ . A DESPOT tree collates a set of sampled trajectories to approximately represent the future. Each node in the tree contains a set of sampled future states, forming an approximate future belief. The DESPOT tree branches with all possible actions and then sampled observations under each visited belief, effectively considering all candidate policies under the sampled scenarios. Figure 2(a) shows an example DESPOT tree.
36
+
37
+ DESPOT evaluates the value of a policy using Monte-Carlo estimation:
38
+
39
+ $$
40
+ {V}_{\pi }\left( b\right) = {\int }_{\zeta \sim p\left( {\cdot \mid b,\pi }\right) }{V}_{\zeta }{d\zeta } \approx \mathop{\sum }\limits_{{\zeta \in \Delta }}p\left( {\zeta \mid b,\pi }\right) {V}_{\zeta } \tag{1}
41
+ $$
42
+
43
+ where $\Delta$ is a set of trajectories sampled by applying $\pi$ . Also,
44
+
45
+ $$
46
+ p\left( {\zeta \mid b,\pi }\right) = b\left( {s}_{0}\right) \mathop{\prod }\limits_{{t = 0}}^{{H - 1}}p\left( {{s}_{t + 1},{z}_{t + 1} \mid {s}_{t},{a}_{t + 1}}\right) \tag{2}
47
+ $$
48
+
49
+ is the probability of a trajectory $\zeta$ being sampled, ${V}_{\zeta } = \mathop{\sum }\limits_{{t = 0}}^{{H - 1}}{\gamma }^{t}R\left( {{s}_{t},{a}_{t + 1}}\right)$ is the cumulative reward along $\zeta$ , and $H$ is maximum look-ahead depth, or the planning horizon. By incrementally building a belief tree using sampled trajectories, DESPOT searches for the policy that provides the best value estimate, and outputs the optimal action for $b$ when exhausting the given planning time.
50
+
51
+ § 2.2 RISK-AWARE PLANNING AND LEARNING APPROACHES
52
+
53
+ There are different techniques for risk-aware planning under uncertainty. Nyberg et al. [11] and Gilhuly et al. [12] proposed two measures for estimating safety risks along driving trajectories, with a focus on open-loop trajectory optimization. Huang et al. [13] modeled risk-aware planning as a chance-constrained POMDP to compute closed-loop policies that provide a low chance of violating safety constraints. Kim et al. [14] leveraged bi-directional belief-space solvers. It bridges forward belief tree search with heuristics produced by an offline backward solver. Both methods improved the performance of POMDP planning in safety-critical domains, but at the cost of an expensive offline planning stage. Thus, it can hardly apply to large-scale problems. Most relevant to our method, Luo et al. [9] offered IS-DESPOT, which improves the performance of online POMDP planning by leveraging importance sampling (IS). It biases MC sampling towards critical scenarios according to an importance distribution provided by human experts, then computes a risk-aware policy under the re-weighted scenarios. Manually constructing the importance distribution, however, is difficult for complex problems such as driving in an urban crowd. In this paper, we propose a principled approach to learn importance distributions from experience and adapt them with real-time situations.
54
+
55
+ LEADER is also loosely connected to risk-aware reinforcement learning (RL). Kamran et al. [15] proposed a risk-aware Q-learning algorithm by punishing risky situations instead of only collision failures. Mirchevska et al. [16] proposed to combine DQN with a rule-based safety checker, masking out infeasible actions in the output layer. Eysenbach et al. [17] offered Ensemble-SAC (ESAC) for risk-aware RL, by additionally learning an ensemble of reset policies to assist the robot avoid irreversible states during training. However, because of low sample efficiency, the above model-free approaches were limited to small-scale tasks like lane changing in regulated highway traffic or controlling articulated robots for simple simulated tasks. Some prior work improved the risk-awareness of model-based RL that plans with learned models. Thomas et al. [18] proposed SMBPO, which learns a sufficiently-large terminal cost for failure states. Berkenkamp et al. [19] focused on ensuring the stability of control. The model-based methods provided better sample efficiency. However, it is still difficult to learn a model for large-scale, safety-critical problems like driving in an urban crowd.
56
+
57
+ § 3 OVERVIEW
58
+
59
+ LEADER learns to attend to the most critical human behaviors for risk-aware planning in urban driving. We define attention over the behavioral intentions of human participants. Assume there are $N$ exo-agents or traffic participants near the robot. Each of them may undertake a finite set of intentions, or future routes such as keeping straight, turning left, merging into the right lane, etc. The actual intention of an exo-agent is not directly observable. For each time step, LEADER maintains both a belief $b$ and an importance distribution $q$ over the intention sets of the $N$ exo-agents. The belief $b$ specifies the natural occurrence probability of exo-agents’ intentions. It is inferred from the interaction history. The importance distribution $q$ specifies the attention over exo-agents’ intentions, determining the actual probability of sampling them during planning. We propose to learn the importance distribution or the attention mechanism from the experience of an online POMDP planner. We will use "importance distribution" and "attention" interchangeably in the remaining.
60
+
61
+ Particularly, the LEADER algorithm has two main components: a learner that generates attention for real-time situations, and a planner that consumes the attention to perform conditional planning. The generator uses a neural network, $q = {G}_{\psi }\left( {b,z}\right)$ , to generate an importance distribution $q$ , for the real-time situation specified by the current belief $b$ and observation $z$ . The planner uses online belief tree search to make risk-aware decisions for the given $b$ and $z$ , leveraging the importance distribution $q$ to bias Monte-Carlo sampling towards critical human behaviors. To train the algorithm, we let the generator and the planner form a min-max game:
62
+
63
+ $$
64
+ \mathop{\min }\limits_{{q \in Q}}\mathop{\max }\limits_{{\pi \in \Pi }}{\widehat{V}}_{\pi }\left( {b,z \mid q}\right) \tag{3}
65
+ $$
66
+
67
+ where $\Pi$ is the space of all policies, and $Q$ is the space of all importance distributions. In this game, the generator must learn to generate $q$ ’s that lead to the lowest planning value, meaning to increase the probability of sampling the most adversarial intentions of exo-agents; the planner must find the best policy with the highest value, conditioned on the generated $q$ . We further learn a critic function, $v = {C}_{\varphi }\left( {b,z,q}\right)$ , another neural network that approximates the value function of the planner, ${C}_{\varphi }\left( {b,z,q}\right) \approx \mathop{\max }\limits_{{\pi \in \Pi }}{\widehat{V}}_{\pi }\left( {b,z \mid q}\right)$ , to assist gradient-descent training of the generator. Figures 1a and 1b illustrate the training architecture of LEADER. In the bottom row, the planner plans robot actions using the generated attentions and feeds the driving experience to a replay buffer. In the top row, we train the critic and the generator using sampled data from the replay buffer. The critic is fitted to the planner's value estimates using supervised learning; the generator is trained to minimize the planner's value, using the critic as a differentiable surrogate objective.
68
+
69
+ § 4 RISK-AWARE PLANNING USING LEARNED ATTENTION
70
+
71
+ The LEADER planner performs belief tree search to plan robot actions conditioned on the attention over exo-agents' behaviors, solving the inner maximization problem in Eq. (3). A belief tree built by LEADER looks similar to a DESPOT tree (Section 2.1). It collates many sampled trajectories, each corresponding to a top-down path in the tree. The tree branches over all actions and all sampled observations under each belief node, effectively considering all possible policies under the sampled scenarios. The difference, however, is that LEADER biases the tree towards higher-risk trajectories using importance sampling. Figure 2 provides a side-by-side comparison of belief trees in DESPOT and LEADER. Appendix A introduces the basics of importance sampling and a theoretical justification of our minimization objective for learning importance distributions.
72
+
73
+ < g r a p h i c s >
74
+
75
+ Figure 2: A comparison of DESPOT and the LEADER planner. (a) The DESPOT tree samples human intentions from the belief, $b\left( {s}_{0}\right)$ , without considering the criticality of the intentions. Some rare critical events might be missed during sampling. (b) The LEADER tree samples human intentions from the learned importance distribution, $q\left( {s}_{0}\right)$ , which is biased towards adversarial intentions. The tree thus considers more critical events (red), and less safe events (green).
76
+
77
+ Concretely, we sample initial states ${s}_{0}$ from the learned importance distribution $q\left( {s}_{0}\right)$ , instead of from the actual belief $b\left( {s}_{0}\right)$ . As a result, the sampling distribution of simulation trajectories is also altered from Eq. (2), becoming:
78
+
79
+ $$
80
+ q\left( {\zeta \mid b,z,\pi }\right) = q\left( {s}_{0}\right) \mathop{\prod }\limits_{{t = 0}}^{{D - 1}}p\left( {{s}_{t + 1},{z}_{t + 1} \mid {s}_{t},{a}_{t + 1}}\right) , \tag{4}
81
+ $$
82
+
83
+ where $\zeta$ is a hypothetical future trajectory. The value of a candidate policy $\pi$ is now evaluated as:
84
+
85
+ $$
86
+ {V}_{\pi }\left( b\right) = {\int }_{\zeta \sim p\left( {\cdot \mid b,z,\pi }\right) }{V}_{\zeta }{d\zeta } = {\int }_{\zeta \sim q\left( {\cdot \mid b,z,\pi }\right) }\frac{p\left( {\zeta \mid b,z,\pi }\right) }{q\left( {\zeta \mid b,z,\pi }\right) }{V}_{\zeta }{d\zeta } \tag{5}
87
+ $$
88
+
89
+ Here, ${V}_{\zeta }$ is the discounted cumulative reward along a trajectory $\zeta$ . Eq. (5) first shows the definition of the value of policy $\pi$ , then applies importance sampling, replacing the sampling distribution $p\left( {\cdot \mid b,z,\pi }\right)$ in Eq. (2) with $q\left( {\cdot \mid b,z,\pi }\right)$ in Eq. (4). It also uses the importance weights $\frac{p\left( {\zeta \mid b,z,\pi }\right) }{q\left( {\zeta \mid b,z,\pi }\right) }$ to unbias the value estimation and ensure the correctness of planning. The value is further approximated using Monte Carlo estimates:
90
+
91
+ $$
92
+ {\widehat{V}}_{\pi }\left( b\right) = \frac{1}{\left| {\Delta }^{\prime }\right| }\mathop{\sum }\limits_{{\zeta \in {\Delta }^{\prime }}}\frac{p\left( {\zeta \mid b,z,\pi }\right) }{q\left( {\zeta \mid b,z,\pi }\right) }{V}_{\zeta } = \frac{1}{\left| {\Delta }^{\prime }\right| }\mathop{\sum }\limits_{{\zeta \in {\Delta }^{\prime }}}\frac{p\left( {s}_{0}\right) }{q\left( {s}_{0}\right) }{V}_{\zeta }, \tag{6}
93
+ $$
94
+
95
+ Eq. (6) starts with approximating the value using a set of sampled trajectories, ${\Delta }^{\prime }$ . It then simplifies the importance weights to $\frac{p\left( {s}_{0}\right) }{q\left( {s}_{0}\right) }$ , as $p\left( {\zeta \mid b,z,\pi }\right)$ and $q\left( {\zeta \mid b,z,\pi }\right)$ only differ in the probability of sampling the initial state ${s}_{0}$ .
96
+
97
+ The LEADER planner is built on top of IS-DESPOT [9], integrating it with learned importance distributions. Following IS-DESPOT, LEADER performs anytime heuristics search, incrementally building a sparse belief tree when sampling more trajectories. During the search, it maintains for each belief node a set of approximate value estimates, and uses them as tree search heuristics. See Luo et al. [9] for more details of the anytime algorithm.
98
+
99
+ § 5 LEARNING ATTENTION OVER HUMAN BEHAVIORS
100
+
101
+ We train the critic and generator neural networks using driving experience from the planner, stored in the replay buffer. The generator is trained to minimize the planner's value estimates, solving the outer minimization problem in Eq. (3). It uses the critic as a differentiable surrogate objective, which is supervised by the planner's value estimates. Appendix B describes the network architectures of the critic and the generator.
102
+
103
+ Critic Network. The critic network’s parameters $\varphi$ are updated by minimizing the squared error between the critic’s value estimate and the planner’s value estimate $\widehat{V}$ using gradient descent, given a sampled tuple of belief $b$ , observation $z$ , and attention $q$ :
104
+
105
+ $$
106
+ J\left( \varphi \right) = {\mathbb{E}}_{\left( {b,z,q,\widehat{V}}\right) \sim D}\left\lbrack {\left| {C}_{\varphi }\left( b,z,q\right) - \widehat{V}\left( b,z \mid q\right) \right| }^{2}\right\rbrack , \tag{7}
107
+ $$
108
+
109
+ where $D$ is the set of online data stored in the replay buffer.
110
+
111
+ Attention Generator Network. The generator network’s parameters $\psi$ are updated by minimizing the planner’s value as estimated by the critic, given a sampled tuple of belief $b$ and observation $z$ :
112
+
113
+ $$
114
+ J\left( \psi \right) = {\mathbb{E}}_{\left( {b,z}\right) \sim D}\left\lbrack {{\mathbb{E}}_{q \sim {G}_{\psi }\left( {b,z}\right) }\left\lbrack {{C}_{\varphi }\left( {b,z,q}\right) }\right\rbrack }\right\rbrack \tag{8}
115
+ $$
116
+
117
+ where $D$ represents online data stored in the replay buffer. This objective is made differentiable using the reparameterization trick [20], enabling gradient descent training via the chain rule:
118
+
119
+ $$
120
+ J\left( \psi \right) = {\mathbb{E}}_{\left( {b,z}\right) \sim D}\left\lbrack {{\mathbb{E}}_{\epsilon \sim \mathcal{N}\left( {0,1}\right) }\left\lbrack {{C}_{\varphi }\left( {b,z,{G}_{\psi }\left( {b,z,\epsilon }\right) }\right) }\right\rbrack }\right\rbrack \tag{9}
121
+ $$
122
+
123
+ The training procedure of LEADER is as follows. At each time step, the current belief $b$ and observation $z$ are fed into ${G}_{\psi }$ to produce the importance distribution $q$ . Then, the planner takes $b, z$ and $q$ as inputs to perform risk-aware planning, and outputs the optimal action $a$ to be executed in the environment, together with its value estimate $\widehat{V}$ . The data point $\left( {b,z,q,\widehat{V}}\right)$ is sent to a fixed-capacity replay buffer. Next, a batch of data is sampled from the replay buffer and used to update ${C}_{\varphi }$ and ${G}_{\psi }$ according to Eq. (7) and (8). The updated ${G}_{\psi }$ is then used for the next planning step. Training starts from randomly initialized generator and critic networks and an empty replay buffer. In the warm-up phase, the critic is first trained using data collected by LEADER with uniform attention. This provides meaningful objectives for the attention generator to start with. Then, both the critic and the generator are trained with online data collected using the latest attention generator. At execution time, we only deploy the generator and the planner to perform risk-aware planning.
124
+
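As a rough illustration of one training step, the following hedged PyTorch sketch updates the critic with Eq. (7) and the generator with the reparameterized objective of Eq. (9). The module interfaces, the `noise_dim` attribute, and the batch layout are assumptions for illustration only; the actual network architectures are described in Appendix B.

```python
import torch
import torch.nn.functional as F

def training_step(critic, generator, batch, critic_opt, gen_opt):
    """One update of C_phi (Eq. 7) and G_psi (Eqs. 8-9) on a replay-buffer batch."""
    b, z, q, v_hat = batch                         # belief, observation, attention, planner value

    # Critic update: regress the planner's value estimate (Eq. 7).
    critic_loss = F.mse_loss(critic(b, z, q).squeeze(-1), v_hat)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Generator update: minimize the critic's value via the reparameterization
    # trick (Eq. 9); gradients flow through the critic into the generator.
    eps = torch.randn(b.shape[0], generator.noise_dim)   # eps ~ N(0, I); noise_dim is an assumed attribute
    gen_loss = critic(b, z, generator(b, z, eps)).mean()
    gen_opt.zero_grad()
    gen_loss.backward()          # the critic's own gradients are cleared at its next update
    gen_opt.step()
    return critic_loss.item(), gen_loss.item()
```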
125
+ § 6 EXPERIMENTS AND DISCUSSIONS
126
+
127
+ In this section, we evaluate LEADER on autonomous driving in unregulated urban crowds, show the improvements in real-time driving performance, and analyze the learned attention. The experiment task is to control the acceleration of a robot ego-vehicle so that it follows a reference path and drives as fast as possible, while avoiding collisions with the traffic crowd under the uncertainty of human intentions. A human intention indicates which path a person intends to take, according to the lane network of the urban map. Attention is thus defined as a joint importance distribution over the intentions of 20 exo-agents near the robot, assuming conditional independence between agents. See details on the POMDP model in Appendix C.
128
+
129
+ Our baselines include both risk-aware planning and risk-aware RL methods. We first compare with POMDP planners using handcrafted attention. DESPOT-Uniform uses DESPOT with uniform attention over human intentions [21]. DESPOT-TTC computes the criticality of an exo-agent's intentions using the estimated Time-To-Collision (TTC) with the ego-vehicle. The attention score is set proportional to $\frac{1}{{t}_{c}}$ , where ${t}_{c}$ is the TTC if the exo-agent takes the intended path at a constant speed. For RL baselines, we first include a model-free RL method, Ensemble SAC (ESAC) [17], which learns an ensemble of Q-values for risk-aware training. Then, we include a model-based RL method, SMBPO [18], which plans with a learned dynamics model and a risk-aware reward model. Because of the difficulty of real-world data collection and testing for driving in a crowd, we instead used the SUMMIT simulator [21], which simulates high-fidelity massive urban traffic on real-world maps using a realistic traffic motion model [22]. We evaluate LEADER and all baselines on three different environments: Meskel Square in Addis Ababa, a highway in Singapore, and the Magic Roundabout in Swindon, using 5 different sets of reference paths and crowd initializations for each map. Crowd interactions during driving are perturbed with random noise and are thus unique to each episode. See the driving clips of LEADER and comparisons with baselines in this video: dropbox.com/s/edfurridcegnxti/LEADER.mp4?dl=0.
130
+
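As an aside, the DESPOT-TTC heuristic above amounts to the following small sketch: attention over an exo-agent's intentions is set proportional to $1/t_c$ and normalized. The TTC estimates are assumed to be given by a constant-speed rollout along each intended path.

```python
def ttc_attention(ttc_per_intention, eps=1e-3):
    """Handcrafted attention proportional to 1/t_c over an exo-agent's intentions."""
    scores = [1.0 / max(t, eps) for t in ttc_per_intention]  # sooner collision -> more attention
    total = sum(scores)
    return [s / total for s in scores]                        # normalize to a distribution
```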
131
+ < g r a p h i c s >
132
+
133
+ Figure 3: Learning curves of LEADER and existing risk-aware planning and RL algorithms: (a) average cumulative reward (b) collision rate per 1000 steps (c) average speed.
134
+
135
+ Table 1: Generalized driving performance of LEADER and existing risk-aware planning and learning algorithms on novel scenes averaged over 200 test runs.
136
+
137
+ | Algorithm | Cumulative Reward | Collision Rate | Travelled Distance | Smoothness Factor |
+ | --- | --- | --- | --- | --- |
+ | DESPOT-Uniform | $-2.344 \pm 0.38$ | $0.011 \pm 0.005$ | $105.02 \pm 12.14$ | $3.57 \pm 0.04$ |
+ | DESPOT-TTC | $-2.55 \pm 0.21$ | $0.015 \pm 0.002$ | $102.01 \pm 9.84$ | $3.45 \pm 0.1$ |
+ | ESAC | $-5.2 \pm 0.73$ | $0.05 \pm 0.004$ | $20.18 \pm 4.54$ | $1.88 \pm 0.2$ |
+ | SMBPO | $-4.73 \pm 0.87$ | $0.037 \pm 0.003$ | $42.2 \pm 9.9$ | $2.08 \pm 0.1$ |
+ | LEADER (ours) | $-2.19 \pm 0.42$ | $0.007 \pm 0.001$ | $111.48 \pm 15.94$ | $4.76 \pm 0.08$ |
157
+
158
+ § 6.1 PERFORMANCE COMPARISON
159
+
160
+ Learning Curves. In Figure 3, we show the learning curves of LEADER and the baseline algorithms in terms of the average cumulative reward, the collision rate, and the average driving speed. LEADER starts to outperform the strongest RL baseline from the 8500th iteration, which corresponds to 12 hours of training using 12 real-time planner actors. It starts to outperform the strongest planning baseline from the 12500th iteration, corresponding to 19 hours of training. At the end of training, LEADER achieves the highest cumulative reward, the lowest collision rate, and the highest driving speed among all algorithms.
161
+
162
+ Generalization. To evaluate the generalization of LEADER, we ran it with 5 unseen sets of reference paths and crowd initializations for each map, with random perturbations of crowd interactions. Table 1 shows the detailed driving performance of the considered algorithms averaged over 200 test runs, including the cumulative reward, collision rate, travelled distance, and smoothness factor. The collision rate measures the average number of collisions per 1000 time steps. The smoothness factor is the inverse of the number of decelerations, $\frac{1}{{N}_{dec}}$ . As shown, LEADER outperforms the other methods in all metrics, consistent with the learning curves. It drives much more safely and efficiently than the RL methods, which struggle to handle the highly dynamic crowd. It also improves driving safety, efficiency, and smoothness over the planning baselines, which apply sub-optimal attention.
163
+
164
+ § 6.2 THE LEARNED ATTENTION
165
+
166
+ To provide a qualitative analysis of what LEADER has learned, we further provide 2D visualizations of the learned attention in Figure 4. We show three scenes from the Singapore Highway, Meskel Square, and the Magic Roundabout, respectively. In each scene, we highlight a representative exo-agent, show its set of intention paths, and visualize the learned attention over the paths using color coding. For other exo-agents, we only show the most critical intentions with the highest attention scores.
167
+
168
+ In Figure 4 (a), the highlighted exo-agent in blue drives in parallel with the ego-vehicle. It has several possible intentions, including continuing straight and merging into neighbouring lanes. The intention of merging right is more likely, as the agent is closer to the right lane. However, the merging left
169
+
170
+ < g r a p h i c s >
171
+
172
+ Figure 4: Visualization of the learned attention in: (a) the highway, (b) Meskel Square, and (c) the Magic Roundabout. We highlight one exo-agent in blue in each scene. The learned attention over its intentions is color-coded: green, yellow, red, and purple, sorted from low to high attention. For other exo-agents, we only show the most-attended intention with dotted lines. See more examples in this video: www.dropbox.com/s/edfurridcegnxti/LEADER.mp4?dl=0.
173
+
174
+ intention is more critical, as its path will interfere with the ego-vehicle's. LEADER learns to put more attention on the more critical possibility. As a result, the planner decides to slow down the vehicle and prepare for potential hazards. In Figure 4 (b), the blue agent can either proceed left or turn right. Both intentions are equally likely. While the turning-right intention gets the agent out of the ego-vehicle's way, proceeding left causes the agent to continue interacting with the ego-vehicle. The generator thus assigns more attention to the latter. The planner subsequently decides to stop the ego-vehicle, since the blue agent in front may need to wait for its own path to be cleared. In Figure 4 (c), all intention paths of the blue agent intersect with the ego-vehicle's. The generator learns to attend to the path closest to the ego-vehicle, which is more hazardous. Consequently, the planner stops the ego-vehicle to prevent a collision.
175
+
176
+ § 7 SUMMARY, LIMITATIONS AND FUTURE WORK
177
+
178
+ In this paper, we introduced LEADER, which integrates learning and planning to drive in crowded environments. LEADER forms a min-max game between a learning component that generates attention over potential human behaviors and a planning component that computes risk-aware policies conditioned on the learned attention. By solving the min-max game, LEADER learns to attend to the most adversarial human behaviors and perform risk-aware planning. Our results show that LEADER improves the real-time performance of driving in crowded urban traffic, effectively lowering the collision rate while maintaining driving efficiency and smoothness, compared to both risk-aware planning and learning approaches.
179
+
180
+ Limitations of this work are related to our assumptions, model errors, and the data required for training. This work makes a few assumptions. First, we assume there is an available map of the urban environment providing lane-level information. However, we believe this requirement can be met for most major cities. Second, in our POMDP model, we focused on modeling the uncertainty of human intentions and ignored perception uncertainty such as significant observation noise and occluded participants. These aspects will be addressed in future work by building more comprehensive models. Third, LEADER is also affected by model errors, as the approach relies on the planner's value estimates to provide learning signals. But since our approach emphasizes the most adversarial future, i.e., relies on conservative predictions, it is more robust to model errors than typical planning approaches. Lastly, although we have improved substantially over the sample efficiency of RL algorithms, the algorithm still requires hours of online training through real-time driving. A promising solution is to "warm up" the critic and generator networks using offline real-world driving datasets, such as the Argoverse Dataset [23] and the Waymo Open Dataset [24], then perform further online training.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/DLkubm-dq-y/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,257 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Learning Road Scene-level Representations via Semantic Region Prediction
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Mainly, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicle while performing actions to their destinations. To this end, we introduce the representation of semantic regions, which are areas where ego-vehicles visit while taking an afforded action (e.g., left-turn at 4-way intersections). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
12
+
13
+ Keywords: Semantic Region Prediction, Egocentric Vision, Driver Intent, Risk Object Identification
14
+
15
+ ## 1 Introduction
16
+
17
+ For automated driving systems (e.g., advanced driver assist systems, ADAS) to navigate highly interactive scenarios, they must be able to perceive states of traffic elements, forecast traffic situations, identify potential hazards, and plan the corresponding actions. The field has made substantial progress in the past few years $\left\lbrack {1,2,3,4,5,6,7,8,9,{10},{11},{12},{13},7,{14},{15},{16},{17},{18},8,{19},{20}}\right\rbrack$ . In this work, we focus on improving the performance of driver intention prediction [21, 22, 23] and risk object identification [24, 25, 26, 27, 28] from egocentric videos. Note that solving both tasks from egocentric videos is crucial for safety systems such as ADAS, where front-facing cameras are the primary device.
18
+
19
+ Existing works for both tasks $\left\lbrack {{22},{24},{26},{27},{28}}\right\rbrack$ utilize image annotations of the tasks (intent prediction and potential hazard identification) and object cues from object detection to train networks in a supervised learning manner. Additionally, the authors [22, 29] leverage temporal models such as LSTM [30] and ConvLSTM [31], and spatial-temporal interaction between traffic participants is modeled using spatial-temporal graph [23] and graph convolutional networks [32] to further improve the performance of tasks. While promising results are demonstrated, the learned representations are effective to the trained task. Moreover, the representations only encode road scenes in the nearby locality. We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destinations, in order to reason about the larger scenes. We introduce a novel representation called the semantic region, as shown in Fig. 1. Semantic regions are areas where ego-vehicles visit while taking an afforded action to their destination. For instance, while turning left at an intersection, the vehicle visits the semantic regions (in yellow), i.e., the crosswalk near the ego vehicle, the area of intersection, and the crosswalk on the left sequentially. If different afforded actions are taken, different semantic regions will be visited. Note that different road topologies (e.g., 3-way intersections and straight roads) afford different actions. We associate egocentric images, representing views from different locations of road scenes under certain actions, to the corresponding semantic regions.
20
+
21
+ ![01963f5a-04e1-7b7b-b320-41549b970fd3_1_365_199_1106_667_0.jpg](images/01963f5a-04e1-7b7b-b320-41549b970fd3_1_365_199_1106_667_0.jpg)
22
+
23
+ Figure 1: Main idea. We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destinations, in order to reason about the larger scenes. We propose semantic region, a novel representation that represents areas where ego vehicles visit while taking an afforded action. We associate egocentric images, representing views from different locations of road scenes under certain actions, with the corresponding semantic regions. We cast road scene-level representation learning as semantic region prediction and demonstrate the learned representations are effective for driver intention prediction and risk object identification.
24
+
25
+ We cast scene-level representation learning as semantic region prediction. Specifically, the model predicts future semantic regions sequentially, given historical observations before turning left. For instance, as shown in Fig. 2a, given egocentric images representing semantic regions $S$ and ${A}_{1}$ , the task aims to predict future semantic regions ${B}_{1},{C}_{1}$ , and ${T}_{1}$ in sequential order. To enable representation learning, we design an automatic semantic region annotation strategy to label every egocentric image collected in intersections with the corresponding semantic region, which reduces the annotation burden.
26
+
27
+ We demonstrate the effectiveness of the scene-level representation learning framework on driver intention prediction [22] and risk object identification [27]. We achieve superior performance compared to strong baselines for driver intention prediction on the HDD dataset [33]. Furthermore, we show favorable generalization capability without additional training on nuScenes [34]. Moreover, our framework obtains state-of-the-art performance for risk object identification. Specifically, we boost the current best-performing algorithm [27] by 6%.
28
+
29
+ Our contributions are summarized as follows. First, we propose a novel representation called semantic region, which aims to capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destination. Second, we cast scene-level representation learning as semantic region prediction (SRP) and propose an automatic labeling algorithm for intersections to reduce annotation burdens. Third, we conduct extensive evaluations on the HDD and nuScenes datasets to demonstrate that the learned representations lead to significant improvements in driver intention prediction and risk object identification.
30
+
31
+ ![01963f5a-04e1-7b7b-b320-41549b970fd3_2_423_235_958_663_0.jpg](images/01963f5a-04e1-7b7b-b320-41549b970fd3_2_423_235_958_663_0.jpg)
32
+
33
+ Figure 2: Semantic regions in different types of road topology. Semantic regions are areas where ego-vehicles visit while taking actions afforded by the underlying road topology. For instance, at 4-way intersections, three actions (i.e., Left-turn, Straight, Right-turn) are afforded. Given egocentric images while performing an afforded action, we associate them with the corresponding semantic regions. In this work, we cover a wide range of road topologies, i.e., 4-way/3-way intersections and straight roads with multiple lanes.
34
+
35
+ ## 2 Related Work
36
+
37
+ Driver Intention Prediction. Advanced driver-assistance systems predict driver intention [21, 22, 35, 36, 37, 38] for preventing potential hazards. Doshi et al. [21] predict driver's intent by reasoning about distances to lane markings and vehicle dynamics in highway scenarios. In the Brain4Car project [22, 39, 23], multi-sensory signals, including GPS and street maps, are used for anticipation of driving maneuvers. Similarly, pre-computed road topology maps around intersections are utilized to extract features such as ego position and dynamics, distance to surrounding traffic participants, and legal actions at the upcoming intersection to predict driver intention [35]. Recently, Casas et al. [37] leverage rasterized HD maps as input to deep neural networks for intent prediction. Instead of formulating intent prediction as a recognition problem, Hu et al. [38] formulate intention prediction as entering an insertion area defined on a pre-computed road topology map. For instance, if the intent is turning left, the corresponding insertion area is ${T}_{1}$ as shown in Fig. 2a. Unlike existing methods exploiting pre-computed road topology, we learn road scene-level representations via semantic region prediction, which capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destinations. We empirically demonstrate the value of the learned representations.
38
+
39
+ Risk Object Identification. The goal of risk object identification is to identify object(s) that impact ego-vehicle navigation [24, 40, 41, 25, 26, 42, 43, 27]. The authors of [24, 25, 43] construct datasets with object importance annotation, and supervised learning-based algorithms are designed and trained to identify risk/important objects. The task can be formulated as selecting regions/objects with high activations in visual attention heat maps learned from end-to-end driving models [40, 26, 42]. Recently, Li et al. [27] formulate risk object identification as a cause-effect problem [44]. They propose a two-stage risk object identification framework and demonstrate favorable performance over $\left\lbrack {{40},{26}}\right\rbrack$ . In this work, we extend $\left\lbrack {27}\right\rbrack$ with the learned road topology representation because road topology information is crucial for risk/important object identification [25]. Note that [25] assumes that the planned path is given. In this work, we tackle a more challenging setting where driver intention is unknown and should be inferred from egocentric images.
40
+
41
+ ![01963f5a-04e1-7b7b-b320-41549b970fd3_3_353_201_1082_391_0.jpg](images/01963f5a-04e1-7b7b-b320-41549b970fd3_3_353_201_1082_391_0.jpg)
42
+
43
+ Figure 3: Automatic labeling of semantic regions in intersections. We show an example of generating labels of semantic regions from a Left-turn egocentric video sequence. The semantic regions are consistent with the ones in Fig. 2a. Best viewed in color.
44
+
45
+ ## 3 Automatic Semantic Region Labeling
46
+
47
+ We propose an automatic labeling strategy to ease the annotation burden. The overall generation process is depicted in Fig. 3. Specifically, a three-step strategy is proposed. First, given egocentric videos from the HDD dataset [33] ${}^{1}$ collected while taking afforded actions (i.e., Left-turn, Straight, Right-turn, and lane-change) without interacting with traffic participants, we apply COLMAP [45] to obtain a dense 3D reconstruction and camera poses. In addition, semantic segmentation [46] is applied to every egocentric image. Second, each 3D point is projected onto the images in which it is visible to obtain the corresponding semantic candidates. Then, a simple winner-take-all strategy is used to determine the final label. We project the semantic 3D point cloud to the ground plane to obtain a semantic Bird-Eye-View (BEV) image. Third, we label the semantic region of each camera pose with the information from the semantic BEV image. For example, in intersections, we assume that ego vehicles will visit two crosswalks sequentially while taking afforded actions. Camera poses that overlap with the first crosswalk and the second crosswalk are denoted as ${A}_{i}$ and ${C}_{i}$ , respectively. The poses located between ${A}_{i}$ and ${C}_{i}$ are ${B}_{i}$ . Camera poses located before the first crosswalk and after the second crosswalk are denoted as ${S}_{i}$ and ${T}_{i}$ , respectively. Each index $i$ represents an afforded action. Last but not least, while the results of COLMAP and semantic segmentation are generally good, we use two additional criteria to select good samples: 1) the 3D reconstruction is successful, and 2) the reconstructed camera poses form a coherent trajectory. Note that the algorithm is in general applicable to different topologies. However, we observed failures for lane-changes in non-intersections due to inaccurate 3D reconstruction. Therefore, we manually annotate videos in which ego-vehicles perform lane-changes.
48
+
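A minimal sketch of the winner-take-all step in the second stage is given below; the projection helper and array layout are assumptions made for illustration, and the actual pipeline operates on COLMAP's reconstruction and the per-image segmentation maps.

```python
import numpy as np
from collections import Counter

def label_points_winner_take_all(points_3d, camera_poses, seg_maps, project):
    """Assign each 3D point the semantic class it receives most often across views.

    points_3d    -- (N, 3) array of reconstructed 3D points
    camera_poses -- per-image camera poses from the same reconstruction
    seg_maps     -- per-image semantic segmentation maps of class ids, shape (H, W)
    project      -- project(point, pose) -> (u, v) pixel, or None if not visible
    """
    labels = np.full(len(points_3d), -1, dtype=int)
    for i, point in enumerate(points_3d):
        votes = []
        for pose, seg in zip(camera_poses, seg_maps):
            uv = project(point, pose)
            if uv is not None:                      # point is visible in this view
                u, v = uv
                votes.append(int(seg[v, u]))        # semantic candidate from this image
        if votes:
            labels[i] = Counter(votes).most_common(1)[0][0]   # winner-take-all
    return labels
```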
49
+ ## 4 Methodology
50
+
51
+ In this section, we discuss the details of road scene-level representation learning from egocentric video via semantic region prediction. In addition, we illustrate how to transfer the learned representation to two downstream tasks, i.e., driver intention prediction and risk object identification.
52
+
53
+ ### 4.1 Scene-level Representation Learning via Semantic Region Prediction
54
+
55
+ We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicle while performing actions to their destinations. Thus, we propose the representation called semantic region, which is a high-level abstraction of road affordance. We expect a model to capture the association between the temporal evolution of egocentric views and semantic regions. To this end, we cast egocentric road scene affordance representation learning as a semantic region prediction task. We build our Semantic Region Prediction (SRP) cell based on TRN cells [47]. The STA (spatio-temporal accumulator) in the TRN cell makes use of predicted future cues from the temporal decoder and the accumulated historical information to form better action representations. We make the following changes to TRN cells. First, we replace the action classifier with two semantic region classifiers, one for intersections and one for non-intersections. Second, in the decoder, the predicted logits of the two semantic region classifiers are fused into the input of the next time step after their dimensions are increased with FC layers. Third, we add a topology classifier to classify whether the current topology is an intersection or a non-intersection.
56
+
57
+ ---
58
+
59
+ ${}^{1}$ The HDD dataset provides large-scale annotations of afforded actions.
60
+
61
+ ---
62
+
63
+ ![01963f5a-04e1-7b7b-b320-41549b970fd3_4_364_203_1062_886_0.jpg](images/01963f5a-04e1-7b7b-b320-41549b970fd3_4_364_203_1062_886_0.jpg)
64
+
65
+ Figure 4: The proposed network architecture for semantic region prediction and models for downstream tasks. We propose to learn road scene representation via Semantic Region Prediction (SRP). The hidden state of the SRP cell serves as road scene representation and is utilized in two downstream tasks: driver intention prediction and risk object identification.
66
+
67
+ With SRP cells, our network takes ${t}_{e}$ historical frames as input. For each frame, the topology type (i.e., whether it is an intersection), the current semantic region, as well as ${t}_{d}$ future semantic regions are predicted. We have separate semantic region classifiers for intersections and non-intersections. During training, we only compute losses for the one that matches the ground-truth topology type. The loss for learning semantic regions is
68
+
69
+ $$
70
+ \mathop{\sum }\limits_{t}^{{t}_{e}}\mathbf{l}\left( {{z}_{t},{o}_{t}}\right) + \mathop{\sum }\limits_{{i = 0}}^{1}{\mathbb{1}}_{{o}_{t} = i}\mathop{\sum }\limits_{t}^{{t}_{e}}\left( {\mathbf{l}\left( {{y}_{t}^{i, e},{s}_{t}^{i}}\right) + \frac{1}{{t}_{d}}\mathop{\sum }\limits_{{m = 0}}^{{t}_{d}}\mathbf{l}\left( {{y}_{t}^{i, d},{s}_{t + m}^{i}}\right) }\right) \tag{1}
71
+ $$
72
+
73
+ where $i \in \{ 0,1\}$ denotes the $i$-th semantic region classifier, ${z}_{t}$ denotes the topology prediction, and ${y}_{t}^{i, e}$ and ${y}_{t}^{i, d}$ are the semantic region predictions based on the hidden states of the STA and the decoder, respectively, for topology classifier $i$ . $\mathbf{l}$ is the cross-entropy loss, $\mathbb{1}$ is the indicator function, ${o}_{t}$ is the ground-truth topology type, and ${s}_{t}^{i}$ is the ground truth of semantic regions derived from Section 3. The overall architecture is depicted in Fig. 4. Note that in practice, it is not necessary to observe all semantic regions in one video clip. For instance, at an intersection without crosswalks, ${A}_{i}$ and ${C}_{i}$ will not be present. For a left-turn vehicle at such an intersection, the semantic region sequence will be $S - {B}_{1} - {T}_{1}$ . The hidden state ${\mathbf{h}}_{e}^{t}$ of the STA contains rich information about the road scene. Next, we show how to incorporate the learned representation into downstream tasks.
74
+
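For concreteness, a hedged PyTorch sketch of the loss in Eq. (1) for a single clip is shown below. The tensor layout, the per-branch dictionaries, and the exact indexing of the future labels are assumptions made for illustration.

```python
import torch.nn.functional as F

def srp_loss(topo_logits, cur_logits, fut_logits, topo_gt, sr_gt, t_d=5):
    """Semantic region prediction loss in the spirit of Eq. (1), for one clip.

    topo_logits -- (t_e, 2) topology logits per observed frame
    cur_logits  -- {0: (t_e, K0), 1: (t_e, K1)} current-region logits per topology branch
    fut_logits  -- {0: (t_e, t_d, K0), 1: (t_e, t_d, K1)} decoder (future) logits per branch
    topo_gt     -- (t_e,) ground-truth topology type per frame
    sr_gt       -- {0: (t_e + t_d,), 1: (t_e + t_d,)} ground-truth region labels per branch
    """
    loss = F.cross_entropy(topo_logits, topo_gt, reduction="sum")       # topology term
    for t in range(topo_logits.shape[0]):
        i = int(topo_gt[t])                                             # supervise only the matching branch
        loss = loss + F.cross_entropy(cur_logits[i][t].unsqueeze(0),
                                      sr_gt[i][t].unsqueeze(0))         # current semantic region
        loss = loss + F.cross_entropy(fut_logits[i][t],
                                      sr_gt[i][t + 1 : t + 1 + t_d])    # future regions, averaged over t_d
    return loss
```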
75
+ ### 4.2 SRP-guided Driver Intention Prediction
76
+
77
+ We follow the definition of anticipation in [39] to define driver intention prediction. Formally, given a sequence of egocentric observations $\left\{ {{\mathbf{x}}^{1},{\mathbf{x}}^{2},\ldots ,{\mathbf{x}}^{t}}\right\}$ , our goal is to predict the future intention ${\mathbf{y}}_{\text{int }}^{T}$ , where $T > t$ . Driver intention prediction benefits downstream applications like risk assessment [48]. There are 5 different types of intentions in our setting (i.e., Left-turn, Straight, Right-turn, Left-lane-change, Right-lane-change). We add an intention classifier on top of the hidden state of the STA, ${\mathbf{h}}_{e}^{t}$ in SRP,
78
+
79
+ $$
80
+ {\mathbf{y}}_{\text{int }}^{T} = \operatorname{softmax}\left( {{\mathbf{W}}_{\text{int }}^{\top }{\mathbf{h}}_{e}^{t} + {\mathbf{b}}_{\text{int }}}\right) \tag{2}
81
+ $$
82
+
83
+ where ${\mathbf{W}}_{int}^{\top }$ and ${\mathbf{b}}_{int}$ are the weight and bias terms in the intention classifier respectively. We name the driver intention prediction model as SRP-INT.
84
+
85
+ ### 4.3 SRP-guided Risk Object Identification
86
+
87
+ The risk object identification task was first introduced in [27]. A risk object is defined as the object that most influences the behavior of the ego-vehicle in each frame. Given an egocentric video $\left\{ {{\mathbf{x}}^{1},{\mathbf{x}}^{2},\ldots ,{\mathbf{x}}^{t}}\right\}$ , the goal of risk object identification is to output $\left\{ {{\mathbf{b}}^{1},{\mathbf{b}}^{2},\ldots ,{\mathbf{b}}^{t}}\right\}$ , where ${\mathbf{b}}^{j}$ , $j \in \left\lbrack {1, t}\right\rbrack$ is the bounding box of the risk object in the $j$ -th frame. The authors of [27] proposed a two-stage framework to solve the problem. In the first stage, they trained an object-level manipulable model to predict the driver behavior by incorporating partial convolutions [49]. In the second stage, they iterated through the risk object candidate list and intervened on the input video to simulate scenarios without the presence of a candidate. The simulated scenarios were passed into the driver behavior model. The object causing the maximum driving behavior change was their risk object prediction. The ego-representation in [27] plays a very important role because it captures the information from the image frame as well as the messages from all the objects. The representation at time $t$ , i.e., the last time step, can be written as
88
+
89
+ $$
90
+ {\mathbf{g}}_{e}^{t} = {\mathbf{g}}_{f}^{t} \oplus \frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}{\mathbf{g}}_{k}^{t} \tag{3}
91
+ $$
92
+
93
+ where ${\mathbf{g}}_{f}^{t}$ is the representation of the image frame, ${\mathbf{g}}_{k}^{t}, k \in \left\lbrack {1, N}\right\rbrack$ is the representation for each object, $\oplus$ indicates a concatenation operation, and ${\mathbf{g}}_{e}^{t}$ is the final ego-representation in [27].
94
+
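The second-stage inference of [27], as described above, can be sketched as follows. The helpers `remove_object` (the intervention, e.g., via inpainting) and `behavior_model` are hypothetical stand-ins for the corresponding components of [27].

```python
def identify_risk_object(frames, candidates, remove_object, behavior_model):
    """Intervention-based risk object selection: pick the candidate whose removal
    changes the predicted driver behavior the most."""
    baseline = behavior_model(frames)
    best_obj, best_change = None, float("-inf")
    for obj in candidates:
        intervened = remove_object(frames, obj)               # simulate the scene without this candidate
        change = abs(behavior_model(intervened) - baseline)   # effect of the intervention
        if change > best_change:
            best_obj, best_change = obj, change
    return best_obj
```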
95
+ We propose SRP-ROI by fusing SRP with the model in [27]. We argue that road scene-level information can benefit the risk object identification task, and propose an SRP-guided representation:
96
+
97
+ $$
98
+ {\mathbf{g}}_{e}^{t} = \left( {{\mathbf{W}}_{ego}\left( {{\mathbf{g}}_{f}^{t} \oplus \frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}{\mathbf{g}}_{k}^{t}}\right) + {\mathbf{b}}_{ego}^{t}}\right) \oplus {\mathbf{h}}_{e}^{t} \tag{4}
99
+ $$
100
+
101
+ where ${\mathbf{W}}_{ego}$ and ${\mathbf{b}}_{ego}$ are the weights and bias terms of a fully connected layer respectively. We follow the two-stage framework in [27] and evaluate our SRP-ROI model on two challenging dynamic risk object categories: crossing vehicles and crossing pedestrians.
102
+
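A minimal PyTorch sketch of the fusion in Eq. (4) is given below; the feature dimensions and module interface are assumptions, and the frame/object features are taken from the backbone of [27].

```python
import torch
import torch.nn as nn

class SRPGuidedEgoRepr(nn.Module):
    """SRP-guided ego representation in the spirit of Eq. (4)."""
    def __init__(self, frame_dim, obj_dim, ego_dim):
        super().__init__()
        self.fc_ego = nn.Linear(frame_dim + obj_dim, ego_dim)   # W_ego, b_ego

    def forward(self, frame_feat, obj_feats, srp_hidden):
        # frame_feat: (B, frame_dim), obj_feats: (B, N, obj_dim), srp_hidden: (B, srp_dim)
        obj_mean = obj_feats.mean(dim=1)                         # (1/N) * sum_k g_k^t
        ego = self.fc_ego(torch.cat([frame_feat, obj_mean], dim=-1))
        return torch.cat([ego, srp_hidden], dim=-1)              # concatenate with h_e^t
```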
103
+ ## 5 Experiments
104
+
105
+ ### 5.1 Semantic Region Prediction
106
+
107
+ Data Collection and Annotation. We collect video clips of Left-turn, Straight, Right-turn, Left-lane-change, Right-lane-change from the HDD dataset to train our semantic region predictor. For each video clip, we manually label the topology type. Labels of semantic regions at the intersections are automatically generated with the method proposed in Section 3. The semantic regions for non-intersections are annotated by humans. For each video clip, we apply a sliding-window method to obtain training samples. For each sample, we have annotations including topology type, current, and future semantic region labels.
108
+
109
+ <table><tr><td rowspan="2">Metric</td><td colspan="2">Intersection</td><td colspan="2">Non-intersection</td></tr><tr><td>Current SR</td><td>Future SR</td><td>Current SR</td><td>Future SR</td></tr><tr><td>Micro Avg Pre</td><td>47.0</td><td>52.7</td><td>65.3</td><td>62.9</td></tr><tr><td>Macro Avg Pre</td><td>20.9</td><td>20.3</td><td>50.4</td><td>53.8</td></tr><tr><td>mAP</td><td>26.4</td><td>24.5</td><td>51.2</td><td>53.8</td></tr></table>
110
+
111
+ Table 1: Performances of Semantic Region Prediction. Current SR stands for current semantic region, while Future SR stands for future semantic region.
112
+
113
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Aux</td><td colspan="3">HDD</td><td colspan="3">HDD Interactive</td><td colspan="3">nuScenes</td></tr><tr><td>Macro Avg Pre</td><td>Micro Avg Pre</td><td>mAP</td><td>Macro Avg Pre</td><td>Micro Avg Pre</td><td>mAP</td><td>Macro Avg Pre</td><td>Micro Avg Pre</td><td>mAP</td></tr><tr><td>LSTM[30]</td><td>-</td><td>45.0</td><td>64.9</td><td>51.5</td><td>30.8</td><td>56.2</td><td>62.4</td><td>37.3</td><td>68.8</td><td>62.0</td></tr><tr><td>LSTM-EL[39]</td><td>-</td><td>45.0</td><td>65.5</td><td>52.4</td><td>29.1</td><td>51.8</td><td>60.9</td><td>35.6</td><td>62.0</td><td>61.0</td></tr><tr><td>OadTR[51]</td><td>-</td><td>35.9</td><td>24.3</td><td>36.3</td><td>48.4</td><td>46.9</td><td>54.1</td><td>47.8</td><td>64.3</td><td>50.7</td></tr><tr><td>TRN-Tra</td><td>Tra</td><td>45.0</td><td>70.8</td><td>47.5</td><td>30.9</td><td>59.8</td><td>57.2</td><td>35.7</td><td>58.8</td><td>58.7</td></tr><tr><td>SRP-INT</td><td>SR</td><td>55.3</td><td>73.8</td><td>57.9</td><td>67.0</td><td>70.3</td><td>69.5</td><td>41.1</td><td>68.3</td><td>66.7</td></tr></table>
114
+
115
+ Table 2: Quantitative results of driver intention prediction. We compare SRP-INT with baselines. Aux stands for auxiliary tasks. Tra and SR stand for trajectory and semantic region, respectively. All models have the same feature extractor [46].
116
+
117
+ Implementation Details and Results. We leverage ResNet50 [23] pre-trained on the Mapillary Vistas [46] dataset as the feature extractor. Our SRP takes ${t}_{e} = 3$ historical frames as input. For each frame, ${t}_{d} = 5$ future semantic regions as well as the topology type are predicted. As shown in Fig. 2a and Fig. 2b, the numbers of semantic regions in intersections and non-intersections are 13 and 5, respectively. We use the Adam optimizer [50] with default parameters, a learning rate of 0.0001, and a weight decay of 0.0005. The model is trained for 60 epochs. We train the model with the loss function in Eq. (1). The performances are shown in Table 1. Macro Average Precision, Micro Average Precision, and mAP are chosen as the evaluation metrics.
118
+
119
+ ### 5.2 Driver Intention Prediction
120
+
121
+ Testing Data and Experiment Setup. After training SRP on the video clips in Section 5.1, we further use the intention labels to train the intention classifier. Details are provided in the supplementary materials. We evaluate driver intention prediction models on both the HDD [33] test set and the nuScenes [34] dataset. Note that in HDD, there is no overlap between the training data and test data. We evaluate models on 1438 sequences in HDD (including 393 interactive scenarios) and 221 sequences in nuScenes. We use the same evaluation metrics as in Section 5.1.
122
+
123
+ Baselines and Comparisons. We implement several baselines with the same image feature extractor as the proposed SRP-INT. LSTM [30] is a general-purpose sequential modeling method. OadTR [51] takes advantage of the popular Transformer architecture [52] and is a competitive online/real-time action recognition model. We implement LSTM with Exponential Loss (LSTM-EL), as [39] shows the effectiveness of the Exponential Loss for driver intention prediction. We modify TRN [47] to predict trajectories (similar to [53]) and use the learned representation for intention prediction. As shown in Table 2, we demonstrate favorable performance on both datasets, which empirically validates the effectiveness of our framework. Qualitative results are presented in the supplementary materials.
124
+
125
+ <table><tr><td rowspan="2">Model</td><td colspan="3">Crossing Vehicle</td><td colspan="3">Crossing Pedestrian</td></tr><tr><td>Acc 0.5</td><td>Acc 0.75</td><td>mAcc</td><td>Acc 0.5</td><td>Acc 0.75</td><td>mAcc</td></tr><tr><td>[27](paper)</td><td>49.2</td><td>48.6</td><td>43.0</td><td>35.7</td><td>32.1</td><td>27.0</td></tr><tr><td>[27](our implementation)</td><td>49.2</td><td>48.2</td><td>42.7</td><td>33.3</td><td>29.8</td><td>26.2</td></tr><tr><td>SRP-ROI</td><td>51.8</td><td>51.1</td><td>45.1</td><td>42.9</td><td>39.3</td><td>33.3</td></tr></table>
126
+
127
+ Table 3: Quantitative results of risk object identification. We evaluate risk object identification models on two risk object categories: Crossing Vehicle and Crossing Pedestrian.
128
+
129
+ ### 5.3 Risk Object Identification
130
+
131
+ Experimental Setup and Evaluation. We follow the experiment setup in [27] and train separate models on two challenging dynamic risk object categories: Crossing Vehicle and Crossing Pedestrian. Like [27], we evaluate our models by calculating the IOU between the predicted risk object and groundtruth. We report accuracy at IOU thresholds of 0.5, 0.75 , and mean accuracy.
132
+
133
+ Implementation Details. We utilize Mask R-CNN [54] and DeepSORT [9] to compute the tracking proposals of risk object candidates. The pre-trained semantic region representation is fused with the ego representation in [27] after passing through a fully connected layer. In practice, the output dimension of the fully connected layer is 100. In this stage, we train the model using the Adam [50] optimizer with default parameters, a learning rate of 0.0001, and a weight decay of 0.0001. The model is trained for 20 epochs. After training, we follow the inference procedure in [27] to obtain the bounding boxes of the risk object in each frame. We do not apply any heuristic to remove objects from the tracking proposals, and models are trained separately for each category.
134
+
135
+ Quantitative Results. We compare our method with [27]. The quantitative results show that our model achieves better performance, which demonstrates that semantic region prediction can help risk object identification. Qualitative results are presented in the supplementary materials.
136
+
137
+ ## 6 Limitations
138
+
139
+ Although we have shown the effectiveness of our proposed representation, some limitations need further exploration. First, our proposed semantic regions cannot be applied to complicated topologies like roundabouts. A more comprehensive definition of semantic regions is desired. Second, learning semantic regions from egocentric view images alone is challenging; in particular, the performance of semantic region prediction at intersections is unsatisfactory. To improve the performance, we could consider incorporating a Bird-Eye-View representation [55]. Third, we have not truly associated images with semantic regions. Instead of predicting the label of semantic regions, we could consider an encoder-decoder based model to predict the current/future scene representations [56].
140
+
141
+ ## 7 Conclusion
142
+
143
+ In this work, we study the problem of road scene-level representation learning from egocentric videos for driver intention prediction and risk object identification. We propose a novel representation called the semantic region, which aims to capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destination. We cast representation learning as semantic region prediction and propose an automatic semantic region labeling algorithm for egocentric videos collected at intersections. We demonstrate the effectiveness of the learned representation on real-world datasets, i.e., HDD and nuScenes. In particular, the learned representation generalizes to unseen data (i.e., the nuScenes dataset) without finetuning on the driver intention prediction task. We hope that our findings will pave the way for further advances in road scene-level representation learning from egocentric views for downstream tasks such as planning and decision making.
144
+
145
+ References
146
+
147
+ [1] H. Zhang, A. Geiger, and R. Urtasun. Understanding High-Level Semantics by Modeling Traffic Patterns. In ${ICCV},{2013}$ .
148
+
149
+ [2] A. Geiger, M. Lauer, C. Wojek, C. Stiller, and R. Urtasun. 3D Traffic Scene Understanding from Movable Platforms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36:5, 2014.
150
+
151
+ [3] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The Cityscapes Dataset for Semantic Urban Scene Understanding. In CVPR, 2016.
152
+
153
+ [4] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised Learning of Depth and Ego-Motion from Video. In CVPR, 2017.
154
+
155
+ [5] G. Neuhold, T. Ollmann, S. Rota Bulò, and P. Kontschieder. The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes. In ICCV, 2017.
156
+
157
+ [6] K. He, G. Gkioxari, P. Dollar, and R. Girshick. Mask R-CNN. In CVPR, 2017.
158
+
159
+ [7] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-End Object Detection with Transformers. In ${ECCV},{2020}$ .
160
+
161
+ [8] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao. YOLOv4: Optimal Speed and Accuracy of Object Detection. In arXiv preprint arXiv:2004.10934, 2020.
162
+
163
+ [9] N. Wojke, A. Bewley, and D. Paulus. Simple Online and Realtime Tracking with a Deep Association Metric. In ICIP, 2017.
164
+
165
+ [10] S. Schulter, M. Zhai, N. Jacobs, and M. Chandraker. Learning to Look around Objects for Top-View Representations of Outdoor Scenes. In ${ECCV},{2018}$ .
166
+
167
+ [11] Z. Wang, B. Liu, S. Schulter, and M. Chandraker. A Parametric Top-View Representation of Complex Road Scenes. In ${CVPR},{2019}$ .
168
+
169
+ [12] L. Yang, Y. Fan, and N. Xu. Video instance segmentation. In ICCV, 2020.
170
+
171
+ [13] R. Chandra, U. Bhattacharya, A. Bera, and D. Manocha. TraPHic: Trajectory Prediction in Dense and Heterogeneous Traffic Using Weighted Interactions. In CVPR, 2019.
172
+
173
+ [14] T. Roddick and R. Cipolla. Predicting Semantic Map Representations from Images using Pyramid Occupancy Networks. In CVPR, 2020.
174
+
175
+ [15] J. Philion and S. Fidler. Lift, Splat, Shoot: Encoding Images from Arbitrary Camera Rigs by Implicitly Unprojecting to 3D. In ECCV, 2020.
176
+
177
+ [16] V. Guizilini, R. Hou, J. Li, R. Ambrus, and A. Gaidon. Semantically-Guided Representation Learning for Self-Supervised Monocular Depth. In ICLR, 2020.
178
+
179
+ [17] H. Wang, Y. Zhu, B. Green, H. Adam, A. Yuille, and L.-C. Chen. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. In ECCV, 2020.
180
+
181
+ [18] H. Jung, E. Park, and S. Yoo. Fine-grained Semantics-aware Representation Enhancement for Self-supervised Monocular Depth Estimation. In ICCV, 2021.
182
+
183
+ [19] A. V. Malawade, S.-Y. Yu, B. Hsu, H. Kaeley, A. Karra, and M. A. Al Faruque. roadscene2vec: A Tool for Extracting and Embedding Road Scene-Graphs. In arXiv: 2109.01183, 2021.
184
+
185
+ [20] L. Neumann and A. Vedaldi. Pedestrian and Ego-vehicle Trajectory Prediction from Monocular Camera. In ${CVPR},{2020}$ .
186
+
187
+ [21] A. Doshi, B. Morris, and M. Trivedi. On-road Prediction of Driver's Intent with Multimodal Sensory Cues. IEEE Pervasive Computing, 10(3):22-34, 2011.
188
+
189
+ [22] A. Jain, H. Koppula, B. Raghavan, S. Soh, and A. Saxena. Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models. In ICCV, 2015.
190
+
191
+ [23] A. Jain, A. Zamir, S. Savarese, and A. Saxena. Structural-RNN: Deep Learning on SpatioTemporal Graphs. In ${CVPR},{2016}$ .
192
+
193
+ [24] E. Ohn-Bar and M. M. Trivedi. Are all objects equal? Deep Spatio-temporal Importance Prediction in Driving Videos. Pattern Recognition, 64:425-436, 2017.
194
+
195
+ [25] M. Gao, A. Tawari, and S. Martin. Goal-oriented Object Importance Estimation in On-road Driving Videos. In ICRA, 2019.
196
+
197
+ [26] D. Wang, C. Devin, Q.-Z. Cai, F. Yu, and T. Darrell. Deep Object-Centric Policies for Autonomous Driving. In ${ICRA},{2019}$ .
198
+
199
+ [27] C. Li, S. H. Chan, and Y.-T. Chen. Who Make Drivers Stop? Towards Driver-centric Risk assessment: Risk Object Identification via Causal Inference. In IROS, 2020.
200
+
201
+ [28] J. Li, H. Gang, H. Ma, M. Tomizuka, and C. Choi. Important object identification with semi-supervised learning for autonomous driving. In ICRA, 2022.
202
+
203
+ [29] Y. Rong, Z. Akata, and E. Kasneci. Driver intention anticipation based on in-cabin and driving scene monitoring. In ITSC, 2020.
204
+
205
+ [30] S. Hochreiter and J. Schmidhuber. Long Short-term Memory. Neural Computation, 1997.
206
+
207
+ [31] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In NeurIPS, 2015.
208
+
209
+ [32] T. N. Kipf and M. Welling. Semi-supervised Classification with Graph Convolutional Networks. In ${ICLR},{2017}$ .
210
+
211
+ [33] V. Ramanishka, Y.-T. Chen, T. Misu, and K. Saenko. Toward Driving Scene Understanding: A Dataset for Learning Driver Behavior and Causal Reasoning. In CVPR, 2018.
212
+
213
+ [34] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom. nuScenes: A Multimodal Dataset for Autonomous Driving. In CVPR, 2020.
214
+
215
+ [35] D. Phillips, T. Wheeler, and M. Kochenderfer. Generalizable Intention Prediction of Human Drivers at Intersections. In ${IV},{2017}$ .
216
+
217
+ [36] A. Zyner, S. Worrall, and E. Nebot. A Recurrent Neural Network Solution for Predicting Driver Intention at Unsignalized Intersections. IEEE Robotics and Automation Letters, 3(3): 1759-1764, 2018.
218
+
219
+ [37] S. Casas, W. Luo, and R. Urtasun. IntentNet: Learning to Predict Intention from Raw Sensor Data. In ${CoRL},{2018}$ .
220
+
221
+ [38] Y. Hu, W. Zhan, and M. Tomizuka. Probabilistic Prediction of Vehicle Semantic Intention and Motion. In ${IV},{2018}$ .
222
+
223
+ [39] A. Jain, A. Singh, H. S. Koppula, S. Soh, and A. Saxena. Recurrent Neural Networks for Driver Activity Anticipation via Sensory-fusion Architecture. In ICRA, 2016.
224
+
225
+ [40] J. Kim and J. Canny. Interpretable Learning for Self-driving Cars by Visualizing Causal Attention. In ${ICCV},{2017}$ .
226
+
227
+ [41] A. Palazzi, D. Abati, S. Calderara, F. Solera, and R. Cucchiara. Predicting the Driver's Focus of Attention: the DR(eye)VE Project. PAMI, 41:1720-1733, 2018.
228
+
229
+ [42] C. Li, Y. Meng, S. H. Chan, and Y.-T. Chen. Learning 3D-aware Egocentric Spatial-Temporal Interaction via Graph Convolutional Networks. In ICRA, 2020.
230
+
231
+ [43] Z. Zhang, A. Tawari, S. Martin, and D. Crandall. Interaction Graphs for Object Importance Estimation in On-road Driving Videos. In ICRA, 2020.
232
+
233
+ [44] J. Pearl. Causality. Cambridge University Press, 2009.
234
+
235
+ [45] J. L. Schönberger and J.-M. Frahm. Structure-from-Motion Revisited. In CVPR, 2016.
236
+
237
+ [46] G. Neuhold, T. Ollmann, S. Rota Bulò, and P. Kontschieder. The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes. In ICCV, 2017.
238
+
239
+ [47] M. Xu, M. Gao, Y.-T. Chen, L. Davis, and D. Crandall. Temporal Recurrent Networks for Online Action Detection. In ${ICCV},{2019}$ .
240
+
241
+ [48] S. Lefèvre, D. Vasquez, and C. Laugier. A survey on motion prediction and risk assessment for intelligent vehicles. ROBOMECH journal, 1(1):1-14, 2014.
242
+
243
+ [49] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using partial convolutions. In ${ECCV},{2018}$ .
244
+
245
+ [50] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
246
+
247
+ [51] X. Wang, S. Zhang, Z. Qing, Y. Shao, Z. Zuo, C. Gao, and N. Sang. Oadtr: Online action detection with transformers. In ${ICCV},{2021}$ .
248
+
249
+ [52] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
250
+
251
+ [53] Y. Yao, M. Xu, C. Choi, D. J. Crandall, E. M. Atkins, and B. Dariush. Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems. In ICRA, 2019.
252
+
253
+ [54] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In ICCV, 2017.
254
+
255
+ [55] A. Saha, O. M. Maldonado, C. Russell, and R. Bowden. Translating Images into Maps. In ICRA, 2022.
256
+
257
+ [56] S. K. Ramakrishnan, T. Nagarajan, Z. Al-Halah, and K. Grauman. Environment Predictive Coding for Embodied Agents. In ICLR, 2022.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/DLkubm-dq-y/Initial_manuscript_tex/Initial_manuscript.tex ADDED
@@ -0,0 +1,193 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ § LEARNING ROAD SCENE-LEVEL REPRESENTATIONS VIA SEMANTIC REGION PREDICTION
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Mainly, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicle while performing actions to their destinations. To this end, we introduce the representation of semantic regions, which are areas where ego-vehicles visit while taking an afforded action (e.g., left-turn at 4-way intersections). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
12
+
13
+ Keywords: Semantic Region Prediction, Egocentric Vision, Driver Intent, Risk Object Identification
14
+
15
+ § 1 INTRODUCTION
16
+
17
+ For automated driving systems (e.g., advanced driver assist systems, ADAS) to navigate highly interactive scenarios, they must be able to perceive states of traffic elements, forecast traffic situations, identify potential hazards, and plan the corresponding actions. The field has made substantial progress in the past few years $\left\lbrack {1,2,3,4,5,6,7,8,9,{10},{11},{12},{13},7,{14},{15},{16},{17},{18},8,{19},{20}}\right\rbrack$ . In this work, we focus on improving the performance of driver intention prediction [21, 22, 23] and risk object identification [24, 25, 26, 27, 28] from egocentric videos. Note that solving both tasks from egocentric videos is crucial for safety systems such as ADAS, where front-facing cameras are the primary device.
18
+
19
+ Existing works for both tasks $\left\lbrack {{22},{24},{26},{27},{28}}\right\rbrack$ utilize image annotations of the tasks (intent prediction and potential hazard identification) and object cues from object detection to train networks in a supervised learning manner. Additionally, the authors [22, 29] leverage temporal models such as LSTM [30] and ConvLSTM [31], and spatial-temporal interaction between traffic participants is modeled using spatial-temporal graph [23] and graph convolutional networks [32] to further improve the performance of tasks. While promising results are demonstrated, the learned representations are effective to the trained task. Moreover, the representations only encode road scenes in the nearby locality. We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destinations, in order to reason about the larger scenes. We introduce a novel representation called the semantic region, as shown in Fig. 1. Semantic regions are areas where ego-vehicles visit while taking an afforded action to their destination. For instance, while turning left at an intersection, the vehicle visits the semantic regions (in yellow), i.e., the crosswalk near the ego vehicle, the area of intersection, and the crosswalk on the left sequentially. If different afforded actions are taken, different semantic regions will be visited. Note that different road topologies (e.g., 3-way intersections and straight roads) afford different actions. We associate egocentric images, representing views from different locations of road scenes under certain actions, to the corresponding semantic regions.
20
+
21
+ < g r a p h i c s >
22
+
23
+ Figure 1: Main idea. We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destinations, in order to reason about the larger scenes. We propose semantic region, a novel representation that represents areas where ego vehicles visit while taking an afforded action. We associate egocentric images, representing views from different locations of road scenes under certain actions, with the corresponding semantic regions. We cast road scene-level representation learning as semantic region prediction and demonstrate the learned representations are effective for driver intention prediction and risk object identification.
24
+
25
+ We cast scene-level representation learning as semantic region prediction. Specifically, the model predicts future semantic regions sequentially, given historical observations before turning left. For instance, as shown in Fig. 2a, given egocentric images representing semantic regions $S$ and ${A}_{1}$ , the task aims to predict future semantic regions ${B}_{1},{C}_{1}$ , and ${T}_{1}$ in sequential order. To enable representation learning, we design an automatic semantic region annotation strategy to label every egocentric image collected in intersections with the corresponding semantic region, which reduces the annotation burden.
26
+
27
+ We demonstrate the effectiveness of the scene-level representation learning framework on driver intention prediction [22] and risk object identification [27]. We achieve superior performance compared to strong baselines for driver intention prediction on the HDD dataset [33]. Furthermore, we show favorable generalization capability without additional training on nuScenes [34]. Moreover, our framework obtains state-of-the-art performance for risk object identification. Specifically, we boost the current best-performing algorithm [27] by 6%.
28
+
29
+ Our contributions are summarized as follows. First, we propose a novel representation called semantic region, which aims to capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destination. Second, we cast scene-level representation learning as semantic region prediction (SRP) and propose an automatic labeling algorithm for intersections to reduce annotation burdens. Third, we conduct extensive evaluations on the HDD and nuScenes datasets to prove that the effectiveness of the learned representations leads to significant improvements in driver intention prediction and risk object identification.
30
+
31
+ < g r a p h i c s >
32
+
33
+ Figure 2: Semantic regions in different types of road topology. Semantic regions are areas that ego-vehicles visit while taking actions afforded by the underlying road topology. For instance, at 4-way intersections, three actions (i.e., Left-turn, Straight, Right-turn) are afforded. Given egocentric images captured while performing an afforded action, we associate them with the corresponding semantic regions. In this work, we cover a wide range of road topologies, i.e., 4-way/3-way intersections and straight roads with multiple lanes.
34
+
35
+ § 2 RELATED WORK
36
+
37
+ Driver Intention Prediction. Advanced driver-assistance systems predict driver intention [21, 22, 35, 36, 37, 38] to prevent potential hazards. Doshi et al. [21] predict the driver’s intent by reasoning about distances to lane markings and vehicle dynamics in highway scenarios. In the Brain4Car project [22, 39, 23], multi-sensory signals, including GPS and street maps, are used to anticipate driving maneuvers. Similarly, pre-computed road topology maps around intersections are utilized to extract features such as ego position and dynamics, distance to surrounding traffic participants, and legal actions at the upcoming intersection to predict driver intention [35]. Recently, Casas et al. [37] leverage rasterized HD maps as input to deep neural networks for intent prediction. Instead of formulating intent prediction as a recognition problem, Hu et al. [38] formulate intention prediction as entering an insertion area defined on a pre-computed road topology map. For instance, if the intent is turning left, the corresponding insertion area is ${T}_{1}$ as shown in Fig. 2a. Unlike existing methods exploiting pre-computed road topology, we learn road scene-level representations via semantic region prediction, which capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destinations. We empirically demonstrate the value of the learned representations.
38
+
39
+ Risk Object Identification. The goal of risk object identification is to identify object(s) that impact ego-vehicle navigation [24, 40, 41, 25, 26, 42, 43, 27]. The authors of [24, 25, 43] construct datasets with object importance annotations, and supervised learning-based algorithms are designed and trained to identify risk/important objects. The task can also be formulated as selecting regions/objects with high activations in visual attention heat maps learned from end-to-end driving models [40, 26, 42]. Recently, Li et al. [27] formulate risk object identification as a cause-effect problem [44]. They propose a two-stage risk object identification framework and demonstrate favorable performance over [40, 26]. In this work, we extend [27] with the learned road topology representation because road topology information is crucial for risk/important object identification [25]. Note that [25] assumes that the planned path is given. In this work, we tackle a more challenging setting where the driver intention is unknown and must be inferred from egocentric images.
40
+
41
+ < g r a p h i c s >
42
+
43
+ Figure 3: Automatic labeling of semantic regions in intersections. We show an example of generating labels of semantic regions from a Left-turn egocentric video sequence. The semantic regions are consistent with the ones in Fig. 2a. Best viewed in color.
44
+
45
+ § 3 AUTOMATIC SEMANTIC REGION LABELING
46
+
47
+ We propose an automatic labeling strategy to ease the annotation burden. The overall generation process is depicted in Fig. 3. Specifically, a three-step strategy is proposed. First, given egocentric videos collected while taking afforded actions (i.e., Left-turn, Straight, Right-turn, and lane-change) without interacting with traffic participants from the HDD dataset [33] ${}^{1}$ , we apply COLMAP [45] to obtain a dense 3D reconstruction and camera poses. In addition, semantic segmentation [46] is applied to every egocentric image. Second, each 3D point is projected onto the images in which it is visible to obtain the corresponding semantic candidates. Then, a simple winner-take-all strategy is used to determine the final label. We project the semantic 3D point cloud onto the ground plane to obtain a semantic Bird-Eye-View (BEV) image. Third, we label the semantic region of each camera pose with the information from the semantic BEV image. For example, in intersections, we assume that ego vehicles will visit two crosswalks sequentially while taking afforded actions. Camera poses that overlap with the first crosswalk and the second crosswalk are denoted as ${A}_{i}$ and ${C}_{i}$ , respectively. The poses located between ${A}_{i}$ and ${C}_{i}$ are ${B}_{i}$ . Camera poses located before the first crosswalk and after the second crosswalk are denoted as ${S}_{i}$ and ${T}_{i}$ , respectively. Each index $i$ represents an afforded action. Last but not least, while the results of COLMAP and semantic segmentation are generally good, we use two additional criteria to select good samples: 1) the 3D reconstruction is successful, and 2) the reconstructed camera poses form a coherent trajectory. Note that the algorithm is generally applicable to different topologies. However, we observed failures for lane changes in non-intersections due to inaccurate 3D reconstruction. Therefore, we manually annotate videos in which ego-vehicles perform lane changes.
48
+
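+ To make the winner-take-all step concrete, the following is a minimal, illustrative sketch of the point-labeling and ground-plane projection described above; it assumes the COLMAP reconstruction, point visibility, point-to-image projections, and per-image semantic maps are already available, and all function and variable names are ours rather than part of any released implementation.
+
+ ```python
+ import numpy as np
+ from collections import Counter
+
+ def label_point_cloud(points_3d, visibility, semantic_maps, projections):
+     """Assign a semantic label to each 3D point by winner-take-all voting."""
+     labels = np.full(len(points_3d), -1, dtype=int)
+     for p, images in visibility.items():            # images that observe point p
+         votes = []
+         for i in images:
+             u, v = projections[(p, i)]              # pixel location of point p in image i
+             h, w = semantic_maps[i].shape
+             if 0 <= v < h and 0 <= u < w:
+                 votes.append(int(semantic_maps[i][v, u]))
+         if votes:                                   # winner-take-all over all views
+             labels[p] = Counter(votes).most_common(1)[0][0]
+     return labels
+
+ def semantic_bev(points_3d, labels, cell=0.25, extent=50.0):
+     """Project labeled 3D points onto the ground plane to form a semantic BEV grid."""
+     n = int(2 * extent / cell)
+     bev = np.full((n, n), -1, dtype=int)
+     for (x, y, _), lbl in zip(points_3d, labels):
+         if lbl >= 0 and abs(x) < extent and abs(y) < extent:
+             bev[int((y + extent) / cell), int((x + extent) / cell)] = lbl
+     return bev
+ ```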
49
+ § 4 METHODOLOGY
50
+
51
+ In this section, we discuss the details of road scene-level representation learning from egocentric video via semantic region prediction. In addition, we illustrate how to transfer the learned representation to two downstream tasks, i.e., driver intention prediction and risk object identification.
52
+
53
+ § 4.1 SCENE-LEVEL REPRESENTATION LEARNING VIA SEMANTIC REGION PREDICTION
54
+
55
+ We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destinations. Thus, we propose the representation called semantic region, which is a high-level abstraction of road affordance. We expect a model to capture the association between the temporal evolution of egocentric views and semantic regions. To this end, we cast egocentric road scene affordance representation learning as a semantic region prediction task. We build our Semantic Region Prediction (SRP) cell based on TRN cells [47]. The STA (spatio-temporal accumulator) in the TRN cell makes use of predicted future cues from the temporal decoder and the accumulated historical information to form better action representations. We make the following changes to TRN cells. First, we replace the action classifier with two semantic region classifiers, one for intersections and one for non-intersections. Second, in the decoder, the predicted logits of the two semantic region classifiers are fused into the input of the next time frame after increasing dimensions with FC layers. Third, we add a topology classifier to classify whether the current topology is an intersection or a non-intersection.
56
+
57
+ ${}^{1}$ The HDD dataset provides large-scale annotations of afforded actions.
58
+
59
+ < g r a p h i c s >
60
+
61
+ Figure 4: The proposed network architecture for semantic region prediction and models for downstream tasks. We propose to learn road scene representation via Semantic Region Prediction (SRP). The hidden state of the SRP cell serves as road scene representation and is utilized in two downstream tasks: driver intention prediction and risk object identification.
62
+
63
+ With SRP cells, our network takes ${t}_{e}$ historical frames as input. For each frame, the topology type (i.e., whether it is an intersection), the current semantic region, as well as ${t}_{d}$ future semantic regions are predicted. We have separate semantic region classifiers for intersections and non-intersections. During training, we only compute losses for the one that matches the ground-truth topology type. The loss for learning semantic regions is
64
+
65
+ $$
66
+ \mathop{\sum }\limits_{{t = 1}}^{{t}_{e}}\mathbf{l}\left( {{z}_{t},{o}_{t}}\right) + \mathop{\sum }\limits_{{i = 0}}^{1}\mathop{\sum }\limits_{{t = 1}}^{{t}_{e}}{\mathbb{1}}_{{o}_{t} = i}\left( {\mathbf{l}\left( {{y}_{t}^{i,e},{s}_{t}^{i}}\right) + \frac{1}{{t}_{d}}\mathop{\sum }\limits_{{m = 0}}^{{t}_{d}}\mathbf{l}\left( {{y}_{t}^{i,d},{s}_{t + m}^{i}}\right) }\right) \tag{1}
67
+ $$
68
+
69
+ where $i \in \{ 0,1\}$ denotes the $i$ th semantic region classifier, ${z}_{t}$ denotes the topology prediction, and ${y}_{t}^{i,e}$ and ${y}_{t}^{i,d}$ are the semantic region predictions based on the hidden state of the STA and the decoder, respectively, for topology classifier $i$ . $\mathbf{l}$ is the cross-entropy loss, $\mathbb{1}$ is the indicator function, ${o}_{t}$ is the ground-truth topology type, and ${s}_{t}^{i}$ is the ground truth of semantic regions derived from Section 3. The overall architecture is depicted in Fig. 4. Note that in practice, it is not necessary to observe all semantic regions in one video clip. For instance, in an intersection without crosswalks, ${A}_{i}$ and ${C}_{i}$ will not be present. For a left-turn vehicle at an intersection without a crosswalk, the semantic region sequence will be $S - {B}_{1} - {T}_{1}$ . The hidden state ${\mathbf{h}}_{e}^{t}$ of the STA contains rich information about the road scene. Next, we show how to incorporate the learned representation into downstream tasks.
70
+
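+ The loss in Eq. (1) can be written compactly in code. Below is a minimal PyTorch-style sketch under the assumption that the SRP cell already produces topology logits and per-topology semantic-region logits for every observed frame; the tensor names, shapes, and the helper function itself are illustrative, not the authors' implementation.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def srp_loss(topo_logits, enc_logits, dec_logits, topo_gt, sr_gt):
+     """Loss of Eq. (1) for a single sequence.
+
+     topo_logits: (t_e, 2)                topology prediction per observed frame
+     enc_logits:  (t_e, 2, n_sr)          current-region logits from the STA (one head per topology)
+     dec_logits:  (t_e, t_d + 1, 2, n_sr) future-region logits from the decoder
+     topo_gt:     (t_e,)                  ground-truth topology type (0 or 1)
+     sr_gt:       (t_e + t_d,)            ground-truth semantic regions
+     """
+     t_e, t_d = enc_logits.shape[0], dec_logits.shape[1] - 1
+     loss = F.cross_entropy(topo_logits, topo_gt, reduction="sum")
+     for t in range(t_e):
+         i = int(topo_gt[t])              # indicator: only the head matching the topology is supervised
+         loss = loss + F.cross_entropy(enc_logits[t, i][None], sr_gt[t][None])
+         future = sum(F.cross_entropy(dec_logits[t, m, i][None], sr_gt[t + m][None])
+                      for m in range(t_d + 1))
+         loss = loss + future / t_d
+     return loss
+ ```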
71
+ § 4.2 SRP-GUIDED DRIVER INTENTION PREDICTION
72
+
73
+ We follow the definition of anticipation in [39] to define driver intention prediction. Formally, given a sequence of egocentric observations $\left\{ {{\mathbf{x}}^{1},{\mathbf{x}}^{2},\ldots ,{\mathbf{x}}^{t}}\right\}$ , our goal is to predict the future intention ${\mathbf{y}}_{\text{ int }}^{T}$ , where $T > t$ . Driver intention prediction benefits downstream applications such as risk assessment [48]. There are 5 different types of intentions in our setting (i.e., Left-turn, Straight, Right-turn, Left-lane-change, Right-lane-change). We add an intention classifier on top of the hidden state of the STA, ${\mathbf{h}}_{e}^{t}$ , in SRP,
74
+
75
+ $$
76
+ {\mathbf{y}}_{\text{ int }}^{T} = \operatorname{softmax}\left( {{\mathbf{W}}_{\text{ int }}^{\top }{\mathbf{h}}_{e}^{t} + {\mathbf{b}}_{\text{ int }}}\right) \tag{2}
77
+ $$
78
+
79
+ where ${\mathbf{W}}_{int}$ and ${\mathbf{b}}_{int}$ are the weight and bias terms of the intention classifier, respectively. We name the driver intention prediction model SRP-INT.
80
+
81
+ § 4.3 SRP-GUIDED RISK OBJECT IDENTIFICATION
82
+
83
+ The risk object identification task was first introduced in [27]. A risk object is defined as the object that influences the behavior of the ego-vehicle the most in each frame. Given an egocentric video $\left\{ {{\mathbf{x}}^{1},{\mathbf{x}}^{2},\ldots ,{\mathbf{x}}^{t}}\right\}$ , the goal of risk object identification is to output $\left\{ {{\mathbf{b}}^{1},{\mathbf{b}}^{2},\ldots ,{\mathbf{b}}^{t}}\right\}$ , where ${\mathbf{b}}^{j}$ , $j \in \left\lbrack {1,t}\right\rbrack$ , is the bounding box of the risk object in the $j$ -th frame. The authors of [27] proposed a two-stage framework to solve the problem. In the first stage, they trained an object-level manipulable model to predict the driver behavior by incorporating partial CNNs [49]. In the second stage, they iterated through the risk object candidate list and intervened on the input video to simulate scenarios without the presence of a candidate. The simulated scenarios were passed into the driver behavior model, and the object causing the maximum driving behavior change was their risk object prediction. The ego-representation in [27] plays a very important role because it captures information from the image frame as well as messages from all the objects. The representation at time $t$ , i.e., the last time step, can be written as
84
+
85
+ $$
86
+ {\mathbf{g}}_{e}^{t} = {\mathbf{g}}_{f}^{t} \oplus \frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}{\mathbf{g}}_{k}^{t} \tag{3}
87
+ $$
88
+
89
+ where ${\mathbf{g}}_{f}^{t}$ is the representation of the image frame, ${\mathbf{g}}_{k}^{t}, k \in \left\lbrack {1,N}\right\rbrack$ , is the representation of each object, $\oplus$ indicates a concatenation operation, and ${\mathbf{g}}_{e}^{t}$ is the final ego-representation in [27].
90
+
91
+ We propose SRP-ROI by fusing SRP with the model in [27]. We argue that road scene-level information can benefit the risk object identification task, and propose an SRP-guided representation:
92
+
93
+ $$
94
+ {\mathbf{g}}_{e}^{t} = \left( {{\mathbf{W}}_{ego}\left( {{\mathbf{g}}_{f}^{t} \oplus \frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}{\mathbf{g}}_{k}^{t}}\right) + {\mathbf{b}}_{ego}^{t}}\right) \oplus {\mathbf{h}}_{e}^{t} \tag{4}
95
+ $$
96
+
97
+ where ${\mathbf{W}}_{ego}$ and ${\mathbf{b}}_{ego}$ are the weights and bias terms of a fully connected layer respectively. We follow the two-stage framework in [27] and evaluate our SRP-ROI model on two challenging dynamic risk object categories: crossing vehicles and crossing pedestrians.
98
+
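+ A minimal PyTorch-style sketch of the fusion in Eq. (4) is shown below; the module and tensor names are illustrative, and the fully connected output dimension of 100 follows the implementation details in Section 5.3.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SRPGuidedEgo(nn.Module):
+     """Fuse the frame/object ego-representation of Eq. (3) with the SRP hidden state, Eq. (4)."""
+
+     def __init__(self, frame_dim, obj_dim, out_dim=100):
+         super().__init__()
+         self.fc = nn.Linear(frame_dim + obj_dim, out_dim)   # W_ego, b_ego
+
+     def forward(self, g_frame, g_objects, h_srp):
+         # g_frame:   (B, frame_dim)   frame representation g_f^t
+         # g_objects: (B, N, obj_dim)  per-object representations g_k^t
+         # h_srp:     (B, srp_dim)     SRP hidden state h_e^t
+         ego = torch.cat([g_frame, g_objects.mean(dim=1)], dim=-1)   # Eq. (3)
+         return torch.cat([self.fc(ego), h_srp], dim=-1)             # Eq. (4)
+ ```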
99
+ § 5 EXPERIMENTS
100
+
101
+ § 5.1 SEMANTIC REGION PREDICTION
102
+
103
+ Data Collection and Annotation. We collect video clips of Left-turn, Straight, Right-turn, Left-lane-change, and Right-lane-change from the HDD dataset to train our semantic region predictor. For each video clip, we manually label the topology type. Labels of semantic regions at intersections are automatically generated with the method proposed in Section 3. The semantic regions for non-intersections are annotated by humans. For each video clip, we apply a sliding-window method to obtain training samples. For each sample, we have annotations including the topology type, current, and future semantic region labels.
104
+
105
+ | Metric | Intersection: Current SR | Intersection: Future SR | Non-intersection: Current SR | Non-intersection: Future SR |
+ | --- | --- | --- | --- | --- |
+ | Micro Avg Pre | 47.0 | 52.7 | 65.3 | 62.9 |
+ | Macro Avg Pre | 20.9 | 20.3 | 50.4 | 53.8 |
+ | mAP | 26.4 | 24.5 | 51.2 | 53.8 |
+
+ Table 1: Performance of semantic region prediction. Current SR stands for current semantic region, while Future SR stands for future semantic region.
124
+
125
+ | Model | Aux | HDD Macro Avg Pre | HDD Micro Avg Pre | HDD mAP | HDD Interactive Macro Avg Pre | HDD Interactive Micro Avg Pre | HDD Interactive mAP | nuScenes Macro Avg Pre | nuScenes Micro Avg Pre | nuScenes mAP |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | LSTM [30] | - | 45.0 | 64.9 | 51.5 | 30.8 | 56.2 | 62.4 | 37.3 | 68.8 | 62.0 |
+ | LSTM-EL [39] | - | 45.0 | 65.5 | 52.4 | 29.1 | 51.8 | 60.9 | 35.6 | 62.0 | 61.0 |
+ | OadTR [51] | - | 35.9 | 24.3 | 36.3 | 48.4 | 46.9 | 54.1 | 47.8 | 64.3 | 50.7 |
+ | TRN-Tra | Tra | 45.0 | 70.8 | 47.5 | 30.9 | 59.8 | 57.2 | 35.7 | 58.8 | 58.7 |
+ | SRP-INT | SR | 55.3 | 73.8 | 57.9 | 67.0 | 70.3 | 69.5 | 41.1 | 68.3 | 66.7 |
+
+ Table 2: Quantitative results of driver intention prediction. We compare SRP-INT with baselines. Aux stands for auxiliary task. Tra and SR stand for trajectory and semantic region, respectively. All models have the same feature extractor [46].
150
+
151
+ Implementation Details and Results. We leverage ResNet50 [23] pre-trained on the Mapillary Vistas [45] dataset as the feature extractor. Our SRP takes ${t}_{e} = 3$ historical frames as input. For each frame, ${t}_{d} = 5$ future semantic regions as well as the topology type are predicted. As shown in Fig. 2a and Fig. 2b, the numbers of semantic regions in intersections and non-intersections are 13 and 5, respectively. We use the Adam optimizer [50] with default parameters, a learning rate of 0.0001, and weight decay of 0.0005. The model is trained for 60 epochs with the loss function in Eq. (1). The performance is shown in Table 1. Macro Average Precision, Micro Average Precision, and mAP are chosen as the evaluation metrics.
152
+
153
+ § 5.2 DRIVER INTENTION PREDICTION
154
+
155
+ Testing Data and Experiment Setup. After training SRP on the video clips described in Section 5.1, we further use the intention labels to train the intention classifier. Details are provided in the supplementary materials. We evaluate driver intention prediction models on both the HDD [33] test set and the nuScenes [34] dataset. Note that in HDD, there is no overlap between the training data and the test data. We evaluate models on 1438 sequences in HDD (including 393 interactive scenarios) and 221 sequences in nuScenes. We use the same evaluation metrics as in Section 5.1.
156
+
157
+ Baselines and Comparisons. We implement several baselines with the same image feature extractor as the proposed SRP-INT. LSTM [30] is a general-purpose sequential modeling method. OadTR [51] takes advantage of the popular Transformer architecture [52] and is a competitive online/real-time action recognition model. We implement LSTM with Exponential Loss (LSTM-EL), as [39] shows the effectiveness of the Exponential Loss for driver intention prediction. We also modify TRN [47] to predict trajectories (similar to [53]) and use the learned representation for intention prediction (TRN-Tra). As shown in Table 2, we demonstrate favorable performance on both datasets, which empirically proves the effectiveness of our framework. Qualitative results are presented in the supplementary materials.
158
+
159
+ | Model | Crossing Vehicle: Acc 0.5 | Crossing Vehicle: Acc 0.75 | Crossing Vehicle: mAcc | Crossing Pedestrian: Acc 0.5 | Crossing Pedestrian: Acc 0.75 | Crossing Pedestrian: mAcc |
+ | --- | --- | --- | --- | --- | --- | --- |
+ | [27] (paper) | 49.2 | 48.6 | 43.0 | 35.7 | 32.1 | 27.0 |
+ | [27] (our implementation) | 49.2 | 48.2 | 42.7 | 33.3 | 29.8 | 26.2 |
+ | SRP-ROI | 51.8 | 51.1 | 45.1 | 42.9 | 39.3 | 33.3 |
+
+ Table 3: Quantitative results of risk object identification. We evaluate risk object identification models on two risk object categories: Crossing Vehicle and Crossing Pedestrian.
178
+
179
+ § 5.3 RISK OBJECT IDENTIFICATION
180
+
181
+ Experimental Setup and Evaluation. We follow the experimental setup in [27] and train separate models on two challenging dynamic risk object categories: Crossing Vehicle and Crossing Pedestrian. Following [27], we evaluate our models by calculating the IoU between the predicted risk object and the ground truth. We report accuracy at IoU thresholds of 0.5 and 0.75, as well as the mean accuracy (mAcc).
182
+
183
+ Implementation Details. We utilize Mask R-CNN [54] and DeepSORT [9] to compute tracking proposals of risk object candidates. The pre-trained semantic region representation is fused with the ego representation of [27] after passing through a fully connected layer. In practice, the output dimension of the fully connected layer is 100. In this stage, we train the model using the Adam [50] optimizer with default parameters, a learning rate of 0.0001, and weight decay of 0.0001. The model is trained for 20 epochs. After training, we follow the inference procedure in [27] to obtain the bounding boxes of the risk object in each frame. We do not apply any heuristics to remove objects from the tracking proposals, and models are trained separately for each category.
184
+
185
+ Quantitative Results. We compare our method with [27]. The quantitative results show that our model achieves better performance, which demonstrates that semantic region prediction helps risk object identification. Qualitative results are presented in the supplementary materials.
186
+
187
+ § 6 LIMITATIONS
188
+
189
+ Although we have shown the effectiveness of our proposed representation, some limitations require further exploration. First, our proposed semantic regions cannot be applied to complicated topologies such as roundabouts; a more comprehensive definition of semantic regions is desired. Second, learning semantic regions from egocentric view images alone is challenging, and the performance of semantic region prediction at intersections is unsatisfactory. To improve the performance, we could consider incorporating a Bird-Eye-View representation [55]. Third, we have not truly associated images with semantic regions. Instead of predicting the label of semantic regions, we could consider an encoder-decoder based model to predict the current/future scene representations [56].
190
+
191
+ § 7 CONCLUSION
192
+
193
+ In this work, we study the problem of road scene-level representation learning from egocentric videos for driver intention prediction and risk object identification. We propose a novel representation called semantic region, which aims to capture higher-level semantic and geometric representations of traffic scenes around ego-vehicles while performing actions to their destination. We cast representation learning as semantic region prediction and propose an automatic semantic region labeling algorithm for egocentric videos collected at intersections. We demonstrate the effectiveness of the learned representation on real-world datasets, i.e., HDD and nuScenes. In particular, the learned representation generalizes to unseen data (i.e., the nuScenes dataset) without finetuning on the driver intention prediction task. We hope that our findings will pave the way for further advances in road scene-level representation learning from egocentric views for downstream tasks such as planning and decision making.
papers/CoRL/CoRL 2022/CoRL 2022 Conference/ED0G14V3WeH/Initial_manuscript_md/Initial_manuscript.md ADDED
@@ -0,0 +1,291 @@
1
+ # Data-Efficient Model Learning for Control with Jacobian-Regularized Dynamic Mode Decomposition
2
+
3
+ Anonymous Author(s)
4
+
5
+ Affiliation
6
+
7
+ Address
8
+
9
+ email
10
+
11
+ Abstract: We present a novel algorithm for learning Koopman models of controlled nonlinear dynamical systems from data based on Dynamic-Mode Decomposition (DMD). Our approach, Jacobian-Regularized DMD (JDMD), offers dramatically improved sample efficiency over existing DMD-based algorithms by leveraging Jacobian information from an approximate prior model of the system. We demonstrate JDMD's ability to quickly learn bilinear Koopman dynamics representations across several realistic examples in simulation, including a quadrotor and a perching fixed-wing aircraft. In all cases, we show that the models learned by JDMD provide superior tracking and generalization performance in a model-predictive control framework when compared to both the approximate prior models used in training and models learned by standard extended DMD.
12
+
13
+ ## 1 Introduction
14
+
15
+ In recent years, both model-based optimal control [1, 2, 3, 4] and data-driven reinforcement-learning methods [5, 6, 7] have demonstrated impressive successes on complex, nonlinear robotic systems. However, both approaches suffer from inherent drawbacks: data-driven methods often require extremely large amounts of data and fail to generalize outside of the domain or task on which they were trained. On the other hand, model-based methods require an accurate model of the system to achieve good performance. In many cases, high-fidelity models can be too difficult to construct from first principles or too computationally expensive to be of practical use. However, low-order approximate models that can be evaluated cheaply, at the expense of controller performance, are often available. With this in mind, we seek a middle ground between model-based and data-driven approaches in this work.
16
+
17
+ We propose a method for learning bilinear Koopman models of nonlinear dynamical systems for use in model-predictive control that leverages information from an approximate prior dynamics model of the system in the training process. Our new algorithm builds on extended Dynamic Mode Decomposition (EDMD), which learns Koopman models from trajectory data [8, 9, 10, 11, 12], by adding a derivative regularization term based on derivatives computed from a prior model. We show that this new algorithm, Jacobian-regularized Dynamic Mode Decomposition (JDMD), can learn models with dramatically fewer samples than EDMD, even when the prior model differs significantly from the true dynamics of the system. We also demonstrate the effectiveness of these learned models in a model-predictive control (MPC) framework. The result is a fast, robust, and sample-efficient pipeline for quickly training a model that can outperform previous Koopman-based MPC approaches as well as purely model-based controllers that do not leverage data collected from the actual system.
18
+
19
+ Our work is most closely related to the recent work of Folkestad et al. [11, 13, 14], which learns bilinear models and applies nonlinear model-predictive control directly on the learned bilinear dynamics. Other recent works have combined linear Koopman models with model-predictive control [10] and Lyapunov control techniques with bilinear Koopman models [15]. Our contributions are:
20
+
21
+ - A novel extension to extended dynamic mode decomposition, called JDMD, that incorporates gradient information from an approximate analytic model
22
+
23
+ - A recursive, batch QR algorithm for solving the least-squares problems that arise when learning bilinear dynamical systems using DMD-based algorithms, including JDMD and EDMD
24
+
25
+ - A simple linear MPC control technique for learned bilinear control systems that is computationally efficient and, when combined with JDMD, requires very little training data to achieve good performance
26
+
27
+ The remainder of the paper is organized as follows: In Section 2 we provide some background on the application of Koopman operator theory to controlled dynamical systems and review some related works. Section 3 then describes the proposed JDMD algorithm. In Section 4 we outline a memory-efficient technique for solving the large, sparse linear least-squares problems that arise when applying JDMD and other DMD-based algorithms. Next, in Section 5, we propose an efficient model-predictive control technique that utilizes the learned bilinear models produced by JDMD. Section 6 then provides simulation results and analysis of the proposed algorithm applied to control tasks on a cartpole, a quadrotor, and a small foam airplane, all subject to significant model mismatch. In Section 7 we discuss the limitations of our approach, followed by some concluding remarks in Section 8.
28
+
29
+ ## 2 Background and Related Work
30
+
31
+ ### 2.1 Koopman Operator Theory
32
+
33
+ The theoretical underpinnings of the Koopman operator and its application to dynamical systems have been extensively studied [16, 17, 9, 18]. Rather than describe the theory in detail, we highlight the key concepts employed by the current work and refer the reader to the existing literature on Koopman theory for further details.
34
+
35
+ We start by assuming a controlled, nonlinear, discrete-time dynamical system,
36
+
37
+ $$
38
+ {x}^{ + } = f\left( {x, u}\right) , \tag{1}
39
+ $$
40
+
41
+ where $x \in \mathcal{X} \subseteq {\mathbb{R}}^{{N}_{x}}$ is the state vector, $u \in {\mathbb{R}}^{{N}_{u}}$ is the control vector, and ${x}^{ + }$ is the state at the next time step. The key idea behind the Koopman operator is that the nonlinear finite-dimensional dynamics (1) can be represented exactly by an infinite-dimensional bilinear system of the form,
42
+
43
+ $$
44
+ {y}^{ + } = {Ay} + {Bu} + \mathop{\sum }\limits_{{i = 1}}^{m}{u}_{i}{C}_{i}y = g\left( {y, u}\right) , \tag{2}
45
+ $$
46
+
47
+ where $y = \phi \left( x\right)$ is a nonlinear mapping from the finite-dimensional state space $\mathcal{X}$ to the infinite-dimensional Hilbert space of observables $\mathcal{Y}$ . In practice, we approximate (2) by restricting $\mathcal{Y}$ to be a finite-dimensional vector space, in which case $\phi$ becomes a finite-dimensional nonlinear function of the state variables that must be chosen by the user.
48
+
49
+ Intuitively, $\phi$ "lifts" our state $x$ into a higher dimensional space $\mathcal{Y}$ where the dynamics are approximately (bi)linear, effectively trading dimensionality for (bi)linearity. Similarly, we can perform the inverse operation by projecting a lifted state $y$ back into the original state space $\mathcal{X}$ . In this work, we will assume that $\phi$ is constructed in such a way that this inverse mapping is linear:
50
+
51
+ $$
52
+ x = {Gy} \tag{3}
53
+ $$
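+ As a concrete illustration of (2) and (3), the sketch below evaluates one step of a finite-dimensional bilinear Koopman model and projects the lifted state back to the original state space; the matrices and the lifting function are assumed to be given, and all names are illustrative.
+
+ ```python
+ import numpy as np
+
+ def bilinear_step(A, B, C, G, phi, x, u):
+     """One step of the lifted bilinear dynamics (2), projected back with (3).
+
+     A: (Ny, Ny), B: (Ny, Nu), C: list of Nu matrices of shape (Ny, Ny),
+     G: (Nx, Ny), phi: lifting function x -> y, x: (Nx,), u: (Nu,).
+     """
+     y = phi(x)
+     y_next = A @ y + B @ u + sum(u[i] * (C[i] @ y) for i in range(len(u)))
+     return G @ y_next          # project the lifted state back to x
+ ```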
54
+
55
+ ### 2.2 Extended Dynamic Mode Decomposition
56
+
57
+ A lifted bilinear system of the form (2) can be learned from $P$ samples of the system dynamics $\left( {{x}_{j}^{ + },{x}_{j},{u}_{j}}\right)$ using Extended Dynamic Mode Decomposition (EDMD) [18, 13]. We first define the following data matrices:
60
+
61
+ $$
62
+ {Z}_{1 : P} = \left\lbrack \begin{matrix} {y}_{1} & {y}_{2} & \ldots & {y}_{P} \\ {u}_{1} & {u}_{2} & \ldots & {u}_{P} \\ {u}_{1,1}{y}_{1} & {u}_{2,1}{y}_{2} & \ldots & {u}_{P,1}{y}_{P} \\ \vdots & \vdots & \ddots & \vdots \\ {u}_{1, m}{y}_{1} & {u}_{2, m}{y}_{2} & \ldots & {u}_{P, m}{y}_{P} \end{matrix}\right\rbrack ,\;{Y}_{1 : P}^{ + } = \left\lbrack \begin{array}{llll} {y}_{1}^{ + } & {y}_{2}^{ + } & \ldots & {y}_{P}^{ + } \end{array}\right\rbrack , \tag{4}
63
+ $$
64
+
65
+ We then concatenate all of the model coefficient matrices as follows:
66
+
67
+ $$
68
+ E = \left\lbrack \begin{array}{lllll} A & B & {C}_{1} & \ldots & {C}_{m} \end{array}\right\rbrack \in {\mathbb{R}}^{{N}_{y} \times {N}_{z}}, \tag{5}
69
+ $$
70
+
71
+ The model learning problem can then be written as the following linear least-squares problem:
72
+
73
+ $$
74
+ \mathop{\operatorname{minimize}}\limits_{E}{\begin{Vmatrix}E{Z}_{1 : P} - {Y}_{1 : P}^{ + }\end{Vmatrix}}_{2}^{2} \tag{6}
75
+ $$
76
+
77
+ ## 3 Jacobian-Regularized Dynamic Mode Decomposition
78
+
79
+ We now present JDMD as a straightforward adaptation of the original EDMD algorithm described in Section 2.2. Given $P$ samples of the dynamics $\left( {{x}_{i}^{ + },{x}_{i},{u}_{i}}\right)$ , and an approximate discrete-time dynamics model,
80
+
81
+ $$
82
+ {x}^{ + } = \widetilde{f}\left( {x, u}\right) , \tag{7}
83
+ $$
84
+
85
+ we can evaluate the Jacobians of our approximate model $\widetilde{f}$ at each of the sample points: ${\widetilde{A}}_{i} =$ $\frac{\partial \widetilde{f}}{\partial x},{\widetilde{B}}_{i} = \frac{\partial \widetilde{f}}{\partial u}$ . After choosing a nonlinear mapping $\phi : {\mathbb{R}}^{{N}_{x}} \mapsto {\mathbb{R}}^{{N}_{y}}$ our goal is to find a bilinear dynamics model (2) that matches the Jacobians of our approximate model, while also matching our dynamics samples. We accomplish this by penalizing differences between the Jacobians of our learned bilinear model with respect to the original states $x$ and controls $u$ , and the Jacobians we expect from our analytical model. These projected Jacobians are calculated by differentiating through the projected dynamics:
86
+
87
+ $$
88
+ {x}^{ + } = G\left( {{A\phi }\left( x\right) + {Bu} + \mathop{\sum }\limits_{{i = 1}}^{m}{u}_{i}{C}_{i}\phi \left( x\right) }\right) = \bar{f}\left( {x, u}\right) . \tag{8}
89
+ $$
90
+
91
+ Differentiating (8) with respect to $x$ and $u$ gives us
92
+
93
+ $$
94
+ {\bar{A}}_{j} = \frac{\partial \bar{f}}{\partial x}\left( {{x}_{j},{u}_{j}}\right) = G\left( {A + \mathop{\sum }\limits_{{i = 1}}^{m}{u}_{j, i}{C}_{i}}\right) \Phi \left( {x}_{j}\right) = {GE}\widehat{A}\left( {{x}_{j},{u}_{j}}\right) = {GE}{\widehat{A}}_{j} \tag{9a}
95
+ $$
96
+
97
+ $$
98
+ {\bar{B}}_{j} = \frac{\partial \bar{f}}{\partial u}\left( {{x}_{j},{u}_{j}}\right) = G\left( {B + \left\lbrack \begin{array}{lll} {C}_{1}\phi \left( {x}_{j}\right) & \ldots & {C}_{m}\phi \left( {x}_{j}\right) \end{array}\right\rbrack }\right) = {GE}\widehat{B}\left( {{x}_{j},{u}_{j}}\right) = {GE}{\widehat{B}}_{j} \tag{9b}
99
+ $$
100
+
101
+ where $\Phi \left( x\right) = \partial \phi /\partial x$ is the Jacobian of the nonlinear map $\phi$ , and
102
+
103
+ $$
104
+ \widehat{A}\left( {x, u}\right) = \left\lbrack \begin{matrix} {I}_{{N}_{y}} \\ 0 \\ {u}_{1}{I}_{{N}_{y}} \\ {u}_{2}{I}_{{N}_{y}} \\ \vdots \\ {u}_{m}{I}_{{N}_{y}} \end{matrix}\right\rbrack \Phi \left( x\right) \in {\mathbb{R}}^{{N}_{z} \times {N}_{x}},\;\widehat{B}\left( {x, u}\right) = \left\lbrack \begin{matrix} 0 \\ {I}_{{N}_{u}} \\ \left\lbrack \phi \left( x\right) \;0\;\cdots \;0\right\rbrack \\ \left\lbrack 0\;\phi \left( x\right) \;\cdots \;0\right\rbrack \\ \vdots \\ \left\lbrack 0\;0\;\cdots \;\phi \left( x\right) \right\rbrack \end{matrix}\right\rbrack \in {\mathbb{R}}^{{N}_{z} \times {N}_{u}}. \tag{10}
105
+ $$
106
+
107
+ We then solve the following linear least-squares problem:
108
+
109
+ $$
110
+ \mathop{\operatorname{minimize}}\limits_{E}\left( {1 - \alpha }\right) {\begin{Vmatrix}E{Z}_{1 : P} - {Y}_{1 : P}^{ + }\end{Vmatrix}}_{2}^{2} + \alpha \mathop{\sum }\limits_{{j = 1}}^{P}\left( {{\begin{Vmatrix}GE{\widehat{A}}_{j} - {\widetilde{A}}_{j}\end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}GE{\widehat{B}}_{j} - {\widetilde{B}}_{j}\end{Vmatrix}}_{2}^{2}}\right) \tag{11}
111
+ $$
112
+
113
+ The resulting linear least-squares problem has $\left( {{N}_{y} + {N}_{x}^{2} + {N}_{x} \cdot {N}_{u}}\right) \cdot P$ rows and ${N}_{y} \cdot {N}_{z}$ columns.
114
+
115
+ Given that the number of rows in this problem grows quadratically with the state dimension, solving it can be challenging from a computational perspective. In Section 4, we propose an algorithm for solving these problems without needing a distributed-memory setup for the large linear systems. The proposed method also provides a straightforward way to approach incremental updates of the bilinear system, where the coefficients could be efficiently learned "live" while the robot gathers data by moving through its environment.
116
+
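+ For completeness, the sketch below stacks the vectorized dynamics and Jacobian residuals of (11) into a single least-squares problem and solves it densely; it assumes the lifted data matrices, the matrices $\widehat{A}_j$ and $\widehat{B}_j$ of (10), the prior-model Jacobians $\widetilde{A}_j$ and $\widetilde{B}_j$ , and the projection $G$ are precomputed, and it is meant to convey the structure of the problem rather than the memory-efficient solver of Section 4.
+
+ ```python
+ import numpy as np
+
+ def jdmd_fit(Z, Yp, A_hats, B_hats, A_tils, B_tils, G, alpha):
+     """Solve the JDMD problem (11) with a dense least-squares solver.
+
+     Z: (Nz, P) lifted data matrix of Eq. (4);  Yp: (Ny, P) lifted next states.
+     A_hats[j]: (Nz, Nx), B_hats[j]: (Nz, Nu) as in Eq. (10).
+     A_tils[j]: (Nx, Nx), B_tils[j]: (Nx, Nu) prior-model Jacobians.
+     G: (Nx, Ny) projection of Eq. (3);  alpha: Jacobian-regularization weight.
+     Returns E = [A B C_1 ... C_m].
+     """
+     Ny, Nz = Yp.shape[0], Z.shape[0]
+     rows, rhs = [], []
+     w_dyn, w_jac = np.sqrt(1.0 - alpha), np.sqrt(alpha)
+     for j in range(Z.shape[1]):
+         # dynamics residual:  E z_j - y_j^+   ->   (z_j^T kron I) vec(E)
+         rows.append(w_dyn * np.kron(Z[:, j][None, :], np.eye(Ny)))
+         rhs.append(w_dyn * Yp[:, j])
+         # Jacobian residuals: G E A_hat_j - A_til_j  and  G E B_hat_j - B_til_j
+         rows.append(w_jac * np.kron(A_hats[j].T, G))
+         rhs.append(w_jac * A_tils[j].flatten(order="F"))
+         rows.append(w_jac * np.kron(B_hats[j].T, G))
+         rhs.append(w_jac * B_tils[j].flatten(order="F"))
+     F = np.vstack(rows)
+     d = np.concatenate(rhs)
+     vecE, *_ = np.linalg.lstsq(F, d, rcond=None)
+     return vecE.reshape(Ny, Nz, order="F")   # column-major vec convention
+ ```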
117
+ ![01963f5c-6360-7672-bc66-e28d7961f2ee_3_307_193_1184_199_0.jpg](images/01963f5c-6360-7672-bc66-e28d7961f2ee_3_307_193_1184_199_0.jpg)
118
+
119
+ Figure 1: Airplane perching trajectory, a high angle-of-attack maneuver that minimizes velocity at the goal position
120
+
121
+ ## 4 Efficient Recursive Least Squares
122
+
123
+ In its canonical formulation, a linear least squares problem can be represented as the following unconstrained optimization problem:
124
+
125
+ $$
126
+ \mathop{\min }\limits_{x}\parallel {Fx} - d{\parallel }_{2}^{2}. \tag{12}
127
+ $$
128
+
129
+ We assume $F$ is a large, sparse matrix and that solving it directly using a QR or Cholesky decomposition requires too much memory for a single computer. While solving (12) using an iterative method such as LSMR [19] or LSQR [20] is possible, we find that these methods do not work well in practice for solving (11) due to ill-conditioning. Standard recursive methods for solving these problems are able to process the rows of the matrices sequentially to build a QR decomposition of the full matrix, but also tend to suffer from ill-conditioning [21, 22, 23].
130
+
131
+ To overcome these issues, we propose an alternative recursive method based on a batched QR decomposition. We solve (12) by dividing the rows of $F$ into batches:
132
+
133
+ $$
134
+ {F}^{T}F = {F}_{1}^{T}{F}_{1} + {F}_{2}^{T}{F}_{2} + \ldots + {F}_{N}^{T}{F}_{N}. \tag{13}
135
+ $$
136
+
137
+ The main idea is to maintain and update an upper-triangular Cholesky factor ${U}_{i}$ of the first $i$ terms of the sum (13). Given ${U}_{i}$ , we can calculate ${U}_{i + 1}$ using the QR decomposition, as shown in [24]:
138
+
139
+ $$
140
+ {U}_{i + 1} = \sqrt{{U}_{i}^{T}{U}_{i} + {F}_{i + 1}^{T}{F}_{i + 1}} = {\mathrm{{QR}}}_{\mathrm{R}}\left( \left\lbrack \begin{matrix} {U}_{i} \\ {F}_{i + 1} \end{matrix}\right\rbrack \right) , \tag{14}
141
+ $$
142
+
143
+ where ${\mathrm{{QR}}}_{\mathrm{R}}$ returns the upper triangular matrix $R$ from the $\mathrm{{QR}}$ decomposition. For an efficient implementation, this function should be an "economy" or "Q-less" QR decomposition since the $Q$ matrix is never needed.
144
+
145
+ We also handle regularization of the normal equations, equivalent to adding Tikhonov regularization to the original least squares problem, during the base case of our recursion. If we want to add an L2 regularization with weight $\lambda$ , we calculate ${U}_{1}$ as:
146
+
147
+ $$
148
+ {U}_{1} = {\mathrm{{QR}}}_{\mathrm{R}}\left( {\left\lbrack \begin{matrix} {F}_{1} \\ \sqrt{\lambda }I \end{matrix}\right\rbrack .}\right) \tag{15}
149
+ $$
150
+
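+ A compact sketch of the recursive update in (13)-(14), with the regularized base case folded in, is given below; it assumes the rows of $F$ and $d$ arrive in batches, uses NumPy's Q-less QR (`mode="r"`), and the function name and the final two-triangular-solve step are ours.
+
+ ```python
+ import numpy as np
+ from scipy.linalg import solve_triangular
+
+ def recursive_lstsq(F_batches, d_batches, lam=0.0):
+     """Solve min_x ||F x - d||^2 + lam * ||x||^2, processing F in row batches.
+
+     Only the n-by-n upper-triangular factor U_i of Eq. (14) and the accumulated
+     vector F^T d are kept in memory, regardless of the total number of rows.
+     """
+     n = F_batches[0].shape[1]
+     # Base case: fold the Tikhonov term sqrt(lam) * I into the first factor.
+     U = np.linalg.qr(np.vstack([F_batches[0], np.sqrt(lam) * np.eye(n)]), mode="r")
+     rhs = F_batches[0].T @ d_batches[0]
+     for Fi, di in zip(F_batches[1:], d_batches[1:]):
+         U = np.linalg.qr(np.vstack([U, Fi]), mode="r")   # Q-less QR update, Eq. (14)
+         rhs = rhs + Fi.T @ di
+     # U^T U = F^T F + lam * I, so solve the normal equations with two triangular solves.
+     y = solve_triangular(U.T, rhs, lower=True)
+     return solve_triangular(U, y, lower=False)
+ ```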
151
+ ## 5 Projected Bilinear MPC
152
+
153
+ We propose a simple approach to model-predictive control for the bilinear systems learned using either classic EDMD or the proposed JDMD approach. The key idea is to use the projected Jacobians $\bar{A}$ and $\bar{B}$ in (9), effectively reducing the problem to a standard linear MPC problem in the original
154
+
155
+ state space instead of the larger, lifted one. In all of the examples in the following section, our MPC controller solves the following convex Quadratic Program (QP):
156
+
157
+ $$
+ \begin{aligned} \mathop{\operatorname{minimize}}\limits_{{{x}_{1 : N},{u}_{1 : N - 1}}}\; & \frac{1}{2}{x}_{N}^{T}{Q}_{N}{x}_{N} + \frac{1}{2}\mathop{\sum }\limits_{{k = 1}}^{{N - 1}}{x}_{k}^{T}{Q}_{k}{x}_{k} + {u}_{k}^{T}{R}_{k}{u}_{k} \\ \text{subject to}\; & {x}_{k + 1} = {\bar{A}}_{k}{x}_{k} + {\bar{B}}_{k}{u}_{k} + {d}_{k}, \\ & {x}_{1} = {x}_{\text{init}} \end{aligned} \tag{16}
+ $$
168
+
169
+ where $x$ and $u$ are defined as deviations ("deltas") from the reference trajectory ${\bar{x}}_{1 : N},{\bar{u}}_{1 : N - 1}$ . The affine dynamics term ${d}_{k} = f\left( {{\bar{x}}_{k},{\bar{u}}_{k}}\right) - {\bar{x}}_{k + 1}$ allows for dynamically infeasible reference trajectories. The projected Jacobians can be efficiently calculated from the bilinear dynamics either offline or online, and since the problem dimension is the same as that of the linear MPC problem for the original dynamics, it is no more expensive to compute. This formulation also makes it trivial to enforce additional control or path constraints, and avoids the need to regularize or otherwise constrain the lifted states.
170
+
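+ The sketch below first forms the projected Jacobians of (9) around a reference point and then solves the QP (16) with CVXPY; a practical controller would use a dedicated sparse QP solver in a receding-horizon loop, and the function names, horizon handling, and solver choice here are ours.
+
+ ```python
+ import numpy as np
+ import cvxpy as cp
+
+ def projected_jacobians(A, B, C, G, Phi, phi, x_ref, u_ref):
+     """Projected Jacobians of Eq. (9) at a reference state/control pair.
+     Phi(x) is the Jacobian of the lifting function phi at x."""
+     y = phi(x_ref)
+     Abar = G @ (A + sum(u_ref[i] * C[i] for i in range(len(u_ref)))) @ Phi(x_ref)
+     Bbar = G @ (B + np.column_stack([C[i] @ y for i in range(len(u_ref))]))
+     return Abar, Bbar
+
+ def projected_mpc_step(Abars, Bbars, ds, Q, R, QN, x_init):
+     """Solve the linear MPC QP of Eq. (16) in the original state space."""
+     N = len(Abars) + 1
+     nx, nu = Bbars[0].shape
+     x = cp.Variable((N, nx))
+     u = cp.Variable((N - 1, nu))
+     cost = 0.5 * cp.quad_form(x[N - 1], QN)
+     constraints = [x[0] == x_init]
+     for k in range(N - 1):
+         cost += 0.5 * (cp.quad_form(x[k], Q) + cp.quad_form(u[k], R))
+         constraints.append(x[k + 1] == Abars[k] @ x[k] + Bbars[k] @ u[k] + ds[k])
+     cp.Problem(cp.Minimize(cost), constraints).solve()
+     return u.value[0]   # apply the first delta-control, then re-solve at the next step
+ ```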
171
+ ## 6 Experimental Results
172
+
173
+ This section presents the results of several simulation experiments to evaluate the performance of JDMD. For each simulated system we specify two models: a nominal model, which is simplified and contains both parametric and non-parametric model error, and a true model, which is used exclusively for simulating the system and evaluating algorithm performance.
174
+
175
+ All models were trained by simulating the "true" system with a nominal controller to collect data in the region of the state space relevant to the task. A set of fixed-length trajectories were collected, each at a sample rate of ${20} - {25}\mathrm{\;{Hz}}$ . The bilinear EDMD model was trained using the same approach introduced by Folkestad and Burdick [13]. All continuous dynamics were discretized with an explicit fourth-order Runge Kutta integrator. Code for all experiments is available at TODO: removed for anonymous review.
176
+
177
+ ### 6.1 Systems and Tasks
178
+
179
+ Cartpole: We perform a swing-up task on a cartpole system. The true model includes Coulomb friction between the cart and the floor, viscous damping at both joints, and a deadband in the control input that were not included in the nominal model. Additionally, the masses of the cart and pole were altered by ${20}\%$ and ${25}\%$ with respect to the nominal model, respectively. The following nonlinear mapping was used when learning the bilinear models: $\phi \left( x\right) =$ $\left\lbrack {1, x,\sin \left( x\right) ,\cos \left( x\right) ,\sin \left( {2x}\right) ,\sin \left( {4x}\right) ,{T}_{2}\left( x\right) ,{T}_{3}\left( x\right) ,{T}_{4}\left( x\right) }\right\rbrack \in {\mathbb{R}}^{33}$ , where ${T}_{i}\left( x\right)$ is a Chebyshev polynomial of the first kind of order $i$ . All reference trajectories for the swing-up task were generated using ALTRO [24, 25].
180
+
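+ As an example of the basis construction, a sketch of the cartpole lifting function is given below, under the assumption that the listed terms are applied elementwise to the 4-dimensional cartpole state (which yields the stated 33 dimensions); the exact basis used in the experiments may differ.
+
+ ```python
+ import numpy as np
+ from numpy.polynomial import chebyshev as cheb
+
+ def phi_cartpole(x):
+     """Elementwise lifting for the cartpole: 1 + 8 blocks of 4 features = 33 dims."""
+     T = lambda k: cheb.chebval(x, [0] * k + [1])    # Chebyshev polynomial T_k, elementwise
+     return np.concatenate([[1.0], x, np.sin(x), np.cos(x),
+                            np.sin(2 * x), np.sin(4 * x), T(2), T(3), T(4)])
+ ```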
181
+ Quadrotor: We attempt to track point-to-point linear reference trajectories from various initial conditions on both planar and full 3D quadrotor models. For both systems, the true model includes aerodynamic drag terms not included in the nominal model, as well as parametric error of roughly $5\%$ on the system parameters (e.g., mass, rotor arm length, etc.). The planar model was trained using a nonlinear mapping of $\phi \left( x\right) = \left\lbrack {1, x,\sin \left( x\right) ,\cos \left( x\right) ,\sin \left( {2x}\right) ,{T}_{2}\left( x\right) }\right\rbrack \in$ ${\mathbb{R}}^{25}$ while the full quadrotor model was trained using a nonlinear mapping of $\phi \left( x\right) =$ $\left\lbrack {1, x,{T}_{2}\left( x\right) ,\sin \left( p\right) ,\cos \left( p\right) ,{R}^{T}v,{v}^{T}R{R}^{T}v, p \times v, p \times \omega ,\omega \times \omega }\right\rbrack \in {\mathbb{R}}^{44}$ , where $p$ is the quadrotor’s position, $v$ and $\omega$ are the translational and angular velocities respectively, and $R$ is the rotation matrix.
182
+
183
+ Airplane: We perform a post-stall perching maneuver on a high-fidelity model of an airplane constructed from wind-tunnel data [26]. A demonstration perching trajectory was produced using trajectory optimization (see Figure 1) and then tracked using MPC. The nominal model uses a simple flat-plate wing model with linear lift and quadratic drag coefficient approximations. The bilinear models use a 68-dimensional nonlinear mapping $\phi$ including terms such as the rotation matrix (expressed in terms of a Modified Rodrigues Parameter), powers of the angle of attack and sideslip angle, the body-frame velocity, various cross products with the angular velocity, and some 3rd- and 4th-order Chebyshev polynomials of the states.
184
+
185
+ ![01963f5c-6360-7672-bc66-e28d7961f2ee_5_312_207_1177_392_0.jpg](images/01963f5c-6360-7672-bc66-e28d7961f2ee_5_312_207_1177_392_0.jpg)
186
+
187
+ Figure 2: MPC tracking error vs training trajectories for both the cartpole (left) and airplane (right). Tracking error is defined as the average L2 error over all the test trajectories between the reference and simulated trajectories.
188
+
189
+ ![01963f5c-6360-7672-bc66-e28d7961f2ee_5_309_735_1182_464_0.jpg](images/01963f5c-6360-7672-bc66-e28d7961f2ee_5_309_735_1182_464_0.jpg)
190
+
191
+ Figure 3: Generalizability with respect to initial conditions sampled outside of the training domain. The initial conditions are sampled from a uniform distribution, whose limits are determined by a scaling of the limits used for the training distribution. A training range fraction greater than 1 indicates the distribution range is beyond that used to generate the training trajectories. The thick lines represent the algorithm with a heavy regularization parameter.
192
+
193
+ ### 6.2 Sample Efficiency
194
+
195
+ We highlight the sample efficiency of the proposed algorithm in Figure 2. For both the cartpole swing up and the airplane perch trajectory tracking tasks, the proposed method achieves better tracking than the nominal MPC controller with just two sample trajectories, and performs better than EDMD on both trajectory tracking tasks. To achieve comparable performance on the perching task, EDMD requires about 4x the number of samples (20 vs 5) compared to the proposed approach.
196
+
197
+ ### 6.3 Generalization
198
+
199
+ We demonstrate the generalizability of the proposed method on both the planar and 3D quadrotor. In all tasks, the goal is to return to the origin, given an initial condition sampled from a uniform distribution centered at the origin. To test the generalizability of the algorithms, we scale the size of the sampling "window" relative to the window on which the model was trained, e.g., if the initial lateral position was trained on data in the interval $\left\lbrack {-{1.5}, + {1.5}}\right\rbrack$ , we sample the test initial condition from the window $\left\lbrack {-{1.5}\gamma , + {1.5}\gamma }\right\rbrack$ . The results for the planar quadrotor are shown in Figure 3b. With a well-chosen regularization value, JDMD generalizes past where the performance of EDMD suffers. Additionally, in Figure 3a we show the effect of moving the equilibrium position away from the origin: while the true dynamics should be invariant to this change, EDMD fails to learn this whereas JDMD does.
200
+
201
+ <table><tr><td/><td>Nominal</td><td>EDMD</td><td>JDMD</td></tr><tr><td>Tracking Err.</td><td>0.30</td><td>0.63</td><td>0.11</td></tr><tr><td>Success Rate</td><td>82%</td><td>18%</td><td>80%</td></tr></table>
202
+
203
+ Table 1: Performance summary of MPC tracking of 6-DOF quadrotor
204
+
205
+ ![01963f5c-6360-7672-bc66-e28d7961f2ee_6_316_203_1175_536_0.jpg](images/01963f5c-6360-7672-bc66-e28d7961f2ee_6_316_203_1175_536_0.jpg)
206
+
207
+ Figure 4: Generalizability with respect to initial conditions sampled outside of the training domain. The initial conditions are sampled from a uniform distribution, whose limits are determined by a scaling of the limits used for the training distribution. A training range fraction greater than 1 indicates the distribution range is beyond that used to generate the training trajectories. The thick lines represent the algorithm with a heavy regularization parameter.
208
+
209
+ For the full quadrotor, given the goal of tracking a straight line back to the origin, we test 50 initial conditions, many of which are far from the goal, have large velocities, or are nearly inverted (see Figure 4a). The results using an MPC controller are shown in Table 1, demonstrating the excellent generalizability of the algorithm, given that the algorithm was only trained on 30 initial conditions, sampled relatively sparsely given the size of the sampling window. EDMD only successfully brings about ${18}\%$ of the samples to the origin, while the majority of the time resulting in trajectories like those in Figure 4b.
210
+
211
+ ### 6.4 Lifted versus Projected MPC
212
+
213
+ We performed a simple experiment to highlight the value of the proposed "projected" MPC outlined in Section 5. We trained EDMD and JDMD models with an increasing number of training trajectories and recorded the first sample size at which the "lifted" and "projected" MPC controllers consistently stabilized the system (i.e., stabilized 95% of the test initial conditions for the cartpole system at that sample size and all subsequent ones). The results are summarized in Table 2. They quantitatively show what we qualitatively observed while training and testing these examples: the projected MPC approach usually required far fewer samples to "train" and usually performed better than its lifted counterpart that used the bilinear lifted dynamics. This was especially pronounced when combined with the proposed JDMD approach, which makes sense given that the approach explicitly encourages the Jacobians to match the analytical ones, so they quickly converge to reasonable values with just a few training examples.
214
+
215
+ <table><tr><td>Friction ( $\mu$ )</td><td>0.0</td><td>0.1</td><td>0.2</td><td>0.3</td><td>0.4</td><td>0.5</td><td>0.6</td></tr><tr><td>Nominal</td><td>✓</td><td>✓</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td></tr><tr><td>EDMD</td><td>3</td><td>19</td><td>6</td><td>14</td><td>✘</td><td>✘</td><td>✘</td></tr><tr><td>JDMD</td><td>2</td><td>2</td><td>2</td><td>2</td><td>3</td><td>7</td><td>12</td></tr></table>
216
+
217
+ Table 3: Training trajectories required to stabilize the cartpole with the given friction coefficient
218
+
219
+ ### 6.5 Sensitivity to Model Mismatch
220
+
221
+ While we've introduced a significant amount of model mismatch in all of the examples so far, a natural argument against model-based methods is that they're only as good as the model is at capturing the salient dynamics of the system. We investigated the effect of increasing model mismatch by incrementally increasing the Coulomb friction coefficient between the cart and the floor for the cartpole stabilization task (recall that the nominal model assumed zero friction). The results are shown in Table 3. As expected, the number of training trajectories required to find a good stabilizing controller increases for the proposed approach. We achieved the results above by setting $\alpha = {0.01}$ , corresponding to a decreased confidence in our model, thereby placing greater weight on the experimental data. The standard EDMD approach always required more samples, and was unable to find a good enough model above friction values of 0.4. While this could likely be remedied by adjusting the nonlinear mapping $\phi$ , the proposed approach works well with the given bases. Note that the nominal MPC controller failed to stabilize the system above friction values of 0.1, so again, we demonstrate that we can improve MPC performance substantially with just a few training samples by combining analytical gradient information with data sampled from the true dynamics.
222
+
223
+ <table><tr><td>MPC</td><td>EDMD</td><td>JDMD</td></tr><tr><td>Lifted</td><td>17</td><td>15</td></tr><tr><td>Projected</td><td>18</td><td>2</td></tr></table>
224
+
225
+ Table 2: Training trajectories required to beat nominal MPC
226
+
227
+ ## 7 Limitations
228
+
229
+ As with most data-driven techniques, it is difficult to claim that our method will increase performance in all cases. It is possible that having an extremely poor prior model may hurt rather than help the training process. However, we found that even when the $\alpha$ parameter is extremely small (placing little weight on the Jacobians during the learning process), it still dramatically improves the sample efficiency over standard EDMD. It is also quite possible that the performance gaps between EDMD and JDMD shown here can be reduced through better selection of basis functions and better training data sets; however, given that the proposed approach converges to EDMD as $\alpha \rightarrow 0$ , we see no reason to not adopt the proposed methodology as simply tune $\alpha$ based on the confidence of the model and the quantity (and quality) of training data.
230
+
231
+ ## 8 Conclusions and Future Work
232
+
233
+ We have presented JDMD, a simple but powerful extension to EDMD that incorporates derivative information from an approximate prior model. We have tested JDMD in combination with a simple linear MPC control policy across a range of systems and tasks, and have found that the resulting combination can dramatically increase sample efficiency over EDMD, often improving over a nominal MPC policy with just a few sample trajectories. Substantial areas for future work remain: most notably, demonstrating the proposed pipeline on hardware. Additional directions include lifelong learning or adaptive control applications, combining simulated and real data through the use of modern differentiable physics engines [27, 28], residual dynamics learning, as well as the development of specialized numerical methods for solving nonlinear optimal control problems using the learned bilinear dynamics.
234
+
235
+ References
236
+
237
+ [1] F. Farshidian, M. Neunert, A. W. Winkler, G. Rey, and J. Buchli. An efficient optimal planning and control framework for quadrupedal locomotion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 93-100. doi:10.1109/ICRA.2017.7989016.
238
+
239
+ [2] S. Kuindersma, F. Permenter, and R. Tedrake. An efficiently solvable quadratic program for stabilizing dynamic locomotion. pages 2589-2594. ISSN 9781479936854. doi:10.1109/ICRA. 2014.6907230.
240
+
241
+ [3] M. Bjelonic, R. Grandia, O. Harley, C. Galliard, S. Zimmermann, and M. Hutter. Whole-Body MPC and Online Gait Sequence Generation for Wheeled-Legged Robots. pages 8388-8395. ISSN 9781665417143. doi:10.1109/IROS51168.2021.9636371.
242
+
243
+ [4] J. K. Subosits and J. C. Gerdes. From the racetrack to the road: Real-time trajectory replanning for autonomous driving. 4(2):309-320. doi:10.1109/TIV.2019.2904390.
244
+
245
+ [5] N. Karnchanachari, M. I. Valls, S. David Hoeller, and M. Hutter. Practical Reinforcement Learning For MPC: Learning from sparse objectives in under an hour on a real robot. pages 1-14. doi:10.3929/ETHZ-B-000404690. URL https://doi.org/10.3929/ ethz-b-000404690.
246
+
247
+ [6] D. . Hoeller, F. . Farshidian, M. Hutter, F. Farshidian, and D. Hoeller. Deep Value Model Predictive Control. 100:990-1004. doi:10.3929/ETHZ-B-000368961. URL https://doi.org/10.3929/ethz-b-000368961.
248
+
249
+ [7] Z. Li, X. Cheng, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath. Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots. 2021-May:2811- 2817. ISSN 9781728190778. doi:10.1109/ICRA48506.2021.9560769.
250
+
251
+ [8] A. Meduri, P. Shah, J. Viereck, M. Khadiv, I. Havoutis, and L. Righetti. BiConMP: A Nonlinear Model Predictive Control Framework for Whole Body Motion Planning. doi:10.48550/arxiv. 2201.07601. URL https://arxiv.org/abs/2201.07601v1.
252
+
253
+ [9] D. Bruder, X. Fu, and R. Vasudevan. Advantages of Bilinear Koopman Realizations for the Modeling and Control of Systems with Unknown Dynamics. 6(3):4369-4376. doi:10.1109/ LRA.2021.3068117.
254
+
255
+ [10] M. Korda and I. Mezić. Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. 93:149-160. doi:10.1016/j.automatica.2018.03.046. URL https://doi.org/10.1016/j.automatica.2018.03.046.
256
+
257
+ [11] C. Folkestad, D. Pastor, and J. W. Burdick. Episodic Koopman Learning of Nonlinear Robot Dynamics with Application to Fast Multirotor Landing. pages 9216-9222. ISSN 9781728173955. doi:10.1109/ICRA40945.2020.9197510.
258
+
259
+ [12] H. J. Suh and R. Tedrake. The Surprising Effectiveness of Linear Models for Visual Foresight in Object Pile Manipulation. 17:347-363. doi:10.48550/arxiv.2002.09093. URL https: //arxiv.org/abs/2002.09093v3.
260
+
261
+ [13] C. Folkestad and J. W. Burdick. Koopman NMPC: Koopman-based Learning and Nonlinear Model Predictive Control of Control-affine Systems. In Proceedings - IEEE International Conference on Robotics and Automation, volume 2021-May, pages 7350-7356. Institute of Electrical and Electronics Engineers Inc. ISBN 978-1-72819-077-8. doi:10.1109/ICRA48506. 2021.9562002.
262
+
263
+ [14] C. Folkestad, S. X. Wei, and J. W. Burdick. Quadrotor Trajectory Tracking with Learned Dynamics: Joint Koopman-based Learning of System Models and Function Dictionaries. URL http://arxiv.org/abs/2110.10341.
264
+
265
+ [15] A. Narasingam, J. Sang, and I. Kwon. Data-driven feedback stabilization of nonlinear systems: Koopman-based model predictive control. pages 1-12.
266
+
267
+ [16] SINDy with Control: A Tutorial. URL https://github.com/urban-fasel/SEIR.
268
+
269
+ [17] J. L. Proctor, S. L. Brunton, and J. Nathan Kutz. Generalizing koopman theory to allow for inputs and control. 17(1):909-930. doi:10.1137/16M1062296. URL http://www.siam.org/journals/siads/17-1/M106229.html.
270
+
271
+ [18] M. O. Williams, I. G. Kevrekidis, and C. W. Rowley. A Data-Driven Approximation of the Koopman Operator: Extending Dynamic Mode Decomposition. 25(6):1307-1346. doi:10.1007/S00332-015-9258-5/FIGURES/14. URL https://link.springer.com/ article/10.1007/s00332-015-9258-5.
272
+
273
+ [19] D. C.-L. Fong and M. Saunders. LSMR: An Iterative Algorithm for Sparse Least-Squares Problems. 33(5):2950-2971. ISSN 1064-8275. doi:10.1137/10079687X. URL https: //epubs.siam.org/doi/abs/10.1137/10079687X.
274
+
275
+ [20] C. C. Paige and M. A. Saunders. LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares. 8(1):43-71. ISSN 0098-3500, 1557-7295. doi:10.1145/355984.355989. URL https://dl.acm.org/doi/10.1145/355984.355989.
276
+
277
+ [21] P. Strobach. Recursive Least-Squares Using the QR Decomposition. In P. Strobach, editor, Linear Prediction Theory: A Mathematical Basis for Adaptive Systems, Springer Series in Information Sciences, pages 63-101. Springer. ISBN 978-3-642-75206-3. doi:10.1007/ 978-3-642-75206-3_4. URL https://doi.org/10.1007/978-3-642-75206-3_ 4.
278
+
279
+ [22] A. Sayed and T. Kailath. Recursive Least-Squares Adaptive Filters, volume 20094251 of Electrical Engineering Handbook, pages 1-40. CRC Press. ISBN 978-1-4200-4606-9 978-1-4200- 4607-6. doi:10.1201/9781420046076-c21. URL http://www.crcnetbase.com/doi/ abs/10.1201/9781420046076-c21.
280
+
281
+ [23] A. Ghimikar and S. Alexander. Stable recursive least squares filtering using an inverse QR decomposition. In International Conference on Acoustics, Speech, and Signal Processing, pages 1623-1626 vol.3. doi:10.1109/ICASSP.1990.115736.
282
+
283
+ [24] T. A. Howell, B. E. Jackson, and Z. Manchester. ALTRO: A Fast Solver for Constrained Trajectory Optimization. pages 7674-7679. ISSN 9781728140049. doi:10.1109/IROS40897. 2019.8967788.
284
+
285
+ [25] B. E. Jackson, T. Punnoose, D. Neamati, K. Tracy, R. Jitosho, and Z. Manchester. ALTRO-C: A Fast Solver for Conic Model-Predictive Control; ALTRO-C: A Fast Solver for Conic Model-Predictive Control. ISSN 9781728190778. doi:10.1109/ICRA48506.2021.9561438. URL https://github.com/.
286
+
287
+ [26] Z. Manchester, J. Lipton, R. Wood, and S. Kuindersma. A Variable Forward-Sweep Wing Design for Enhanced Perching in Micro Aerial Vehicles. In AIAA Aerospace Sciences Meeting. URL https://rexlab.stanford.edu/papers/Morphing_Wing.pdf.
288
+
289
+ [27] T. A. Howell, S. Le Cleac', J. Z. Kolter, M. Schwager, and Z. Manchester. Dojo: A Differentiable Simulator for Robotics.
290
+
291
+ [28] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033. doi:10.1109/IROS.2012.6386109.